Test Report: KVM_Linux_crio 18233

cb33a82ee153a53ef0d3c63c71993fcdc3925c1f:2024-04-01:33841

Failed tests (31/319)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 155.08
46 TestAddons/parallel/CloudSpanner 7.77
53 TestAddons/StoppedEnableDisable 154.46
124 TestFunctional/parallel/ImageCommands/ImageBuild 5.75
172 TestMultiControlPlane/serial/StopSecondaryNode 142.39
174 TestMultiControlPlane/serial/RestartSecondaryNode 58.98
176 TestMultiControlPlane/serial/RestartClusterKeepsNodes 373.39
179 TestMultiControlPlane/serial/StopCluster 142
239 TestMultiNode/serial/RestartKeepsNodes 308.43
241 TestMultiNode/serial/StopMultiNode 141.47
248 TestPreload 274.43
256 TestKubernetesUpgrade 452.11
293 TestPause/serial/SecondStartNoReconfiguration 65.3
327 TestStartStop/group/old-k8s-version/serial/FirstStart 284.91
347 TestStartStop/group/no-preload/serial/Stop 139.15
350 TestStartStop/group/embed-certs/serial/Stop 139.14
353 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.05
354 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
356 TestStartStop/group/old-k8s-version/serial/DeployApp 0.49
357 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 104.96
358 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
360 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
364 TestStartStop/group/old-k8s-version/serial/SecondStart 720.15
365 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.46
366 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.48
367 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.48
368 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.62
369 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 414.58
370 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 446.04
371 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 212.95
372 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 144.88
TestAddons/parallel/Ingress (155.08s)
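The failing step in the log below is the in-VM HTTP probe at addons_test.go:262: the ssh command propagated curl's exit status 28 (operation timed out), meaning the request to http://127.0.0.1/ with Host: nginx.example.com never got a response from the ingress before curl gave up. A minimal manual re-run of that probe, sketched on the assumption that the addons-881427 profile from this run is still available (the -v and --max-time flags are added for illustration and are not part of the test):

	# Repeat the probe the test performs; exit status 28 again means curl timed out.
	out/minikube-linux-amd64 -p addons-881427 ssh "curl -sv --max-time 60 http://127.0.0.1/ -H 'Host: nginx.example.com'"

	# Confirm the ingress-nginx controller pod is Running and Ready on the node.
	kubectl --context addons-881427 -n ingress-nginx get pods -o wide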

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-881427 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-881427 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-881427 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ae675593-97ae-4a01-8cae-475396963c4b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ae675593-97ae-4a01-8cae-475396963c4b] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004566247s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-881427 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-881427 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.997886688s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-881427 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-881427 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.214
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-881427 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-881427 addons disable ingress-dns --alsologtostderr -v=1: (1.979505198s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-881427 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-881427 addons disable ingress --alsologtostderr -v=1: (7.891986689s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-881427 -n addons-881427
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-881427 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-881427 logs -n 25: (1.413052163s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p download-only-040534                                                                     | download-only-040534 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC | 01 Apr 24 18:06 UTC |
	| delete  | -p download-only-794994                                                                     | download-only-794994 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC | 01 Apr 24 18:06 UTC |
	| delete  | -p download-only-591417                                                                     | download-only-591417 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC | 01 Apr 24 18:06 UTC |
	| delete  | -p download-only-040534                                                                     | download-only-040534 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC | 01 Apr 24 18:06 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-431770 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC |                     |
	|         | binary-mirror-431770                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |                |                     |                     |
	|         | http://127.0.0.1:37617                                                                      |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-431770                                                                     | binary-mirror-431770 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC | 01 Apr 24 18:06 UTC |
	| addons  | disable dashboard -p                                                                        | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC |                     |
	|         | addons-881427                                                                               |                      |         |                |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC |                     |
	|         | addons-881427                                                                               |                      |         |                |                     |                     |
	| start   | -p addons-881427 --wait=true                                                                | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC | 01 Apr 24 18:08 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |                |                     |                     |
	|         | --addons=registry                                                                           |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |                |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |                |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |                |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |                |                     |                     |
	| addons  | addons-881427 addons                                                                        | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:08 UTC | 01 Apr 24 18:08 UTC |
	|         | disable metrics-server                                                                      |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-881427 addons disable                                                                | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:09 UTC | 01 Apr 24 18:09 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| ip      | addons-881427 ip                                                                            | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:09 UTC | 01 Apr 24 18:09 UTC |
	| addons  | addons-881427 addons disable                                                                | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:09 UTC | 01 Apr 24 18:09 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| ssh     | addons-881427 ssh cat                                                                       | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:09 UTC | 01 Apr 24 18:09 UTC |
	|         | /opt/local-path-provisioner/pvc-de16cdd6-519d-46fd-98d1-b0afa2a23e43_default_test-pvc/file1 |                      |         |                |                     |                     |
	| addons  | addons-881427 addons disable                                                                | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:09 UTC | 01 Apr 24 18:09 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:09 UTC | 01 Apr 24 18:09 UTC |
	|         | -p addons-881427                                                                            |                      |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:09 UTC |                     |
	|         | addons-881427                                                                               |                      |         |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:09 UTC | 01 Apr 24 18:09 UTC |
	|         | -p addons-881427                                                                            |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:09 UTC | 01 Apr 24 18:09 UTC |
	|         | addons-881427                                                                               |                      |         |                |                     |                     |
	| ssh     | addons-881427 ssh curl -s                                                                   | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:09 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |                |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |                |                     |                     |
	| addons  | addons-881427 addons                                                                        | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:09 UTC | 01 Apr 24 18:09 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-881427 addons                                                                        | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:09 UTC | 01 Apr 24 18:09 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ip      | addons-881427 ip                                                                            | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:11 UTC | 01 Apr 24 18:11 UTC |
	| addons  | addons-881427 addons disable                                                                | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:11 UTC | 01 Apr 24 18:11 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | addons-881427 addons disable                                                                | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:11 UTC | 01 Apr 24 18:11 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 18:06:31
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 18:06:31.336010   18511 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:06:31.336125   18511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:06:31.336134   18511 out.go:304] Setting ErrFile to fd 2...
	I0401 18:06:31.336137   18511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:06:31.336314   18511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:06:31.336904   18511 out.go:298] Setting JSON to false
	I0401 18:06:31.337701   18511 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2943,"bootTime":1711991848,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 18:06:31.337757   18511 start.go:139] virtualization: kvm guest
	I0401 18:06:31.340030   18511 out.go:177] * [addons-881427] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 18:06:31.341811   18511 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 18:06:31.341817   18511 notify.go:220] Checking for updates...
	I0401 18:06:31.343264   18511 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 18:06:31.344735   18511 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 18:06:31.346051   18511 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:06:31.347365   18511 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 18:06:31.348652   18511 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 18:06:31.349959   18511 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 18:06:31.380779   18511 out.go:177] * Using the kvm2 driver based on user configuration
	I0401 18:06:31.382111   18511 start.go:297] selected driver: kvm2
	I0401 18:06:31.382124   18511 start.go:901] validating driver "kvm2" against <nil>
	I0401 18:06:31.382135   18511 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 18:06:31.383029   18511 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 18:06:31.383129   18511 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 18:06:31.397195   18511 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 18:06:31.397246   18511 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 18:06:31.397511   18511 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 18:06:31.397587   18511 cni.go:84] Creating CNI manager for ""
	I0401 18:06:31.397604   18511 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 18:06:31.397618   18511 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0401 18:06:31.397698   18511 start.go:340] cluster config:
	{Name:addons-881427 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-881427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:06:31.397855   18511 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 18:06:31.399782   18511 out.go:177] * Starting "addons-881427" primary control-plane node in "addons-881427" cluster
	I0401 18:06:31.401116   18511 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 18:06:31.401148   18511 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0401 18:06:31.401160   18511 cache.go:56] Caching tarball of preloaded images
	I0401 18:06:31.401250   18511 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 18:06:31.401265   18511 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0401 18:06:31.401591   18511 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/config.json ...
	I0401 18:06:31.401612   18511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/config.json: {Name:mk87f7613d29992512bc1caf86d7db9ba76178bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:06:31.401751   18511 start.go:360] acquireMachinesLock for addons-881427: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 18:06:31.401807   18511 start.go:364] duration metric: took 36.308µs to acquireMachinesLock for "addons-881427"
	I0401 18:06:31.401825   18511 start.go:93] Provisioning new machine with config: &{Name:addons-881427 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:addons-881427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 18:06:31.401882   18511 start.go:125] createHost starting for "" (driver="kvm2")
	I0401 18:06:31.403540   18511 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0401 18:06:31.403651   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:06:31.403691   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:06:31.417980   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36879
	I0401 18:06:31.419320   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:06:31.419885   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:06:31.419907   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:06:31.420205   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:06:31.420394   18511 main.go:141] libmachine: (addons-881427) Calling .GetMachineName
	I0401 18:06:31.420534   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:06:31.420675   18511 start.go:159] libmachine.API.Create for "addons-881427" (driver="kvm2")
	I0401 18:06:31.420698   18511 client.go:168] LocalClient.Create starting
	I0401 18:06:31.420737   18511 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem
	I0401 18:06:31.674150   18511 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem
	I0401 18:06:31.838327   18511 main.go:141] libmachine: Running pre-create checks...
	I0401 18:06:31.838350   18511 main.go:141] libmachine: (addons-881427) Calling .PreCreateCheck
	I0401 18:06:31.838904   18511 main.go:141] libmachine: (addons-881427) Calling .GetConfigRaw
	I0401 18:06:31.839386   18511 main.go:141] libmachine: Creating machine...
	I0401 18:06:31.839401   18511 main.go:141] libmachine: (addons-881427) Calling .Create
	I0401 18:06:31.839579   18511 main.go:141] libmachine: (addons-881427) Creating KVM machine...
	I0401 18:06:31.840943   18511 main.go:141] libmachine: (addons-881427) DBG | found existing default KVM network
	I0401 18:06:31.841749   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:31.841556   18533 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015330}
	I0401 18:06:31.841776   18511 main.go:141] libmachine: (addons-881427) DBG | created network xml: 
	I0401 18:06:31.841787   18511 main.go:141] libmachine: (addons-881427) DBG | <network>
	I0401 18:06:31.841792   18511 main.go:141] libmachine: (addons-881427) DBG |   <name>mk-addons-881427</name>
	I0401 18:06:31.841805   18511 main.go:141] libmachine: (addons-881427) DBG |   <dns enable='no'/>
	I0401 18:06:31.841813   18511 main.go:141] libmachine: (addons-881427) DBG |   
	I0401 18:06:31.841824   18511 main.go:141] libmachine: (addons-881427) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0401 18:06:31.841832   18511 main.go:141] libmachine: (addons-881427) DBG |     <dhcp>
	I0401 18:06:31.841843   18511 main.go:141] libmachine: (addons-881427) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0401 18:06:31.841852   18511 main.go:141] libmachine: (addons-881427) DBG |     </dhcp>
	I0401 18:06:31.841858   18511 main.go:141] libmachine: (addons-881427) DBG |   </ip>
	I0401 18:06:31.841862   18511 main.go:141] libmachine: (addons-881427) DBG |   
	I0401 18:06:31.841867   18511 main.go:141] libmachine: (addons-881427) DBG | </network>
	I0401 18:06:31.841876   18511 main.go:141] libmachine: (addons-881427) DBG | 
	I0401 18:06:31.847141   18511 main.go:141] libmachine: (addons-881427) DBG | trying to create private KVM network mk-addons-881427 192.168.39.0/24...
	I0401 18:06:31.909113   18511 main.go:141] libmachine: (addons-881427) Setting up store path in /home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427 ...
	I0401 18:06:31.909135   18511 main.go:141] libmachine: (addons-881427) DBG | private KVM network mk-addons-881427 192.168.39.0/24 created
	I0401 18:06:31.909150   18511 main.go:141] libmachine: (addons-881427) Building disk image from file:///home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso
	I0401 18:06:31.909192   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:31.909058   18533 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:06:31.909226   18511 main.go:141] libmachine: (addons-881427) Downloading /home/jenkins/minikube-integration/18233-10493/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0401 18:06:32.131505   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:32.131386   18533 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa...
	I0401 18:06:32.202759   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:32.202634   18533 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/addons-881427.rawdisk...
	I0401 18:06:32.202786   18511 main.go:141] libmachine: (addons-881427) DBG | Writing magic tar header
	I0401 18:06:32.202797   18511 main.go:141] libmachine: (addons-881427) DBG | Writing SSH key tar header
	I0401 18:06:32.202805   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:32.202736   18533 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427 ...
	I0401 18:06:32.202817   18511 main.go:141] libmachine: (addons-881427) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427
	I0401 18:06:32.202881   18511 main.go:141] libmachine: (addons-881427) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427 (perms=drwx------)
	I0401 18:06:32.202905   18511 main.go:141] libmachine: (addons-881427) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube/machines
	I0401 18:06:32.202916   18511 main.go:141] libmachine: (addons-881427) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube/machines (perms=drwxr-xr-x)
	I0401 18:06:32.202926   18511 main.go:141] libmachine: (addons-881427) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:06:32.202940   18511 main.go:141] libmachine: (addons-881427) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493
	I0401 18:06:32.202951   18511 main.go:141] libmachine: (addons-881427) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0401 18:06:32.202964   18511 main.go:141] libmachine: (addons-881427) DBG | Checking permissions on dir: /home/jenkins
	I0401 18:06:32.202976   18511 main.go:141] libmachine: (addons-881427) DBG | Checking permissions on dir: /home
	I0401 18:06:32.202987   18511 main.go:141] libmachine: (addons-881427) DBG | Skipping /home - not owner
	I0401 18:06:32.203002   18511 main.go:141] libmachine: (addons-881427) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube (perms=drwxr-xr-x)
	I0401 18:06:32.203018   18511 main.go:141] libmachine: (addons-881427) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493 (perms=drwxrwxr-x)
	I0401 18:06:32.203031   18511 main.go:141] libmachine: (addons-881427) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0401 18:06:32.203043   18511 main.go:141] libmachine: (addons-881427) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0401 18:06:32.203057   18511 main.go:141] libmachine: (addons-881427) Creating domain...
	I0401 18:06:32.203964   18511 main.go:141] libmachine: (addons-881427) define libvirt domain using xml: 
	I0401 18:06:32.203996   18511 main.go:141] libmachine: (addons-881427) <domain type='kvm'>
	I0401 18:06:32.204004   18511 main.go:141] libmachine: (addons-881427)   <name>addons-881427</name>
	I0401 18:06:32.204010   18511 main.go:141] libmachine: (addons-881427)   <memory unit='MiB'>4000</memory>
	I0401 18:06:32.204019   18511 main.go:141] libmachine: (addons-881427)   <vcpu>2</vcpu>
	I0401 18:06:32.204027   18511 main.go:141] libmachine: (addons-881427)   <features>
	I0401 18:06:32.204036   18511 main.go:141] libmachine: (addons-881427)     <acpi/>
	I0401 18:06:32.204047   18511 main.go:141] libmachine: (addons-881427)     <apic/>
	I0401 18:06:32.204055   18511 main.go:141] libmachine: (addons-881427)     <pae/>
	I0401 18:06:32.204061   18511 main.go:141] libmachine: (addons-881427)     
	I0401 18:06:32.204068   18511 main.go:141] libmachine: (addons-881427)   </features>
	I0401 18:06:32.204075   18511 main.go:141] libmachine: (addons-881427)   <cpu mode='host-passthrough'>
	I0401 18:06:32.204080   18511 main.go:141] libmachine: (addons-881427)   
	I0401 18:06:32.204092   18511 main.go:141] libmachine: (addons-881427)   </cpu>
	I0401 18:06:32.204101   18511 main.go:141] libmachine: (addons-881427)   <os>
	I0401 18:06:32.204112   18511 main.go:141] libmachine: (addons-881427)     <type>hvm</type>
	I0401 18:06:32.204125   18511 main.go:141] libmachine: (addons-881427)     <boot dev='cdrom'/>
	I0401 18:06:32.204137   18511 main.go:141] libmachine: (addons-881427)     <boot dev='hd'/>
	I0401 18:06:32.204146   18511 main.go:141] libmachine: (addons-881427)     <bootmenu enable='no'/>
	I0401 18:06:32.204163   18511 main.go:141] libmachine: (addons-881427)   </os>
	I0401 18:06:32.204171   18511 main.go:141] libmachine: (addons-881427)   <devices>
	I0401 18:06:32.204178   18511 main.go:141] libmachine: (addons-881427)     <disk type='file' device='cdrom'>
	I0401 18:06:32.204187   18511 main.go:141] libmachine: (addons-881427)       <source file='/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/boot2docker.iso'/>
	I0401 18:06:32.204192   18511 main.go:141] libmachine: (addons-881427)       <target dev='hdc' bus='scsi'/>
	I0401 18:06:32.204196   18511 main.go:141] libmachine: (addons-881427)       <readonly/>
	I0401 18:06:32.204202   18511 main.go:141] libmachine: (addons-881427)     </disk>
	I0401 18:06:32.204211   18511 main.go:141] libmachine: (addons-881427)     <disk type='file' device='disk'>
	I0401 18:06:32.204223   18511 main.go:141] libmachine: (addons-881427)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0401 18:06:32.204244   18511 main.go:141] libmachine: (addons-881427)       <source file='/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/addons-881427.rawdisk'/>
	I0401 18:06:32.204258   18511 main.go:141] libmachine: (addons-881427)       <target dev='hda' bus='virtio'/>
	I0401 18:06:32.204265   18511 main.go:141] libmachine: (addons-881427)     </disk>
	I0401 18:06:32.204279   18511 main.go:141] libmachine: (addons-881427)     <interface type='network'>
	I0401 18:06:32.204287   18511 main.go:141] libmachine: (addons-881427)       <source network='mk-addons-881427'/>
	I0401 18:06:32.204296   18511 main.go:141] libmachine: (addons-881427)       <model type='virtio'/>
	I0401 18:06:32.204315   18511 main.go:141] libmachine: (addons-881427)     </interface>
	I0401 18:06:32.204324   18511 main.go:141] libmachine: (addons-881427)     <interface type='network'>
	I0401 18:06:32.204329   18511 main.go:141] libmachine: (addons-881427)       <source network='default'/>
	I0401 18:06:32.204334   18511 main.go:141] libmachine: (addons-881427)       <model type='virtio'/>
	I0401 18:06:32.204361   18511 main.go:141] libmachine: (addons-881427)     </interface>
	I0401 18:06:32.204386   18511 main.go:141] libmachine: (addons-881427)     <serial type='pty'>
	I0401 18:06:32.204402   18511 main.go:141] libmachine: (addons-881427)       <target port='0'/>
	I0401 18:06:32.204415   18511 main.go:141] libmachine: (addons-881427)     </serial>
	I0401 18:06:32.204426   18511 main.go:141] libmachine: (addons-881427)     <console type='pty'>
	I0401 18:06:32.204447   18511 main.go:141] libmachine: (addons-881427)       <target type='serial' port='0'/>
	I0401 18:06:32.204472   18511 main.go:141] libmachine: (addons-881427)     </console>
	I0401 18:06:32.204489   18511 main.go:141] libmachine: (addons-881427)     <rng model='virtio'>
	I0401 18:06:32.204499   18511 main.go:141] libmachine: (addons-881427)       <backend model='random'>/dev/random</backend>
	I0401 18:06:32.204524   18511 main.go:141] libmachine: (addons-881427)     </rng>
	I0401 18:06:32.204536   18511 main.go:141] libmachine: (addons-881427)     
	I0401 18:06:32.204543   18511 main.go:141] libmachine: (addons-881427)     
	I0401 18:06:32.204557   18511 main.go:141] libmachine: (addons-881427)   </devices>
	I0401 18:06:32.204565   18511 main.go:141] libmachine: (addons-881427) </domain>
	I0401 18:06:32.204586   18511 main.go:141] libmachine: (addons-881427) 
	I0401 18:06:32.210422   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:ca:06:61 in network default
	I0401 18:06:32.210973   18511 main.go:141] libmachine: (addons-881427) Ensuring networks are active...
	I0401 18:06:32.210993   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:32.211788   18511 main.go:141] libmachine: (addons-881427) Ensuring network default is active
	I0401 18:06:32.212125   18511 main.go:141] libmachine: (addons-881427) Ensuring network mk-addons-881427 is active
	I0401 18:06:32.212756   18511 main.go:141] libmachine: (addons-881427) Getting domain xml...
	I0401 18:06:32.213674   18511 main.go:141] libmachine: (addons-881427) Creating domain...
	I0401 18:06:33.603244   18511 main.go:141] libmachine: (addons-881427) Waiting to get IP...
	I0401 18:06:33.604106   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:33.604487   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:33.604541   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:33.604469   18533 retry.go:31] will retry after 269.634126ms: waiting for machine to come up
	I0401 18:06:33.876114   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:33.876608   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:33.876630   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:33.876568   18533 retry.go:31] will retry after 251.291846ms: waiting for machine to come up
	I0401 18:06:34.128922   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:34.129395   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:34.129428   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:34.129359   18533 retry.go:31] will retry after 376.203226ms: waiting for machine to come up
	I0401 18:06:34.506843   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:34.507208   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:34.507230   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:34.507167   18533 retry.go:31] will retry after 566.832821ms: waiting for machine to come up
	I0401 18:06:35.076133   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:35.076592   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:35.076616   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:35.076547   18533 retry.go:31] will retry after 563.222713ms: waiting for machine to come up
	I0401 18:06:35.641330   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:35.641730   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:35.641751   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:35.641696   18533 retry.go:31] will retry after 580.091131ms: waiting for machine to come up
	I0401 18:06:36.223563   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:36.223941   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:36.223961   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:36.223898   18533 retry.go:31] will retry after 918.869733ms: waiting for machine to come up
	I0401 18:06:37.144830   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:37.145276   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:37.145309   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:37.145231   18533 retry.go:31] will retry after 1.003351501s: waiting for machine to come up
	I0401 18:06:38.150466   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:38.150866   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:38.150892   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:38.150829   18533 retry.go:31] will retry after 1.861805809s: waiting for machine to come up
	I0401 18:06:40.013871   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:40.014294   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:40.014360   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:40.014235   18533 retry.go:31] will retry after 1.635650648s: waiting for machine to come up
	I0401 18:06:41.651847   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:41.652367   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:41.652390   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:41.652341   18533 retry.go:31] will retry after 2.723353102s: waiting for machine to come up
	I0401 18:06:44.379239   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:44.379762   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:44.379798   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:44.379716   18533 retry.go:31] will retry after 3.188371174s: waiting for machine to come up
	I0401 18:06:47.569608   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:47.570021   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:47.570047   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:47.569969   18533 retry.go:31] will retry after 3.319364247s: waiting for machine to come up
	I0401 18:06:50.890567   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:50.890960   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:50.891011   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:50.890935   18533 retry.go:31] will retry after 5.529010547s: waiting for machine to come up
	I0401 18:06:56.425077   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:56.425557   18511 main.go:141] libmachine: (addons-881427) Found IP for machine: 192.168.39.214
	I0401 18:06:56.425576   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has current primary IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:56.425582   18511 main.go:141] libmachine: (addons-881427) Reserving static IP address...
	I0401 18:06:56.425997   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find host DHCP lease matching {name: "addons-881427", mac: "52:54:00:4b:04:cb", ip: "192.168.39.214"} in network mk-addons-881427
	I0401 18:06:56.495162   18511 main.go:141] libmachine: (addons-881427) DBG | Getting to WaitForSSH function...
	I0401 18:06:56.495192   18511 main.go:141] libmachine: (addons-881427) Reserved static IP address: 192.168.39.214
	I0401 18:06:56.495205   18511 main.go:141] libmachine: (addons-881427) Waiting for SSH to be available...
	I0401 18:06:56.497821   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:56.498243   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:56.498272   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:56.498483   18511 main.go:141] libmachine: (addons-881427) DBG | Using SSH client type: external
	I0401 18:06:56.498510   18511 main.go:141] libmachine: (addons-881427) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa (-rw-------)
	I0401 18:06:56.498531   18511 main.go:141] libmachine: (addons-881427) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 18:06:56.498541   18511 main.go:141] libmachine: (addons-881427) DBG | About to run SSH command:
	I0401 18:06:56.498549   18511 main.go:141] libmachine: (addons-881427) DBG | exit 0
	I0401 18:06:56.629751   18511 main.go:141] libmachine: (addons-881427) DBG | SSH cmd err, output: <nil>: 
	I0401 18:06:56.630104   18511 main.go:141] libmachine: (addons-881427) KVM machine creation complete!
	I0401 18:06:56.630332   18511 main.go:141] libmachine: (addons-881427) Calling .GetConfigRaw
	I0401 18:06:56.630940   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:06:56.631133   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:06:56.631288   18511 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0401 18:06:56.631381   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:06:56.632681   18511 main.go:141] libmachine: Detecting operating system of created instance...
	I0401 18:06:56.632697   18511 main.go:141] libmachine: Waiting for SSH to be available...
	I0401 18:06:56.632702   18511 main.go:141] libmachine: Getting to WaitForSSH function...
	I0401 18:06:56.632708   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:06:56.634803   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:56.635105   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:56.635135   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:56.635248   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:06:56.635415   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:56.635536   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:56.635646   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:06:56.635843   18511 main.go:141] libmachine: Using SSH client type: native
	I0401 18:06:56.636061   18511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0401 18:06:56.636074   18511 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0401 18:06:56.733194   18511 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 18:06:56.733222   18511 main.go:141] libmachine: Detecting the provisioner...
	I0401 18:06:56.733232   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:06:56.735785   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:56.736142   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:56.736175   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:56.736331   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:06:56.736539   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:56.736699   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:56.736843   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:06:56.737032   18511 main.go:141] libmachine: Using SSH client type: native
	I0401 18:06:56.737240   18511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0401 18:06:56.737254   18511 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0401 18:06:56.834783   18511 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0401 18:06:56.834861   18511 main.go:141] libmachine: found compatible host: buildroot
	I0401 18:06:56.834872   18511 main.go:141] libmachine: Provisioning with buildroot...
	I0401 18:06:56.834884   18511 main.go:141] libmachine: (addons-881427) Calling .GetMachineName
	I0401 18:06:56.835099   18511 buildroot.go:166] provisioning hostname "addons-881427"
	I0401 18:06:56.835119   18511 main.go:141] libmachine: (addons-881427) Calling .GetMachineName
	I0401 18:06:56.835322   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:06:56.839440   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:56.839903   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:56.839933   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:56.840108   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:06:56.840296   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:56.840472   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:56.840626   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:06:56.840787   18511 main.go:141] libmachine: Using SSH client type: native
	I0401 18:06:56.840987   18511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0401 18:06:56.841000   18511 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-881427 && echo "addons-881427" | sudo tee /etc/hostname
	I0401 18:06:56.954022   18511 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-881427
	
	I0401 18:06:56.954048   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:06:56.956470   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:56.956784   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:56.956821   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:56.956952   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:06:56.957144   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:56.957303   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:56.957441   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:06:56.957576   18511 main.go:141] libmachine: Using SSH client type: native
	I0401 18:06:56.957789   18511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0401 18:06:56.957814   18511 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-881427' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-881427/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-881427' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 18:06:57.069089   18511 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 18:06:57.069120   18511 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 18:06:57.069162   18511 buildroot.go:174] setting up certificates
	I0401 18:06:57.069172   18511 provision.go:84] configureAuth start
	I0401 18:06:57.069183   18511 main.go:141] libmachine: (addons-881427) Calling .GetMachineName
	I0401 18:06:57.069466   18511 main.go:141] libmachine: (addons-881427) Calling .GetIP
	I0401 18:06:57.071861   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.072158   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:57.072180   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.072319   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:06:57.074467   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.074781   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:57.074822   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.074917   18511 provision.go:143] copyHostCerts
	I0401 18:06:57.075069   18511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 18:06:57.075232   18511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 18:06:57.075365   18511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 18:06:57.075430   18511 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.addons-881427 san=[127.0.0.1 192.168.39.214 addons-881427 localhost minikube]
	I0401 18:06:57.326408   18511 provision.go:177] copyRemoteCerts
	I0401 18:06:57.326469   18511 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 18:06:57.326494   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:06:57.329132   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.329606   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:57.329634   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.329810   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:06:57.330018   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:57.330206   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:06:57.330330   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:06:57.408714   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 18:06:57.438543   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0401 18:06:57.466550   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 18:06:57.492919   18511 provision.go:87] duration metric: took 423.731573ms to configureAuth
	I0401 18:06:57.492949   18511 buildroot.go:189] setting minikube options for container-runtime
	I0401 18:06:57.493165   18511 config.go:182] Loaded profile config "addons-881427": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:06:57.493246   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:06:57.495893   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.496265   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:57.496293   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.496447   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:06:57.496636   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:57.496804   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:57.496939   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:06:57.497115   18511 main.go:141] libmachine: Using SSH client type: native
	I0401 18:06:57.497274   18511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0401 18:06:57.497288   18511 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 18:06:57.760942   18511 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
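The %!s(MISSING) in the command above is an artifact of minikube's logger (the shell string carries a literal %s format verb); the command as executed presumably amounts to the following sketch:

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio

It writes the insecure-registry flag for the in-cluster service CIDR into CRI-O's sysconfig drop-in and restarts the runtime, which matches the contents echoed back in the output above.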
	
	I0401 18:06:57.760966   18511 main.go:141] libmachine: Checking connection to Docker...
	I0401 18:06:57.760975   18511 main.go:141] libmachine: (addons-881427) Calling .GetURL
	I0401 18:06:57.762135   18511 main.go:141] libmachine: (addons-881427) DBG | Using libvirt version 6000000
	I0401 18:06:57.763972   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.764292   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:57.764318   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.764441   18511 main.go:141] libmachine: Docker is up and running!
	I0401 18:06:57.764458   18511 main.go:141] libmachine: Reticulating splines...
	I0401 18:06:57.764465   18511 client.go:171] duration metric: took 26.343759362s to LocalClient.Create
	I0401 18:06:57.764484   18511 start.go:167] duration metric: took 26.343809774s to libmachine.API.Create "addons-881427"
	I0401 18:06:57.764500   18511 start.go:293] postStartSetup for "addons-881427" (driver="kvm2")
	I0401 18:06:57.764512   18511 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 18:06:57.764527   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:06:57.764731   18511 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 18:06:57.764749   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:06:57.766751   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.767077   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:57.767102   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.767195   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:06:57.767348   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:57.767528   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:06:57.767647   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:06:57.848902   18511 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 18:06:57.853610   18511 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 18:06:57.853630   18511 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 18:06:57.853727   18511 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 18:06:57.853763   18511 start.go:296] duration metric: took 89.253531ms for postStartSetup
	I0401 18:06:57.853809   18511 main.go:141] libmachine: (addons-881427) Calling .GetConfigRaw
	I0401 18:06:57.854353   18511 main.go:141] libmachine: (addons-881427) Calling .GetIP
	I0401 18:06:57.856573   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.856865   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:57.856910   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.857089   18511 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/config.json ...
	I0401 18:06:57.857258   18511 start.go:128] duration metric: took 26.45536674s to createHost
	I0401 18:06:57.857279   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:06:57.859364   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.859674   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:57.859696   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.859804   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:06:57.859981   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:57.860176   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:57.860359   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:06:57.860541   18511 main.go:141] libmachine: Using SSH client type: native
	I0401 18:06:57.860736   18511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0401 18:06:57.860751   18511 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 18:06:57.962928   18511 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711994817.935069932
	
	I0401 18:06:57.962946   18511 fix.go:216] guest clock: 1711994817.935069932
	I0401 18:06:57.962954   18511 fix.go:229] Guest: 2024-04-01 18:06:57.935069932 +0000 UTC Remote: 2024-04-01 18:06:57.857269466 +0000 UTC m=+26.565030106 (delta=77.800466ms)
	I0401 18:06:57.962982   18511 fix.go:200] guest clock delta is within tolerance: 77.800466ms
	I0401 18:06:57.962990   18511 start.go:83] releasing machines lock for "addons-881427", held for 26.561171605s
	I0401 18:06:57.963012   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:06:57.963268   18511 main.go:141] libmachine: (addons-881427) Calling .GetIP
	I0401 18:06:57.965732   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.966118   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:57.966143   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.966317   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:06:57.966822   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:06:57.967003   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:06:57.967073   18511 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 18:06:57.967116   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:06:57.967207   18511 ssh_runner.go:195] Run: cat /version.json
	I0401 18:06:57.967231   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:06:57.969873   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.969944   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.970239   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:57.970265   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.970294   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:57.970314   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.970411   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:06:57.970559   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:06:57.970575   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:57.970700   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:57.970782   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:06:57.970858   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:06:57.970949   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:06:57.970962   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:06:58.096265   18511 ssh_runner.go:195] Run: systemctl --version
	I0401 18:06:58.103073   18511 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 18:06:58.264811   18511 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 18:06:58.272290   18511 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 18:06:58.272344   18511 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 18:06:58.289978   18511 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
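The find invocation above (with %!p(MISSING) standing in for find's %p) renames any pre-existing bridge/podman CNI configs so they cannot conflict with the CNI minikube installs; a properly escaped shell equivalent would look roughly like:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;

Here it matched 87-podman-bridge.conflist, which is why the log reports that file as disabled.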
	I0401 18:06:58.290000   18511 start.go:494] detecting cgroup driver to use...
	I0401 18:06:58.290065   18511 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 18:06:58.310938   18511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 18:06:58.326098   18511 docker.go:217] disabling cri-docker service (if available) ...
	I0401 18:06:58.326164   18511 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 18:06:58.340541   18511 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 18:06:58.354770   18511 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 18:06:58.465319   18511 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 18:06:58.628021   18511 docker.go:233] disabling docker service ...
	I0401 18:06:58.628089   18511 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 18:06:58.645291   18511 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 18:06:58.659716   18511 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 18:06:58.790924   18511 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 18:06:58.930510   18511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 18:06:58.946330   18511 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 18:06:58.967234   18511 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 18:06:58.967300   18511 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:06:58.979110   18511 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 18:06:58.979189   18511 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:06:58.992418   18511 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:06:59.004874   18511 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:06:59.017051   18511 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 18:06:59.029202   18511 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:06:59.041379   18511 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:06:59.062802   18511 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
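Taken together, the sed edits above (pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl) leave the CRI-O drop-in with settings along these lines; the surrounding table headers in /etc/crio/crio.conf.d/02-crio.conf come from the ISO's base file and are not shown in the log:

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]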
	I0401 18:06:59.074454   18511 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 18:06:59.084719   18511 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 18:06:59.084790   18511 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 18:06:59.099017   18511 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
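The failed sysctl probe is expected on first boot: the net.bridge.* keys only exist once br_netfilter is loaded, which the subsequent modprobe takes care of. A quick re-check on the guest would be (sketch, assuming shell access to the VM):

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # now resolvable instead of "cannot stat"
    cat /proc/sys/net/ipv4/ip_forward           # 1, per the echo above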
	I0401 18:06:59.110059   18511 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:06:59.240883   18511 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 18:06:59.580154   18511 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 18:06:59.580246   18511 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 18:06:59.586428   18511 start.go:562] Will wait 60s for crictl version
	I0401 18:06:59.586488   18511 ssh_runner.go:195] Run: which crictl
	I0401 18:06:59.590851   18511 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 18:06:59.628704   18511 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 18:06:59.628811   18511 ssh_runner.go:195] Run: crio --version
	I0401 18:06:59.663479   18511 ssh_runner.go:195] Run: crio --version
	I0401 18:06:59.701027   18511 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 18:06:59.702608   18511 main.go:141] libmachine: (addons-881427) Calling .GetIP
	I0401 18:06:59.705394   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:59.705774   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:59.705802   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:59.705986   18511 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 18:06:59.711025   18511 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 18:06:59.726160   18511 kubeadm.go:877] updating cluster {Name:addons-881427 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:addons-881427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 18:06:59.726280   18511 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 18:06:59.726331   18511 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 18:06:59.766249   18511 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0401 18:06:59.766329   18511 ssh_runner.go:195] Run: which lz4
	I0401 18:06:59.770813   18511 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0401 18:06:59.775569   18511 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 18:06:59.775599   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0401 18:07:01.356805   18511 crio.go:462] duration metric: took 1.586015191s to copy over tarball
	I0401 18:07:01.356865   18511 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 18:07:04.132930   18511 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.776038335s)
	I0401 18:07:04.132964   18511 crio.go:469] duration metric: took 2.776134141s to extract the tarball
	I0401 18:07:04.132978   18511 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 18:07:04.173217   18511 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 18:07:04.223493   18511 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 18:07:04.223520   18511 cache_images.go:84] Images are preloaded, skipping loading
	I0401 18:07:04.223530   18511 kubeadm.go:928] updating node { 192.168.39.214 8443 v1.29.3 crio true true} ...
	I0401 18:07:04.223665   18511 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-881427 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-881427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 18:07:04.223759   18511 ssh_runner.go:195] Run: crio config
	I0401 18:07:04.271009   18511 cni.go:84] Creating CNI manager for ""
	I0401 18:07:04.271031   18511 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 18:07:04.271042   18511 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 18:07:04.271062   18511 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.214 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-881427 NodeName:addons-881427 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 18:07:04.271640   18511 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-881427"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 18:07:04.271700   18511 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 18:07:04.283256   18511 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 18:07:04.283343   18511 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 18:07:04.294600   18511 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0401 18:07:04.312993   18511 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 18:07:04.330955   18511 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0401 18:07:04.349108   18511 ssh_runner.go:195] Run: grep 192.168.39.214	control-plane.minikube.internal$ /etc/hosts
	I0401 18:07:04.353316   18511 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 18:07:04.367102   18511 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:07:04.494094   18511 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 18:07:04.514302   18511 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427 for IP: 192.168.39.214
	I0401 18:07:04.514322   18511 certs.go:194] generating shared ca certs ...
	I0401 18:07:04.514338   18511 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:04.514473   18511 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 18:07:04.645847   18511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt ...
	I0401 18:07:04.645873   18511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt: {Name:mk76b7abd1f080e01a5a32c74a7791d486abaeb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:04.646016   18511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key ...
	I0401 18:07:04.646027   18511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key: {Name:mk81827d88412cbe70f5da178f51bf43ab58da51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:04.646114   18511 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 18:07:04.935408   18511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt ...
	I0401 18:07:04.935445   18511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt: {Name:mkd287c026a1a401803f9827d16e8f4e5e8f5f0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:04.935612   18511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key ...
	I0401 18:07:04.935624   18511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key: {Name:mk9d68668cde28d3ba6d892d6bb735ded03ae541 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:04.935710   18511 certs.go:256] generating profile certs ...
	I0401 18:07:04.935777   18511 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.key
	I0401 18:07:04.935796   18511 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt with IP's: []
	I0401 18:07:05.192514   18511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt ...
	I0401 18:07:05.192545   18511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: {Name:mk43b268f304c81517331551fc83c26cef5077dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:05.192712   18511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.key ...
	I0401 18:07:05.192724   18511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.key: {Name:mk521c9659c2d31aeaf7ef5eba85e7639ea4a9ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:05.192800   18511 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/apiserver.key.76ae8eb6
	I0401 18:07:05.192820   18511 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/apiserver.crt.76ae8eb6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.214]
	I0401 18:07:05.415868   18511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/apiserver.crt.76ae8eb6 ...
	I0401 18:07:05.415895   18511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/apiserver.crt.76ae8eb6: {Name:mkdf827ea04983b9ec6dae9f2126b6d3c6a70025 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:05.416043   18511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/apiserver.key.76ae8eb6 ...
	I0401 18:07:05.416056   18511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/apiserver.key.76ae8eb6: {Name:mk70beceff4cc257f113f900a07485a1e95d03e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:05.416121   18511 certs.go:381] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/apiserver.crt.76ae8eb6 -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/apiserver.crt
	I0401 18:07:05.416186   18511 certs.go:385] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/apiserver.key.76ae8eb6 -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/apiserver.key
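minikube generates the apiserver serving certificate in-process (crypto.go) with the SANs listed above (10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP). Purely for illustration, and not minikube's actual mechanism, an OpenSSL 3.x equivalent that produces a certificate with the same IP SANs signed by the same CA would be:

    openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key -subj "/CN=minikube" \
      -addext "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.214" \
      -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -copy_extensions copyall -days 365 -out apiserver.crt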
	I0401 18:07:05.416236   18511 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/proxy-client.key
	I0401 18:07:05.416252   18511 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/proxy-client.crt with IP's: []
	I0401 18:07:05.475322   18511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/proxy-client.crt ...
	I0401 18:07:05.475347   18511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/proxy-client.crt: {Name:mk58715cca1aa62d818cadf2b60d5389ae79761a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:05.475550   18511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/proxy-client.key ...
	I0401 18:07:05.475565   18511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/proxy-client.key: {Name:mk5b3f57d5c512ff6cafbf2df8ea8175d43b554f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:05.475751   18511 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 18:07:05.475784   18511 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 18:07:05.475803   18511 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 18:07:05.475822   18511 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 18:07:05.476349   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 18:07:05.503975   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 18:07:05.530240   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 18:07:05.556910   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 18:07:05.584874   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0401 18:07:05.611692   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 18:07:05.638522   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 18:07:05.665043   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 18:07:05.692137   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 18:07:05.718444   18511 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 18:07:05.737181   18511 ssh_runner.go:195] Run: openssl version
	I0401 18:07:05.744505   18511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 18:07:05.757839   18511 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:07:05.763242   18511 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:07:05.763306   18511 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:07:05.769801   18511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
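The b5213941.0 symlink name is the OpenSSL subject hash of the minikube CA, which is how trust-store lookups locate it; it can be spot-checked on the guest with:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    readlink /etc/ssl/certs/b5213941.0                                        # /etc/ssl/certs/minikubeCA.pem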
	I0401 18:07:05.782859   18511 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 18:07:05.787685   18511 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 18:07:05.787749   18511 kubeadm.go:391] StartCluster: {Name:addons-881427 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 C
lusterName:addons-881427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:07:05.787847   18511 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 18:07:05.787887   18511 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 18:07:05.836129   18511 cri.go:89] found id: ""
	I0401 18:07:05.836200   18511 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 18:07:05.849302   18511 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 18:07:05.864275   18511 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 18:07:05.875661   18511 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 18:07:05.875686   18511 kubeadm.go:156] found existing configuration files:
	
	I0401 18:07:05.875736   18511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 18:07:05.886555   18511 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 18:07:05.886618   18511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 18:07:05.897483   18511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 18:07:05.908101   18511 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 18:07:05.908153   18511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 18:07:05.919791   18511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 18:07:05.930719   18511 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 18:07:05.930769   18511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 18:07:05.942183   18511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 18:07:05.952972   18511 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 18:07:05.953031   18511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
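
The four grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed before kubeadm init. A minimal shell sketch of the same pattern (loop structure assumed, paths as in the log):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done
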
	I0401 18:07:05.965428   18511 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 18:07:06.173908   18511 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 18:07:16.671545   18511 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 18:07:16.671592   18511 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 18:07:16.671650   18511 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 18:07:16.671767   18511 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 18:07:16.671851   18511 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 18:07:16.671917   18511 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 18:07:16.673748   18511 out.go:204]   - Generating certificates and keys ...
	I0401 18:07:16.673826   18511 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 18:07:16.673892   18511 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 18:07:16.673957   18511 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 18:07:16.674035   18511 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0401 18:07:16.674109   18511 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0401 18:07:16.674154   18511 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0401 18:07:16.674243   18511 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0401 18:07:16.674392   18511 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-881427 localhost] and IPs [192.168.39.214 127.0.0.1 ::1]
	I0401 18:07:16.674484   18511 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0401 18:07:16.674679   18511 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-881427 localhost] and IPs [192.168.39.214 127.0.0.1 ::1]
	I0401 18:07:16.674781   18511 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 18:07:16.674870   18511 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 18:07:16.674943   18511 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0401 18:07:16.675021   18511 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 18:07:16.675094   18511 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 18:07:16.675161   18511 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 18:07:16.675210   18511 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 18:07:16.675261   18511 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 18:07:16.675306   18511 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 18:07:16.675398   18511 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 18:07:16.675494   18511 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 18:07:16.677214   18511 out.go:204]   - Booting up control plane ...
	I0401 18:07:16.677322   18511 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 18:07:16.677444   18511 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 18:07:16.677540   18511 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 18:07:16.677715   18511 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 18:07:16.677838   18511 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 18:07:16.677912   18511 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 18:07:16.678098   18511 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 18:07:16.678211   18511 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003194 seconds
	I0401 18:07:16.678387   18511 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 18:07:16.678567   18511 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 18:07:16.678658   18511 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 18:07:16.678910   18511 kubeadm.go:309] [mark-control-plane] Marking the node addons-881427 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 18:07:16.678989   18511 kubeadm.go:309] [bootstrap-token] Using token: 3q7dw3.76ebyztxncoayojs
	I0401 18:07:16.681670   18511 out.go:204]   - Configuring RBAC rules ...
	I0401 18:07:16.681800   18511 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 18:07:16.681901   18511 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 18:07:16.682095   18511 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 18:07:16.682321   18511 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 18:07:16.682495   18511 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 18:07:16.682630   18511 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 18:07:16.682794   18511 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 18:07:16.682860   18511 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 18:07:16.682925   18511 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 18:07:16.682936   18511 kubeadm.go:309] 
	I0401 18:07:16.683021   18511 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 18:07:16.683034   18511 kubeadm.go:309] 
	I0401 18:07:16.683131   18511 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 18:07:16.683151   18511 kubeadm.go:309] 
	I0401 18:07:16.683187   18511 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 18:07:16.683259   18511 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 18:07:16.683331   18511 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 18:07:16.683344   18511 kubeadm.go:309] 
	I0401 18:07:16.683423   18511 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 18:07:16.683433   18511 kubeadm.go:309] 
	I0401 18:07:16.683553   18511 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 18:07:16.683570   18511 kubeadm.go:309] 
	I0401 18:07:16.683642   18511 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 18:07:16.683756   18511 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 18:07:16.683845   18511 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 18:07:16.683861   18511 kubeadm.go:309] 
	I0401 18:07:16.683997   18511 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 18:07:16.684136   18511 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 18:07:16.684143   18511 kubeadm.go:309] 
	I0401 18:07:16.684211   18511 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 3q7dw3.76ebyztxncoayojs \
	I0401 18:07:16.684339   18511 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 \
	I0401 18:07:16.684372   18511 kubeadm.go:309] 	--control-plane 
	I0401 18:07:16.684381   18511 kubeadm.go:309] 
	I0401 18:07:16.684511   18511 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 18:07:16.684521   18511 kubeadm.go:309] 
	I0401 18:07:16.684616   18511 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 3q7dw3.76ebyztxncoayojs \
	I0401 18:07:16.684785   18511 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 
	I0401 18:07:16.684813   18511 cni.go:84] Creating CNI manager for ""
	I0401 18:07:16.684823   18511 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 18:07:16.686434   18511 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 18:07:16.687792   18511 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 18:07:16.752772   18511 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
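
The 457-byte conflist copied above is not echoed in the log; an illustrative bridge CNI chain of the kind this step writes (field values assumed, not read from the actual file) would be:

    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
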
	I0401 18:07:16.818717   18511 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 18:07:16.818782   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:16.818800   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-881427 minikube.k8s.io/updated_at=2024_04_01T18_07_16_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=addons-881427 minikube.k8s.io/primary=true
	I0401 18:07:16.983556   18511 ops.go:34] apiserver oom_adj: -16
	I0401 18:07:16.983691   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:17.484096   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:17.984252   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:18.484381   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:18.983683   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:19.483695   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:19.984231   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:20.484484   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:20.984087   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:21.484239   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:21.983753   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:22.484742   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:22.984746   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:23.483911   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:23.983861   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:24.483980   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:24.984107   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:25.484013   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:25.983841   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:26.483908   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:26.984545   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:27.484614   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:27.983880   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:28.484523   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:28.984436   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:29.100551   18511 kubeadm.go:1107] duration metric: took 12.281823694s to wait for elevateKubeSystemPrivileges
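
The repeated 'kubectl get sa default' calls above are a poll for the default service account to appear (the ~12.3s elevateKubeSystemPrivileges wait); the equivalent shell idiom, with an assumed retry interval, is:

    until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
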
	W0401 18:07:29.100598   18511 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 18:07:29.100609   18511 kubeadm.go:393] duration metric: took 23.312863663s to StartCluster
	I0401 18:07:29.100630   18511 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:29.100806   18511 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 18:07:29.101245   18511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:29.101478   18511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 18:07:29.101501   18511 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 18:07:29.103398   18511 out.go:177] * Verifying Kubernetes components...
	I0401 18:07:29.101552   18511 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0401 18:07:29.101697   18511 config.go:182] Loaded profile config "addons-881427": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
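
The toEnable map above lists the addons requested for this profile; the same set could be requested on the command line, e.g. (illustrative, not the exact flags this run used):

    out/minikube-linux-amd64 start -p addons-881427 \
      --addons=ingress --addons=ingress-dns --addons=registry \
      --addons=metrics-server --addons=csi-hostpath-driver --addons=volumesnapshots
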
	I0401 18:07:29.104835   18511 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:07:29.104849   18511 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-881427"
	I0401 18:07:29.104855   18511 addons.go:69] Setting yakd=true in profile "addons-881427"
	I0401 18:07:29.104912   18511 addons.go:69] Setting inspektor-gadget=true in profile "addons-881427"
	I0401 18:07:29.104932   18511 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-881427"
	I0401 18:07:29.104933   18511 addons.go:69] Setting default-storageclass=true in profile "addons-881427"
	I0401 18:07:29.104938   18511 addons.go:69] Setting gcp-auth=true in profile "addons-881427"
	I0401 18:07:29.104951   18511 addons.go:234] Setting addon inspektor-gadget=true in "addons-881427"
	I0401 18:07:29.104956   18511 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-881427"
	I0401 18:07:29.104952   18511 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-881427"
	I0401 18:07:29.104931   18511 addons.go:69] Setting helm-tiller=true in profile "addons-881427"
	I0401 18:07:29.105246   18511 addons.go:234] Setting addon helm-tiller=true in "addons-881427"
	I0401 18:07:29.105275   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.104958   18511 addons.go:69] Setting volumesnapshots=true in profile "addons-881427"
	I0401 18:07:29.105342   18511 addons.go:234] Setting addon volumesnapshots=true in "addons-881427"
	I0401 18:07:29.104922   18511 addons.go:69] Setting metrics-server=true in profile "addons-881427"
	I0401 18:07:29.105382   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.105397   18511 addons.go:234] Setting addon metrics-server=true in "addons-881427"
	I0401 18:07:29.105423   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.104919   18511 addons.go:69] Setting cloud-spanner=true in profile "addons-881427"
	I0401 18:07:29.105481   18511 addons.go:234] Setting addon cloud-spanner=true in "addons-881427"
	I0401 18:07:29.105484   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.105507   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.104927   18511 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-881427"
	I0401 18:07:29.105526   18511 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-881427"
	I0401 18:07:29.105543   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.105507   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.105685   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.105706   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.105802   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.105829   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.105890   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.105916   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.105958   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.104963   18511 addons.go:69] Setting storage-provisioner=true in profile "addons-881427"
	I0401 18:07:29.106003   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.106025   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.104925   18511 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-881427"
	I0401 18:07:29.106061   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.106068   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.104970   18511 mustload.go:65] Loading cluster: addons-881427
	I0401 18:07:29.106328   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.106346   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.106389   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.106399   18511 config.go:182] Loaded profile config "addons-881427": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:07:29.106409   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.106026   18511 addons.go:234] Setting addon storage-provisioner=true in "addons-881427"
	I0401 18:07:29.104916   18511 addons.go:234] Setting addon yakd=true in "addons-881427"
	I0401 18:07:29.104972   18511 addons.go:69] Setting ingress=true in profile "addons-881427"
	I0401 18:07:29.106488   18511 addons.go:234] Setting addon ingress=true in "addons-881427"
	I0401 18:07:29.106513   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.106617   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.106756   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.106789   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.104971   18511 addons.go:69] Setting registry=true in profile "addons-881427"
	I0401 18:07:29.106840   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.106848   18511 addons.go:234] Setting addon registry=true in "addons-881427"
	I0401 18:07:29.106866   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.106981   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.107012   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.104967   18511 addons.go:69] Setting ingress-dns=true in profile "addons-881427"
	I0401 18:07:29.107203   18511 addons.go:234] Setting addon ingress-dns=true in "addons-881427"
	I0401 18:07:29.107239   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.107564   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.107581   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.104994   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.107944   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.107959   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.110787   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.114111   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.114489   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.114520   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.126847   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34659
	I0401 18:07:29.126949   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45409
	I0401 18:07:29.127343   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.127410   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.127861   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.127887   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.127912   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.127927   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.128288   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.128297   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.128289   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43077
	I0401 18:07:29.128524   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.128915   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.128958   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.129448   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.130127   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.130144   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.130502   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.130898   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.130925   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.130936   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35095
	I0401 18:07:29.131260   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.132296   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.132315   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.132625   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.134065   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.134088   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.134616   18511 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-881427"
	I0401 18:07:29.134665   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.135028   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.135070   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.136086   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.136120   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.143957   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45287
	I0401 18:07:29.144735   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.145310   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.145328   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.145804   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.146377   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.146413   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.147545   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33601
	I0401 18:07:29.148863   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.149922   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.149950   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.150368   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.150915   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.150942   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.153935   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39489
	I0401 18:07:29.154558   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.155384   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.155407   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.155771   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.155950   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.156562   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37291
	I0401 18:07:29.157239   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.157604   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38059
	I0401 18:07:29.157737   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.157752   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.158226   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.158522   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.158956   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.158989   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.159405   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.159432   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.159621   18511 addons.go:234] Setting addon default-storageclass=true in "addons-881427"
	I0401 18:07:29.159658   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.159892   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.159984   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.159996   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.160014   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.161729   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.164123   18511 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0401 18:07:29.165512   18511 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0401 18:07:29.165529   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0401 18:07:29.165549   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.168943   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.169317   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.169340   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.169592   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.169787   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.169994   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.170150   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.171890   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35727
	I0401 18:07:29.172805   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I0401 18:07:29.173115   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.175255   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42637
	I0401 18:07:29.175521   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38693
	I0401 18:07:29.175575   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.175764   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.175786   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.175836   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.176131   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.176146   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.176270   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.176281   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.176398   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.176822   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.176848   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.177045   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.177060   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.177268   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.177327   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.179192   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.179250   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.179627   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.179658   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.181839   18511 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0401 18:07:29.183101   18511 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0401 18:07:29.183118   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0401 18:07:29.183137   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.180965   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.183357   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35169
	I0401 18:07:29.183805   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.184339   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.184353   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.184815   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.184834   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.185312   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.185507   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.185944   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.185972   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.186353   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.186831   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.186868   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.187061   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.187079   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.187103   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.187252   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.187418   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.187570   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.189946   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43237
	I0401 18:07:29.190462   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.190938   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.190957   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.191284   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.191786   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.191824   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.195768   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45375
	I0401 18:07:29.196238   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.196763   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.196778   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.197330   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.197528   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.198208   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44965
	I0401 18:07:29.199181   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.200986   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41177
	I0401 18:07:29.201535   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.201560   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.201706   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39541
	I0401 18:07:29.202117   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.202152   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.202670   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.202703   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.203011   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.203031   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.203094   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.205053   18511 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0401 18:07:29.203548   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.203897   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.206722   18511 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0401 18:07:29.206734   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0401 18:07:29.206751   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.207021   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.208970   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.208986   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.209445   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.209590   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.209630   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.209870   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40445
	I0401 18:07:29.210033   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.210056   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.210461   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.210541   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.210616   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.210766   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.211016   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.211035   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.211061   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.211089   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.213035   18511 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0401 18:07:29.211569   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.213551   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35841
	I0401 18:07:29.215818   18511 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0401 18:07:29.215000   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.215373   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.215984   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33861
	I0401 18:07:29.217186   18511 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0401 18:07:29.217209   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.217505   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.217796   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.218710   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.218771   18511 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0401 18:07:29.219123   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.219945   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.220658   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.220708   18511 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0401 18:07:29.222130   18511 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0401 18:07:29.221467   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.222130   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.221717   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44357
	I0401 18:07:29.222217   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.222468   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.222685   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.223806   18511 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0401 18:07:29.224323   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.224921   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45161
	I0401 18:07:29.225625   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42191
	I0401 18:07:29.225863   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37619
	I0401 18:07:29.226283   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43667
	I0401 18:07:29.226437   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.227240   18511 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0401 18:07:29.228736   18511 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0401 18:07:29.227350   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.229815   18511 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0401 18:07:29.227840   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.227888   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.228190   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.228340   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.228589   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32927
	I0401 18:07:29.228755   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0401 18:07:29.229192   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.231047   18511 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0401 18:07:29.231065   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0401 18:07:29.231078   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.231136   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.231278   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.232341   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.232365   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.232435   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.232481   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.232498   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.232504   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.232517   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.232617   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.232636   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.232940   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.233202   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.233720   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.233739   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.233788   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.233808   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.233831   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.234445   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.234481   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.234692   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.234716   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.234732   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.234758   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.235081   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.236077   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.236093   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.236478   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.236621   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.236719   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.236803   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.237344   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.237513   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.237793   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.239492   18511 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0401 18:07:29.238327   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.238350   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.238589   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.238747   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.239323   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.240886   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.240938   18511 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0401 18:07:29.242340   18511 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0401 18:07:29.242365   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0401 18:07:29.242386   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.240964   18511 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0401 18:07:29.242413   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0401 18:07:29.242430   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.244178   18511 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0401 18:07:29.241396   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.245944   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.246191   18511 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 18:07:29.246202   18511 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0401 18:07:29.246298   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.246449   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.246885   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.247512   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 18:07:29.247534   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.247000   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.247306   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45289
	I0401 18:07:29.247568   18511 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0401 18:07:29.247673   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.247680   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.248396   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.248425   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.248442   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.248647   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.249611   18511 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0401 18:07:29.249720   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.249763   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.250189   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.250364   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.250862   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.252118   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46209
	I0401 18:07:29.252141   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.252244   18511 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0401 18:07:29.252784   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.253542   18511 out.go:177]   - Using image docker.io/busybox:stable
	I0401 18:07:29.253638   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.254942   18511 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0401 18:07:29.253727   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.254953   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0401 18:07:29.254968   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.254969   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.253964   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.256516   18511 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0401 18:07:29.256529   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0401 18:07:29.256544   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.254008   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.254164   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37773
	I0401 18:07:29.254268   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.254307   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.255208   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.257477   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.257493   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.257680   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.257780   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.258111   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.258250   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.258705   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.258722   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.258781   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.259154   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.259333   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.259212   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.259959   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.262069   18511 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 18:07:29.260524   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.260995   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.261242   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.262649   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.263305   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.263963   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.265875   18511 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 18:07:29.265887   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 18:07:29.265903   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.265955   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.265977   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.266061   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.266079   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.268248   18511 out.go:177]   - Using image docker.io/registry:2.8.3
	I0401 18:07:29.266530   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44073
	I0401 18:07:29.266674   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.266694   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.271197   18511 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0401 18:07:29.269701   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.270162   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.270312   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.270416   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.273733   18511 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0401 18:07:29.273749   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0401 18:07:29.273777   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.273808   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.273832   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.273857   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.273865   18511 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0401 18:07:29.275780   18511 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0401 18:07:29.275796   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0401 18:07:29.275813   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.274169   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.274198   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.281886   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.282019   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.282204   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.283005   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.283023   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.283321   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.283400   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.283564   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.283791   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.283826   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.283985   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.284127   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.284272   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.284625   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.284654   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.285161   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.285181   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.285354   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.285497   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.285539   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.285650   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.285799   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.285850   18511 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 18:07:29.285863   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 18:07:29.285877   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.288557   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.288912   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.288931   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.289216   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.289396   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.289562   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.289740   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	W0401 18:07:29.291715   18511 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:39110->192.168.39.214:22: read: connection reset by peer
	I0401 18:07:29.291751   18511 retry.go:31] will retry after 133.062826ms: ssh: handshake failed: read tcp 192.168.39.1:39110->192.168.39.214:22: read: connection reset by peer
	I0401 18:07:29.601285   18511 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0401 18:07:29.601316   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0401 18:07:29.771193   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0401 18:07:29.827808   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 18:07:29.829199   18511 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0401 18:07:29.829222   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0401 18:07:29.832447   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 18:07:29.833654   18511 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0401 18:07:29.833672   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0401 18:07:29.854817   18511 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 18:07:29.854848   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0401 18:07:29.881750   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0401 18:07:29.899630   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0401 18:07:29.919065   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0401 18:07:29.921975   18511 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0401 18:07:29.922001   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0401 18:07:29.933421   18511 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0401 18:07:29.933441   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0401 18:07:29.937383   18511 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0401 18:07:29.937407   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0401 18:07:29.939080   18511 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0401 18:07:29.939101   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0401 18:07:29.943111   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0401 18:07:30.034695   18511 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0401 18:07:30.034724   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0401 18:07:30.043835   18511 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0401 18:07:30.043859   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0401 18:07:30.132722   18511 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0401 18:07:30.132749   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0401 18:07:30.158051   18511 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 18:07:30.158075   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 18:07:30.207629   18511 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0401 18:07:30.207660   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0401 18:07:30.232444   18511 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.130935041s)
	I0401 18:07:30.232518   18511 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.127656068s)
	I0401 18:07:30.232597   18511 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 18:07:30.232605   18511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
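The pipeline in the line above edits the coredns ConfigMap in place: it inserts a hosts block mapping host.minikube.internal to the host-side gateway (192.168.39.1 on this network) just before the forward directive, and adds a log directive before errors. A quick way to confirm the injected record (plain kubectl shown for brevity; the expected fragment is reconstructed from the sed expressions above, not copied from the cluster) is:

	kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
	# expected fragment after the replace:
	#        hosts {
	#           192.168.39.1 host.minikube.internal
	#           fallthrough
	#        }
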
	I0401 18:07:30.236455   18511 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0401 18:07:30.236473   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0401 18:07:30.240946   18511 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0401 18:07:30.240964   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0401 18:07:30.272222   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0401 18:07:30.376582   18511 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0401 18:07:30.376608   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0401 18:07:30.447422   18511 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0401 18:07:30.447452   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0401 18:07:30.458544   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0401 18:07:30.547441   18511 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 18:07:30.547503   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 18:07:30.557080   18511 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0401 18:07:30.557109   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0401 18:07:30.609328   18511 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0401 18:07:30.609356   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0401 18:07:30.804735   18511 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0401 18:07:30.804760   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0401 18:07:30.810859   18511 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0401 18:07:30.810884   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0401 18:07:30.831095   18511 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0401 18:07:30.831121   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0401 18:07:30.934738   18511 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0401 18:07:30.934761   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0401 18:07:30.994149   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 18:07:31.164427   18511 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0401 18:07:31.164457   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0401 18:07:31.218895   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0401 18:07:31.310557   18511 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0401 18:07:31.310584   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0401 18:07:31.321942   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0401 18:07:31.596152   18511 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0401 18:07:31.596179   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0401 18:07:31.784906   18511 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0401 18:07:31.784928   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0401 18:07:32.046833   18511 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0401 18:07:32.046861   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0401 18:07:32.180706   18511 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0401 18:07:32.180733   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0401 18:07:32.372542   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0401 18:07:32.592144   18511 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0401 18:07:32.592173   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0401 18:07:33.085485   18511 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0401 18:07:33.085509   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0401 18:07:33.490965   18511 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0401 18:07:33.490991   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0401 18:07:33.772229   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0401 18:07:34.188302   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.360462515s)
	I0401 18:07:34.188347   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:34.188356   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:34.188633   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:34.188679   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:34.188688   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:34.188697   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:34.188703   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:34.188952   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:34.188969   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:34.188982   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:34.189416   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.418182636s)
	I0401 18:07:34.189448   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:34.189458   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:34.189665   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:34.189684   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:34.189692   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:34.189700   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:34.189910   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:34.189924   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:34.203835   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:34.203857   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:34.204196   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:34.204218   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:34.952018   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.119536913s)
	I0401 18:07:34.952054   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.070259733s)
	I0401 18:07:34.952068   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:34.952080   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:34.952088   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.052425015s)
	I0401 18:07:34.952119   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:34.952093   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:34.952137   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:34.952153   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:34.952518   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:34.952523   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:34.952531   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:34.952537   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:34.952545   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:34.952556   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:34.952564   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:34.952566   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:34.952603   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:34.952616   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:34.952643   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:34.952837   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:34.952847   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:34.952857   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:34.952905   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:34.952934   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:34.952942   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:34.954426   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:34.954457   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:34.954475   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:34.954495   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:34.954704   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:34.954720   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:36.043776   18511 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0401 18:07:36.043817   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:36.046659   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:36.047072   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:36.047103   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:36.047228   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:36.047453   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:36.047622   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:36.047769   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:36.921790   18511 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0401 18:07:37.370425   18511 addons.go:234] Setting addon gcp-auth=true in "addons-881427"
	I0401 18:07:37.370482   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:37.370784   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:37.370813   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:37.386154   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34077
	I0401 18:07:37.386621   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:37.387075   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:37.387093   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:37.387491   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:37.387927   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:37.387956   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:37.402879   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38799
	I0401 18:07:37.403459   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:37.403962   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:37.403985   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:37.404332   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:37.404516   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:37.406254   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:37.406480   18511 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0401 18:07:37.406506   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:37.409172   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:37.409715   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:37.409743   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:37.409863   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:37.410054   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:37.410214   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:37.410328   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:38.955306   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.036196484s)
	I0401 18:07:38.955346   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.012208075s)
	I0401 18:07:38.955366   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.955375   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.955392   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.955427   18511 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.722795841s)
	I0401 18:07:38.955443   18511 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.722825819s)
	I0401 18:07:38.955379   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.955453   18511 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0401 18:07:38.955525   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.683269612s)
	I0401 18:07:38.955548   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.955555   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.955552   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.496983646s)
	I0401 18:07:38.955596   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.955604   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.955632   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.96145091s)
	I0401 18:07:38.955654   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.955670   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.955731   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.736796486s)
	I0401 18:07:38.955752   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.63378487s)
	I0401 18:07:38.955767   18511 main.go:141] libmachine: Making call to close driver server
	W0401 18:07:38.955757   18511 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0401 18:07:38.955790   18511 retry.go:31] will retry after 220.630061ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
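The failure above is an ordering race rather than a bad manifest: the VolumeSnapshotClass object is applied in the same kubectl apply as the CRDs that define its kind, so the first pass fails with "no matches for kind" and minikube schedules a retry (the retry at 18:07:39 below re-runs the apply with --force). Done by hand, the equivalent fix is to let the CRDs become established before applying the class; a minimal sketch, assuming the addon manifests are already under /etc/kubernetes/addons and using plain kubectl instead of the bundled binary path:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
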
	I0401 18:07:38.955774   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.955841   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.583265242s)
	I0401 18:07:38.955857   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.955857   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:38.955865   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.955895   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.955907   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.955916   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.955923   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.956258   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.956270   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.956279   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.956287   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.956446   18511 node_ready.go:35] waiting up to 6m0s for node "addons-881427" to be "Ready" ...
	I0401 18:07:38.959120   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:38.959135   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.959141   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.959148   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.959148   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.959155   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.959163   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.959171   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:38.959172   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.959179   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.959190   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.959190   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.959198   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.959214   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.959183   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.959229   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.959198   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.959251   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.959202   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.959271   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.959153   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.959321   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:38.959322   18511 addons.go:470] Verifying addon ingress=true in "addons-881427"
	I0401 18:07:38.959371   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.959404   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.959424   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.959442   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.959164   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:38.962302   18511 out.go:177] * Verifying ingress addon...
	I0401 18:07:38.959128   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:38.959219   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:38.959259   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.959599   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:38.959621   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.959625   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.959638   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.959649   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:38.959657   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:38.959671   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:38.959702   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.962364   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.962375   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.962387   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.962401   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.964246   18511 addons.go:470] Verifying addon metrics-server=true in "addons-881427"
	I0401 18:07:38.962724   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.964266   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.962746   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:38.963849   18511 node_ready.go:49] node "addons-881427" has status "Ready":"True"
	I0401 18:07:38.965710   18511 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-881427 service yakd-dashboard -n yakd-dashboard
	
	I0401 18:07:38.964341   18511 node_ready.go:38] duration metric: took 7.854162ms for node "addons-881427" to be "Ready" ...
	I0401 18:07:38.964197   18511 addons.go:470] Verifying addon registry=true in "addons-881427"
	I0401 18:07:38.964899   18511 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0401 18:07:38.967018   18511 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 18:07:38.968433   18511 out.go:177] * Verifying registry addon...
	I0401 18:07:38.970649   18511 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0401 18:07:39.014890   18511 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0401 18:07:39.014916   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:39.016540   18511 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-7fhsg" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.019862   18511 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0401 18:07:39.019880   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:39.023945   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:39.023960   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:39.024275   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:39.024297   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:39.024300   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:39.033757   18511 pod_ready.go:92] pod "coredns-76f75df574-7fhsg" in "kube-system" namespace has status "Ready":"True"
	I0401 18:07:39.033784   18511 pod_ready.go:81] duration metric: took 17.222317ms for pod "coredns-76f75df574-7fhsg" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.033797   18511 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-fgjvr" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.121401   18511 pod_ready.go:92] pod "coredns-76f75df574-fgjvr" in "kube-system" namespace has status "Ready":"True"
	I0401 18:07:39.121426   18511 pod_ready.go:81] duration metric: took 87.619988ms for pod "coredns-76f75df574-fgjvr" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.121438   18511 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-881427" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.141254   18511 pod_ready.go:92] pod "etcd-addons-881427" in "kube-system" namespace has status "Ready":"True"
	I0401 18:07:39.141282   18511 pod_ready.go:81] duration metric: took 19.83635ms for pod "etcd-addons-881427" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.141294   18511 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-881427" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.155198   18511 pod_ready.go:92] pod "kube-apiserver-addons-881427" in "kube-system" namespace has status "Ready":"True"
	I0401 18:07:39.155219   18511 pod_ready.go:81] duration metric: took 13.916644ms for pod "kube-apiserver-addons-881427" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.155232   18511 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-881427" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.177517   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0401 18:07:39.360176   18511 pod_ready.go:92] pod "kube-controller-manager-addons-881427" in "kube-system" namespace has status "Ready":"True"
	I0401 18:07:39.360213   18511 pod_ready.go:81] duration metric: took 204.974264ms for pod "kube-controller-manager-addons-881427" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.360225   18511 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fz2ml" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.462952   18511 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-881427" context rescaled to 1 replicas
	I0401 18:07:39.476395   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:39.487085   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:39.760703   18511 pod_ready.go:92] pod "kube-proxy-fz2ml" in "kube-system" namespace has status "Ready":"True"
	I0401 18:07:39.760731   18511 pod_ready.go:81] duration metric: took 400.497834ms for pod "kube-proxy-fz2ml" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.760744   18511 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-881427" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.983177   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:39.993106   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:40.160007   18511 pod_ready.go:92] pod "kube-scheduler-addons-881427" in "kube-system" namespace has status "Ready":"True"
	I0401 18:07:40.160029   18511 pod_ready.go:81] duration metric: took 399.277189ms for pod "kube-scheduler-addons-881427" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:40.160039   18511 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:40.480719   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:40.488617   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:40.972907   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:40.982748   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:41.493521   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:41.500156   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:41.535253   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.762966614s)
	I0401 18:07:41.535313   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:41.535320   18511 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.12881588s)
	I0401 18:07:41.537264   18511 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0401 18:07:41.535326   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:41.540616   18511 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0401 18:07:41.539169   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:41.539186   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:41.542901   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:41.542911   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:41.542917   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:41.542958   18511 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0401 18:07:41.542982   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0401 18:07:41.543246   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:41.543252   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:41.543292   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:41.543313   18511 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-881427"
	I0401 18:07:41.544797   18511 out.go:177] * Verifying csi-hostpath-driver addon...
	I0401 18:07:41.546812   18511 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0401 18:07:41.592018   18511 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0401 18:07:41.592059   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:41.743440   18511 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0401 18:07:41.743464   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0401 18:07:41.799259   18511 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0401 18:07:41.799284   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0401 18:07:41.875310   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.697751489s)
	I0401 18:07:41.875362   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:41.875373   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:41.875664   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:41.875693   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:41.875704   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:41.875715   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:41.876593   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:41.876630   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:41.876649   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:41.936260   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0401 18:07:41.975391   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:41.978591   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:42.052926   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:42.168168   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:07:42.475398   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:42.480694   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:42.553047   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:42.990743   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:42.991405   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:43.066454   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:43.257309   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.320998071s)
	I0401 18:07:43.257358   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:43.257370   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:43.257689   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:43.257705   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:43.257713   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:43.257721   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:43.258001   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:43.258022   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:43.258049   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:43.259119   18511 addons.go:470] Verifying addon gcp-auth=true in "addons-881427"
	I0401 18:07:43.261741   18511 out.go:177] * Verifying gcp-auth addon...
	I0401 18:07:43.264558   18511 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0401 18:07:43.291105   18511 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0401 18:07:43.291130   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:43.478768   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:43.501357   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:43.554154   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:43.769184   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:43.973120   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:43.976513   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:44.052357   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:44.268803   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:44.472450   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:44.475820   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:44.553253   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:44.668599   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:07:44.769636   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:44.974586   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:44.975195   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:45.054396   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:45.268852   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:45.472446   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:45.476309   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:45.552921   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:45.769280   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:45.971976   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:45.975457   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:46.054227   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:46.268609   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:46.605539   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:46.609121   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:46.613268   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:46.698861   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:07:46.769004   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:46.978030   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:46.978905   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:47.054971   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:47.268429   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:47.478054   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:47.480344   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:47.553386   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:47.769215   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:47.972728   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:47.976340   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:48.056843   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:48.268663   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:48.473446   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:48.476668   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:48.552810   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:48.770123   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:48.972285   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:48.976462   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:49.055137   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:49.168107   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:07:49.268640   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:49.472960   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:49.475289   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:49.552588   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:49.769818   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:49.973337   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:49.976040   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:50.052885   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:50.269500   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:50.471379   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:50.475057   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:50.552871   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:50.769060   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:50.972354   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:50.976078   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:51.053068   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:51.268987   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:51.473198   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:51.475498   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:51.552451   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:51.671022   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:07:51.772117   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:51.973132   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:51.976746   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:52.053811   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:52.268709   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:52.472182   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:52.476104   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:52.552475   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:52.769192   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:52.971729   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:52.975459   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:53.052612   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:53.268919   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:53.472647   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:53.476122   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:53.552011   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:53.768722   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:53.972423   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:53.976084   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:54.054791   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:54.165871   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:07:54.272101   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:54.473070   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:54.478317   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:54.552326   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:54.768683   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:54.971988   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:54.975258   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:55.052709   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:55.274508   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:55.472041   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:55.475664   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:55.553038   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:55.772531   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:55.972358   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:55.978116   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:56.054555   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:56.168127   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:07:56.269332   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:56.473782   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:56.476724   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:56.564078   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:56.768738   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:56.978415   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:56.986695   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:57.052971   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:57.269365   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:57.472544   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:57.484480   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:57.553702   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:57.768749   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:57.974059   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:57.976082   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:58.054083   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:58.168430   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:07:58.271756   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:58.473804   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:58.475671   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:58.552775   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:58.770500   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:58.972162   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:58.975414   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:59.052602   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:59.289962   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:59.473109   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:59.476707   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:59.553404   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:59.768974   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:59.971939   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:59.975072   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:00.054418   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:00.269025   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:00.472065   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:00.474961   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:00.553526   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:00.672856   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:08:00.768080   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:00.972753   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:00.976993   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:01.052540   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:01.269007   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:01.473529   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:01.476995   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:01.554524   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:01.769529   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:01.975348   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:01.978031   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:02.053217   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:02.270830   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:02.472246   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:02.475484   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:02.552672   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:02.770248   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:02.979654   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:02.981815   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:03.053145   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:03.167881   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:08:03.268753   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:03.472071   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:03.476697   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:03.553069   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:03.768098   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:03.972902   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:03.980274   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:04.052340   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:04.269723   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:04.475079   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:04.477093   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:04.553116   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:04.769884   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:04.971784   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:04.975171   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:05.052818   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:05.168277   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:08:05.269393   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:05.473769   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:05.476254   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:05.554582   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:05.768982   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:05.972510   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:05.975774   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:06.065116   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:06.269775   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:06.472543   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:06.476151   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:06.553575   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:06.772655   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:06.972446   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:06.975227   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:07.052728   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:07.269543   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:07.472290   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:07.475982   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:07.553135   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:07.676903   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:08:07.770183   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:07.972891   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:07.976475   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:08.052964   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:08.268873   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:08.476615   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:08.479559   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:08.553608   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:08.769704   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:08.973009   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:08.976085   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:09.055121   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:09.269505   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:09.474985   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:09.478052   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:09.554915   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:09.769849   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:09.972356   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:09.976893   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:10.052816   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:10.167420   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:08:10.268926   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:10.472924   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:10.477748   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:10.553031   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:10.948636   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:10.983814   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:11.004885   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:11.054650   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:11.271798   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:11.473385   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:11.476422   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:11.553366   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:11.769914   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:11.972184   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:11.976897   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:12.056710   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:12.198334   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:08:12.269858   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:12.471730   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:12.475585   18511 kapi.go:107] duration metric: took 33.504934964s to wait for kubernetes.io/minikube-addons=registry ...
	I0401 18:08:12.553144   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:12.768783   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:12.972704   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:13.052617   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:13.272782   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:13.472491   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:13.553436   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:13.771063   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:13.972010   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:14.052725   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:14.269700   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:14.474356   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:14.557066   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:14.678672   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:08:14.770768   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:14.978508   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:15.055156   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:15.271815   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:15.473052   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:15.553007   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:15.770333   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:15.988362   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:16.057792   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:16.166030   18511 pod_ready.go:92] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"True"
	I0401 18:08:16.166060   18511 pod_ready.go:81] duration metric: took 36.006014081s for pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace to be "Ready" ...
	I0401 18:08:16.166072   18511 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-m86dq" in "kube-system" namespace to be "Ready" ...
	I0401 18:08:16.172695   18511 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-m86dq" in "kube-system" namespace has status "Ready":"True"
	I0401 18:08:16.172724   18511 pod_ready.go:81] duration metric: took 6.64372ms for pod "nvidia-device-plugin-daemonset-m86dq" in "kube-system" namespace to be "Ready" ...
	I0401 18:08:16.172748   18511 pod_ready.go:38] duration metric: took 37.205708904s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 18:08:16.172767   18511 api_server.go:52] waiting for apiserver process to appear ...
	I0401 18:08:16.172825   18511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:08:16.217961   18511 api_server.go:72] duration metric: took 47.116427575s to wait for apiserver process to appear ...
	I0401 18:08:16.217984   18511 api_server.go:88] waiting for apiserver healthz status ...
	I0401 18:08:16.218000   18511 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8443/healthz ...
	I0401 18:08:16.222594   18511 api_server.go:279] https://192.168.39.214:8443/healthz returned 200:
	ok
	I0401 18:08:16.223591   18511 api_server.go:141] control plane version: v1.29.3
	I0401 18:08:16.223619   18511 api_server.go:131] duration metric: took 5.629585ms to wait for apiserver health ...
	I0401 18:08:16.223627   18511 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 18:08:16.234779   18511 system_pods.go:59] 18 kube-system pods found
	I0401 18:08:16.234817   18511 system_pods.go:61] "coredns-76f75df574-7fhsg" [8e044680-92e0-46d9-aa37-6e95b606d9c6] Running
	I0401 18:08:16.234826   18511 system_pods.go:61] "csi-hostpath-attacher-0" [f64f7572-e225-467c-ab07-def542d15d28] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0401 18:08:16.234835   18511 system_pods.go:61] "csi-hostpath-resizer-0" [b630782c-3751-4074-92ca-f544f91651c3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0401 18:08:16.234843   18511 system_pods.go:61] "csi-hostpathplugin-fs5mb" [4f9b358f-3334-45d6-bf37-8b9d4a5cdf22] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0401 18:08:16.234851   18511 system_pods.go:61] "etcd-addons-881427" [06c62b43-d87e-4129-8091-f93fd58025e0] Running
	I0401 18:08:16.234856   18511 system_pods.go:61] "kube-apiserver-addons-881427" [1fa6dfbf-bb14-4943-bb7e-89b91749d25a] Running
	I0401 18:08:16.234862   18511 system_pods.go:61] "kube-controller-manager-addons-881427" [60bef674-d6a8-4ce8-bbe3-08d8ff1bd11e] Running
	I0401 18:08:16.234871   18511 system_pods.go:61] "kube-ingress-dns-minikube" [2f402a8d-9920-4ab9-b8f5-b24ff9528a04] Running
	I0401 18:08:16.234876   18511 system_pods.go:61] "kube-proxy-fz2ml" [6263627a-2781-45c7-b2a4-b06ab6c04879] Running
	I0401 18:08:16.234886   18511 system_pods.go:61] "kube-scheduler-addons-881427" [5576952d-82a5-40ba-a78e-91409f3a748f] Running
	I0401 18:08:16.234891   18511 system_pods.go:61] "metrics-server-75d6c48ddd-s96px" [ae3f8b9b-1cda-4f49-bb5d-a99466fe6135] Running
	I0401 18:08:16.234897   18511 system_pods.go:61] "nvidia-device-plugin-daemonset-m86dq" [dd4046ef-ce6a-48e2-9d0e-bf3aa98f9156] Running
	I0401 18:08:16.234903   18511 system_pods.go:61] "registry-9jpg9" [257b26ce-194a-4b12-b7f6-a5da0f9cf9e6] Running
	I0401 18:08:16.234907   18511 system_pods.go:61] "registry-proxy-hhmlr" [dae5e9cd-9b99-49cd-aa43-a0dd80d05e0f] Running
	I0401 18:08:16.234916   18511 system_pods.go:61] "snapshot-controller-58dbcc7b99-gpmcg" [56b71b6f-9ddf-43ca-9893-1895d0c71024] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0401 18:08:16.234933   18511 system_pods.go:61] "snapshot-controller-58dbcc7b99-rtgfk" [561da000-21ec-4e67-a8df-8aa9357a125f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0401 18:08:16.234938   18511 system_pods.go:61] "storage-provisioner" [2d770fd8-541f-4ea5-bbff-8bdba366a91b] Running
	I0401 18:08:16.234941   18511 system_pods.go:61] "tiller-deploy-7b677967b9-swl9s" [a6dccfe9-2e74-4db2-b2b9-a8e8e6abcf92] Running
	I0401 18:08:16.234948   18511 system_pods.go:74] duration metric: took 11.315063ms to wait for pod list to return data ...
	I0401 18:08:16.234957   18511 default_sa.go:34] waiting for default service account to be created ...
	I0401 18:08:16.236872   18511 default_sa.go:45] found service account: "default"
	I0401 18:08:16.236886   18511 default_sa.go:55] duration metric: took 1.923284ms for default service account to be created ...
	I0401 18:08:16.236893   18511 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 18:08:16.245335   18511 system_pods.go:86] 18 kube-system pods found
	I0401 18:08:16.245357   18511 system_pods.go:89] "coredns-76f75df574-7fhsg" [8e044680-92e0-46d9-aa37-6e95b606d9c6] Running
	I0401 18:08:16.245366   18511 system_pods.go:89] "csi-hostpath-attacher-0" [f64f7572-e225-467c-ab07-def542d15d28] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0401 18:08:16.245372   18511 system_pods.go:89] "csi-hostpath-resizer-0" [b630782c-3751-4074-92ca-f544f91651c3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0401 18:08:16.245380   18511 system_pods.go:89] "csi-hostpathplugin-fs5mb" [4f9b358f-3334-45d6-bf37-8b9d4a5cdf22] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0401 18:08:16.245421   18511 system_pods.go:89] "etcd-addons-881427" [06c62b43-d87e-4129-8091-f93fd58025e0] Running
	I0401 18:08:16.245428   18511 system_pods.go:89] "kube-apiserver-addons-881427" [1fa6dfbf-bb14-4943-bb7e-89b91749d25a] Running
	I0401 18:08:16.245433   18511 system_pods.go:89] "kube-controller-manager-addons-881427" [60bef674-d6a8-4ce8-bbe3-08d8ff1bd11e] Running
	I0401 18:08:16.245439   18511 system_pods.go:89] "kube-ingress-dns-minikube" [2f402a8d-9920-4ab9-b8f5-b24ff9528a04] Running
	I0401 18:08:16.245443   18511 system_pods.go:89] "kube-proxy-fz2ml" [6263627a-2781-45c7-b2a4-b06ab6c04879] Running
	I0401 18:08:16.245450   18511 system_pods.go:89] "kube-scheduler-addons-881427" [5576952d-82a5-40ba-a78e-91409f3a748f] Running
	I0401 18:08:16.245454   18511 system_pods.go:89] "metrics-server-75d6c48ddd-s96px" [ae3f8b9b-1cda-4f49-bb5d-a99466fe6135] Running
	I0401 18:08:16.245458   18511 system_pods.go:89] "nvidia-device-plugin-daemonset-m86dq" [dd4046ef-ce6a-48e2-9d0e-bf3aa98f9156] Running
	I0401 18:08:16.245462   18511 system_pods.go:89] "registry-9jpg9" [257b26ce-194a-4b12-b7f6-a5da0f9cf9e6] Running
	I0401 18:08:16.245466   18511 system_pods.go:89] "registry-proxy-hhmlr" [dae5e9cd-9b99-49cd-aa43-a0dd80d05e0f] Running
	I0401 18:08:16.245472   18511 system_pods.go:89] "snapshot-controller-58dbcc7b99-gpmcg" [56b71b6f-9ddf-43ca-9893-1895d0c71024] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0401 18:08:16.245478   18511 system_pods.go:89] "snapshot-controller-58dbcc7b99-rtgfk" [561da000-21ec-4e67-a8df-8aa9357a125f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0401 18:08:16.245483   18511 system_pods.go:89] "storage-provisioner" [2d770fd8-541f-4ea5-bbff-8bdba366a91b] Running
	I0401 18:08:16.245489   18511 system_pods.go:89] "tiller-deploy-7b677967b9-swl9s" [a6dccfe9-2e74-4db2-b2b9-a8e8e6abcf92] Running
	I0401 18:08:16.245498   18511 system_pods.go:126] duration metric: took 8.599119ms to wait for k8s-apps to be running ...
	I0401 18:08:16.245507   18511 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 18:08:16.245547   18511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:08:16.271316   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:16.296731   18511 system_svc.go:56] duration metric: took 51.218036ms WaitForService to wait for kubelet
	I0401 18:08:16.296761   18511 kubeadm.go:576] duration metric: took 47.195228282s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 18:08:16.296779   18511 node_conditions.go:102] verifying NodePressure condition ...
	I0401 18:08:16.300661   18511 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 18:08:16.300687   18511 node_conditions.go:123] node cpu capacity is 2
	I0401 18:08:16.300698   18511 node_conditions.go:105] duration metric: took 3.914545ms to run NodePressure ...
	I0401 18:08:16.300710   18511 start.go:240] waiting for startup goroutines ...
	I0401 18:08:16.472211   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:16.553492   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:16.768685   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:16.972103   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:17.052938   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:17.272098   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:17.471381   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:17.553149   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:17.768873   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:18.204964   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:18.208546   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:18.268539   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:18.471434   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:18.552098   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:18.769588   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:18.972561   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:19.105353   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:19.272826   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:19.472766   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:19.553165   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:19.769104   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:19.977664   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:20.056068   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:20.268882   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:20.472722   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:20.555881   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:20.776199   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:20.972181   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:21.053621   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:21.272062   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:21.472834   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:21.552745   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:21.768840   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:21.972858   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:22.053365   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:22.270205   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:22.471833   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:22.552797   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:22.769105   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:22.974067   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:23.054032   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:23.434978   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:23.472929   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:23.558129   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:23.768463   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:23.973365   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:24.056082   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:24.270671   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:24.472411   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:24.553323   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:24.769054   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:24.972613   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:25.052616   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:25.270217   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:25.471569   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:25.553030   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:25.768915   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:25.974054   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:26.053201   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:26.268591   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:26.471780   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:26.552858   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:26.768829   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:26.971994   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:27.053634   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:27.269167   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:27.476750   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:27.870079   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:27.874460   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:27.971779   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:28.053197   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:28.268836   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:28.471825   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:28.552776   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:28.773025   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:28.972681   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:29.052917   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:29.271304   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:29.471804   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:29.552717   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:29.768844   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:29.973056   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:30.055266   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:30.272854   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:30.472563   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:30.553737   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:30.768795   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:30.973132   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:31.053395   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:31.270480   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:31.472572   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:31.568572   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:32.172897   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:32.173322   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:32.182836   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:32.269186   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:32.471685   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:32.557515   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:32.769553   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:32.972275   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:33.053751   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:33.268457   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:33.471927   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:33.553096   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:33.769236   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:33.978329   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:34.054941   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:34.272927   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:34.472901   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:34.553074   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:34.770097   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:34.975388   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:35.068792   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:35.268835   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:35.473634   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:35.570056   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:35.775366   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:35.979204   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:36.056583   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:36.270012   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:36.472124   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:36.553314   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:36.769136   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:36.972095   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:37.052611   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:37.273404   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:37.472637   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:37.553150   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:37.768885   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:37.973146   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:38.052634   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:38.269987   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:38.473670   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:38.553660   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:38.768460   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:38.997743   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:39.068382   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:39.268190   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:39.472909   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:39.561692   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:39.769001   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:39.972462   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:40.054178   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:40.269442   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:40.473202   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:40.553195   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:40.774330   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:40.972248   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:41.052896   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:41.271221   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:41.472371   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:41.552999   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:41.770358   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:41.972065   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:42.052962   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:42.271043   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:42.473224   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:42.554590   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:42.768420   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:42.973175   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:43.054620   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:43.268169   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:43.473244   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:43.559942   18511 kapi.go:107] duration metric: took 1m2.013128055s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0401 18:08:43.771530   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:43.972213   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:44.269075   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:44.472966   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:44.768848   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:44.972510   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:45.269001   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:45.472330   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:45.769423   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:45.972558   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:46.723482   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:46.727533   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:46.770691   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:46.973089   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:47.269912   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:47.472437   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:47.768906   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:47.972545   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:48.270579   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:48.472588   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:48.768817   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:48.972380   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:49.268463   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:49.472322   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:49.769187   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:49.975462   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:50.270104   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:50.472563   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:50.768927   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:50.972939   18511 kapi.go:107] duration metric: took 1m12.00803701s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0401 18:08:51.269358   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:51.771838   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:52.268474   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:52.779509   18511 kapi.go:107] duration metric: took 1m9.514951252s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0401 18:08:52.781237   18511 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-881427 cluster.
	I0401 18:08:52.782621   18511 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0401 18:08:52.783976   18511 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0401 18:08:52.785369   18511 out.go:177] * Enabled addons: ingress-dns, default-storageclass, nvidia-device-plugin, storage-provisioner, cloud-spanner, helm-tiller, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0401 18:08:52.786806   18511 addons.go:505] duration metric: took 1m23.685254168s for enable addons: enabled=[ingress-dns default-storageclass nvidia-device-plugin storage-provisioner cloud-spanner helm-tiller inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0401 18:08:52.786849   18511 start.go:245] waiting for cluster config update ...
	I0401 18:08:52.786867   18511 start.go:254] writing updated cluster config ...
	I0401 18:08:52.787090   18511 ssh_runner.go:195] Run: rm -f paused
	I0401 18:08:52.843163   18511 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0401 18:08:52.845001   18511 out.go:177] * Done! kubectl is now configured to use "addons-881427" cluster and "default" namespace by default
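
	For reference, the gcp-auth hint in the log above can be acted on per pod. The following is a minimal illustrative sketch, not part of the captured log: the label key `gcp-auth-skip-secret` comes from the log message itself, while the pod name is hypothetical and the label value is assumed to be "true".

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: example-no-gcp-creds        # hypothetical name, for illustration only
	      labels:
	        gcp-auth-skip-secret: "true"    # opts this pod out of GCP credential mounting (assumed value)
	    spec:
	      containers:
	      - name: app
	        image: gcr.io/google-samples/hello-app:1.0

	As the log notes, pods that already exist keep their current mounts until they are recreated or the addon is re-enabled with --refresh, e.g. `minikube addons enable gcp-auth --refresh`.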
	
	
	==> CRI-O <==
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.608260550Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711995119608156612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:571855,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a6e04bf-52fd-4b8a-8401-b8249b7adf6f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.609298306Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7509ddae-8d30-4bbf-95d8-75210adaed1f name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.609349516Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7509ddae-8d30-4bbf-95d8-75210adaed1f name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.609674183Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7df3de8a7b886c85dce3217b35e727954687c940b4380928920c88153ed8dbd4,PodSandboxId:fd238aaa7e15d3041ac4c8d8e252e24447c4fe2fba89b7565bcb9c548048b256,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1711995111153039188,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-g9f4q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c9531d7-9538-4c0a-a22b-82e5d3918495,},Annotations:map[string]string{io.kubernetes.container.hash: 198a8243,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e95587cc2e5f9ada2b4bd11d273faffb12f06f4763cd6fe989142a67ef0b0ead,PodSandboxId:f6ee23d762b03ae1dff138e931ed1f7d5e980c7e41fce854828b5a789324412c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1711994968719301131,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ae675593-97ae-4a01-8cae-475396963c4b,},Annotations:map[string]string{io.kubern
etes.container.hash: e3d73d9a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e289602215f62c9630661f30f4246629f97515a923e97514e7f8c9b602990dc1,PodSandboxId:db5763a0603c72e723a8d56d26ccda479621fec8f20b7a82d39bde48b658fc83,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1711994962737017578,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-ssqx5,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 71fe80a0-1f83-4f16-908c-b2bf00b585ee,},Annotations:map[string]string{io.kubernetes.container.hash: 7745f5f4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0125f0c6d4aacc178dbf9901b3023e0afc815152e540ad3c84a6083e43f9abca,PodSandboxId:33fad9b23a3f122c379da83a962230efb2751641f8e4c73312e7c3281d027f32,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1711994931706237294,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-bhk6q,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 2259bc97-3726-4970-8f34-e0b2e0465e3e,},Annotations:map[string]string{io.kubernetes.container.hash: debf27c3,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:098753116b336981b4c319ce089d6002b6770f52522172ddce33407332186c6a,PodSandboxId:0fde13a63c3c832b04d8a7c08b07f4df712a102feca2acd09788b583d5ad2948,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1711994908630186614,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wf88x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f9e23dd5-de9a-4127-98c8-7095ea4a801f,},Annotations:map[string]string{io.kubernetes.container.hash: 68a3b0c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9b111e671c93c26ca5599786b77e68584d0332fd237a33007f027e1aa910ce,PodSandboxId:23b05486602dd4faff956afd2dc7aec978561f2cf08c38b42ab85c3aed6582d2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:
1711994908506936788,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-82sh9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f780133b-a5c0-4eab-8f19-bd1181b15957,},Annotations:map[string]string{io.kubernetes.container.hash: 31ab90cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c738a051f99c97d24e83233b7777445dfab452e800c256a6652be42d3476feca,PodSandboxId:84c4fd4e30f718a6e6d401a1ea8f368c28e48c70076adae7c696f2e63357f8da,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1711994904129082239,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-n4pp4,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 85d661ab-6d0c-4c5d-80d7-5e87e8e096b0,},Annotations:map[string]string{io.kubernetes.container.hash: b71814d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d02f777dc4fb5d30481463a72a7f1514d457c51769db6f34c335fdc610985307,PodSandboxId:5af7e12bb8286138ae2697e329336eb97c6a67530e5f0a42b7a6d7e73847d235,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:538fb31f832e76c93f10035cb609c56fc5cd18b3cd85a3ba50699572c3c5dc50,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:1a9bd6f561b5c8cb73e4847b4f8044ef2d44a79008ee4cc46d71a87bbbebce32,State:CONTAINER_RUNNING,CreatedAt:1711994879975825975,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5446596998-pvd79,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5144be1b-5f2f-4db7-8c66-bb679aa31a3f,},Annotations:map[string]string{io.kubernetes.container.hash: fb58b090,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d879594bec103909d539395a08207a09bcebce1a01b59adb744f55f6fc38269c,PodSandboxId:5802baa7237fc28883b3905cb7db5e7e518fc3198235a21327ba38e3a7d10928,Metadata:&ContainerM
etadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711994856582826816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d770fd8-541f-4ea5-bbff-8bdba366a91b,},Annotations:map[string]string{io.kubernetes.container.hash: 314cca10,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7592332265d29603d706dd8ae6215012ab699095b4cb81b5a287cb3566a87f87,PodSandboxId:ecf893481468496183e00c27b0928aff583b346c96cef194ccdda81157cbec21,Metadata:&ContainerMetadata{Name
:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711994851527592538,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7fhsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e044680-92e0-46d9-aa37-6e95b606d9c6,},Annotations:map[string]string{io.kubernetes.container.hash: befc28bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 3
0,},},&Container{Id:4364158240fbf7e504278f6465f4ca09aafa1f1add53cc175f8dfe119fce1326,PodSandboxId:8558ddf14bd58e21c04f7531200a71a09b899326b0d4218e33fd11d86c736cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711994849753251261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fz2ml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6263627a-2781-45c7-b2a4-b06ab6c04879,},Annotations:map[string]string{io.kubernetes.container.hash: 98de96da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac45958565c5a
ab2fa2b8390aeaf778faac10f25c756cb29e10b4afbcd107bd5,PodSandboxId:eb20eea5d33ff49ae7e9b03022f9891cf96444063f21b85bdf9b424fe286dc03,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711994830505372202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e438293d0084e7b4bf6faae6a01bf5d8,},Annotations:map[string]string{io.kubernetes.container.hash: 36f5a6fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2d54581a0ef573e289f02ca7ad4f3eeb8b3f9014afdc78a9569a0c254bc
fb09,PodSandboxId:10409e55554a74bc18e3debddb4d100156f8c514ce7ad47ce71cb8cbe26b42cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711994830453287289,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fee49c26fd8f5049e6dcf4449cacb5b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03fb53a7e5f85c59443d18637bfcbf0ffa22527f75cc75a784
5f585f87ee236d,PodSandboxId:4739cf05a2d89e506b62a362966c43cde11675026153960c48f79f290a804a94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711994830448525577,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a2c1dd6e026812c08404e38be364fa4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd6fdf952e5501e85339e87407a72c5550cda3b57b1bdca9f53b58f499f8b941,Po
dSandboxId:f34902d274111dc89384043635a3c135d86fe98a2df83385c9a9c456769aaff6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711994830419294201,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37fe97e449b1812962375c600235bf53,},Annotations:map[string]string{io.kubernetes.container.hash: e4d7eaf4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7509ddae-8d30-4bbf-95d8-75210adaed1f name=/runtim
e.v1.RuntimeService/ListContainers
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.653623676Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1a248e6-6c88-4dc1-933b-ecf2154abeae name=/runtime.v1.RuntimeService/Version
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.653773306Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1a248e6-6c88-4dc1-933b-ecf2154abeae name=/runtime.v1.RuntimeService/Version
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.656543539Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=77c8c8c9-f100-43c2-b078-93624c08b650 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.660047286Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711995119659944230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:571855,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=77c8c8c9-f100-43c2-b078-93624c08b650 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.662990766Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e91ab6f1-8fc7-4ebe-9613-c1a5c2c851a4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.663071616Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e91ab6f1-8fc7-4ebe-9613-c1a5c2c851a4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.664140988Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7df3de8a7b886c85dce3217b35e727954687c940b4380928920c88153ed8dbd4,PodSandboxId:fd238aaa7e15d3041ac4c8d8e252e24447c4fe2fba89b7565bcb9c548048b256,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1711995111153039188,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-g9f4q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c9531d7-9538-4c0a-a22b-82e5d3918495,},Annotations:map[string]string{io.kubernetes.container.hash: 198a8243,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e95587cc2e5f9ada2b4bd11d273faffb12f06f4763cd6fe989142a67ef0b0ead,PodSandboxId:f6ee23d762b03ae1dff138e931ed1f7d5e980c7e41fce854828b5a789324412c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1711994968719301131,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ae675593-97ae-4a01-8cae-475396963c4b,},Annotations:map[string]string{io.kubern
etes.container.hash: e3d73d9a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e289602215f62c9630661f30f4246629f97515a923e97514e7f8c9b602990dc1,PodSandboxId:db5763a0603c72e723a8d56d26ccda479621fec8f20b7a82d39bde48b658fc83,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1711994962737017578,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-ssqx5,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 71fe80a0-1f83-4f16-908c-b2bf00b585ee,},Annotations:map[string]string{io.kubernetes.container.hash: 7745f5f4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0125f0c6d4aacc178dbf9901b3023e0afc815152e540ad3c84a6083e43f9abca,PodSandboxId:33fad9b23a3f122c379da83a962230efb2751641f8e4c73312e7c3281d027f32,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1711994931706237294,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-bhk6q,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 2259bc97-3726-4970-8f34-e0b2e0465e3e,},Annotations:map[string]string{io.kubernetes.container.hash: debf27c3,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:098753116b336981b4c319ce089d6002b6770f52522172ddce33407332186c6a,PodSandboxId:0fde13a63c3c832b04d8a7c08b07f4df712a102feca2acd09788b583d5ad2948,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1711994908630186614,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wf88x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f9e23dd5-de9a-4127-98c8-7095ea4a801f,},Annotations:map[string]string{io.kubernetes.container.hash: 68a3b0c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9b111e671c93c26ca5599786b77e68584d0332fd237a33007f027e1aa910ce,PodSandboxId:23b05486602dd4faff956afd2dc7aec978561f2cf08c38b42ab85c3aed6582d2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:
1711994908506936788,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-82sh9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f780133b-a5c0-4eab-8f19-bd1181b15957,},Annotations:map[string]string{io.kubernetes.container.hash: 31ab90cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c738a051f99c97d24e83233b7777445dfab452e800c256a6652be42d3476feca,PodSandboxId:84c4fd4e30f718a6e6d401a1ea8f368c28e48c70076adae7c696f2e63357f8da,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1711994904129082239,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-n4pp4,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 85d661ab-6d0c-4c5d-80d7-5e87e8e096b0,},Annotations:map[string]string{io.kubernetes.container.hash: b71814d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d02f777dc4fb5d30481463a72a7f1514d457c51769db6f34c335fdc610985307,PodSandboxId:5af7e12bb8286138ae2697e329336eb97c6a67530e5f0a42b7a6d7e73847d235,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:538fb31f832e76c93f10035cb609c56fc5cd18b3cd85a3ba50699572c3c5dc50,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:1a9bd6f561b5c8cb73e4847b4f8044ef2d44a79008ee4cc46d71a87bbbebce32,State:CONTAINER_RUNNING,CreatedAt:1711994879975825975,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5446596998-pvd79,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5144be1b-5f2f-4db7-8c66-bb679aa31a3f,},Annotations:map[string]string{io.kubernetes.container.hash: fb58b090,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d879594bec103909d539395a08207a09bcebce1a01b59adb744f55f6fc38269c,PodSandboxId:5802baa7237fc28883b3905cb7db5e7e518fc3198235a21327ba38e3a7d10928,Metadata:&ContainerM
etadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711994856582826816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d770fd8-541f-4ea5-bbff-8bdba366a91b,},Annotations:map[string]string{io.kubernetes.container.hash: 314cca10,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7592332265d29603d706dd8ae6215012ab699095b4cb81b5a287cb3566a87f87,PodSandboxId:ecf893481468496183e00c27b0928aff583b346c96cef194ccdda81157cbec21,Metadata:&ContainerMetadata{Name
:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711994851527592538,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7fhsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e044680-92e0-46d9-aa37-6e95b606d9c6,},Annotations:map[string]string{io.kubernetes.container.hash: befc28bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 3
0,},},&Container{Id:4364158240fbf7e504278f6465f4ca09aafa1f1add53cc175f8dfe119fce1326,PodSandboxId:8558ddf14bd58e21c04f7531200a71a09b899326b0d4218e33fd11d86c736cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711994849753251261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fz2ml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6263627a-2781-45c7-b2a4-b06ab6c04879,},Annotations:map[string]string{io.kubernetes.container.hash: 98de96da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac45958565c5a
ab2fa2b8390aeaf778faac10f25c756cb29e10b4afbcd107bd5,PodSandboxId:eb20eea5d33ff49ae7e9b03022f9891cf96444063f21b85bdf9b424fe286dc03,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711994830505372202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e438293d0084e7b4bf6faae6a01bf5d8,},Annotations:map[string]string{io.kubernetes.container.hash: 36f5a6fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2d54581a0ef573e289f02ca7ad4f3eeb8b3f9014afdc78a9569a0c254bc
fb09,PodSandboxId:10409e55554a74bc18e3debddb4d100156f8c514ce7ad47ce71cb8cbe26b42cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711994830453287289,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fee49c26fd8f5049e6dcf4449cacb5b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03fb53a7e5f85c59443d18637bfcbf0ffa22527f75cc75a784
5f585f87ee236d,PodSandboxId:4739cf05a2d89e506b62a362966c43cde11675026153960c48f79f290a804a94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711994830448525577,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a2c1dd6e026812c08404e38be364fa4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd6fdf952e5501e85339e87407a72c5550cda3b57b1bdca9f53b58f499f8b941,Po
dSandboxId:f34902d274111dc89384043635a3c135d86fe98a2df83385c9a9c456769aaff6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711994830419294201,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37fe97e449b1812962375c600235bf53,},Annotations:map[string]string{io.kubernetes.container.hash: e4d7eaf4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e91ab6f1-8fc7-4ebe-9613-c1a5c2c851a4 name=/runtim
e.v1.RuntimeService/ListContainers
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.704980572Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=74a357a4-65aa-41b1-8f26-981a6388e462 name=/runtime.v1.RuntimeService/Version
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.705082918Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=74a357a4-65aa-41b1-8f26-981a6388e462 name=/runtime.v1.RuntimeService/Version
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.706531528Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45532c90-e199-46dd-95f3-22462d83eb2d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.707898225Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711995119707871558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:571855,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45532c90-e199-46dd-95f3-22462d83eb2d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.708472508Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a0567b6-cb19-4cc4-a1c3-f15ef8e6f67a name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.708529740Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a0567b6-cb19-4cc4-a1c3-f15ef8e6f67a name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.709119591Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7df3de8a7b886c85dce3217b35e727954687c940b4380928920c88153ed8dbd4,PodSandboxId:fd238aaa7e15d3041ac4c8d8e252e24447c4fe2fba89b7565bcb9c548048b256,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1711995111153039188,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-g9f4q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c9531d7-9538-4c0a-a22b-82e5d3918495,},Annotations:map[string]string{io.kubernetes.container.hash: 198a8243,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e95587cc2e5f9ada2b4bd11d273faffb12f06f4763cd6fe989142a67ef0b0ead,PodSandboxId:f6ee23d762b03ae1dff138e931ed1f7d5e980c7e41fce854828b5a789324412c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1711994968719301131,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ae675593-97ae-4a01-8cae-475396963c4b,},Annotations:map[string]string{io.kubern
etes.container.hash: e3d73d9a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e289602215f62c9630661f30f4246629f97515a923e97514e7f8c9b602990dc1,PodSandboxId:db5763a0603c72e723a8d56d26ccda479621fec8f20b7a82d39bde48b658fc83,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1711994962737017578,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-ssqx5,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 71fe80a0-1f83-4f16-908c-b2bf00b585ee,},Annotations:map[string]string{io.kubernetes.container.hash: 7745f5f4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0125f0c6d4aacc178dbf9901b3023e0afc815152e540ad3c84a6083e43f9abca,PodSandboxId:33fad9b23a3f122c379da83a962230efb2751641f8e4c73312e7c3281d027f32,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1711994931706237294,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-bhk6q,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 2259bc97-3726-4970-8f34-e0b2e0465e3e,},Annotations:map[string]string{io.kubernetes.container.hash: debf27c3,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:098753116b336981b4c319ce089d6002b6770f52522172ddce33407332186c6a,PodSandboxId:0fde13a63c3c832b04d8a7c08b07f4df712a102feca2acd09788b583d5ad2948,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1711994908630186614,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wf88x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f9e23dd5-de9a-4127-98c8-7095ea4a801f,},Annotations:map[string]string{io.kubernetes.container.hash: 68a3b0c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9b111e671c93c26ca5599786b77e68584d0332fd237a33007f027e1aa910ce,PodSandboxId:23b05486602dd4faff956afd2dc7aec978561f2cf08c38b42ab85c3aed6582d2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:
1711994908506936788,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-82sh9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f780133b-a5c0-4eab-8f19-bd1181b15957,},Annotations:map[string]string{io.kubernetes.container.hash: 31ab90cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c738a051f99c97d24e83233b7777445dfab452e800c256a6652be42d3476feca,PodSandboxId:84c4fd4e30f718a6e6d401a1ea8f368c28e48c70076adae7c696f2e63357f8da,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1711994904129082239,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-n4pp4,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 85d661ab-6d0c-4c5d-80d7-5e87e8e096b0,},Annotations:map[string]string{io.kubernetes.container.hash: b71814d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d02f777dc4fb5d30481463a72a7f1514d457c51769db6f34c335fdc610985307,PodSandboxId:5af7e12bb8286138ae2697e329336eb97c6a67530e5f0a42b7a6d7e73847d235,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:538fb31f832e76c93f10035cb609c56fc5cd18b3cd85a3ba50699572c3c5dc50,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:1a9bd6f561b5c8cb73e4847b4f8044ef2d44a79008ee4cc46d71a87bbbebce32,State:CONTAINER_RUNNING,CreatedAt:1711994879975825975,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5446596998-pvd79,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5144be1b-5f2f-4db7-8c66-bb679aa31a3f,},Annotations:map[string]string{io.kubernetes.container.hash: fb58b090,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d879594bec103909d539395a08207a09bcebce1a01b59adb744f55f6fc38269c,PodSandboxId:5802baa7237fc28883b3905cb7db5e7e518fc3198235a21327ba38e3a7d10928,Metadata:&ContainerM
etadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711994856582826816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d770fd8-541f-4ea5-bbff-8bdba366a91b,},Annotations:map[string]string{io.kubernetes.container.hash: 314cca10,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7592332265d29603d706dd8ae6215012ab699095b4cb81b5a287cb3566a87f87,PodSandboxId:ecf893481468496183e00c27b0928aff583b346c96cef194ccdda81157cbec21,Metadata:&ContainerMetadata{Name
:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711994851527592538,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7fhsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e044680-92e0-46d9-aa37-6e95b606d9c6,},Annotations:map[string]string{io.kubernetes.container.hash: befc28bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 3
0,},},&Container{Id:4364158240fbf7e504278f6465f4ca09aafa1f1add53cc175f8dfe119fce1326,PodSandboxId:8558ddf14bd58e21c04f7531200a71a09b899326b0d4218e33fd11d86c736cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711994849753251261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fz2ml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6263627a-2781-45c7-b2a4-b06ab6c04879,},Annotations:map[string]string{io.kubernetes.container.hash: 98de96da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac45958565c5a
ab2fa2b8390aeaf778faac10f25c756cb29e10b4afbcd107bd5,PodSandboxId:eb20eea5d33ff49ae7e9b03022f9891cf96444063f21b85bdf9b424fe286dc03,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711994830505372202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e438293d0084e7b4bf6faae6a01bf5d8,},Annotations:map[string]string{io.kubernetes.container.hash: 36f5a6fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2d54581a0ef573e289f02ca7ad4f3eeb8b3f9014afdc78a9569a0c254bc
fb09,PodSandboxId:10409e55554a74bc18e3debddb4d100156f8c514ce7ad47ce71cb8cbe26b42cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711994830453287289,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fee49c26fd8f5049e6dcf4449cacb5b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03fb53a7e5f85c59443d18637bfcbf0ffa22527f75cc75a784
5f585f87ee236d,PodSandboxId:4739cf05a2d89e506b62a362966c43cde11675026153960c48f79f290a804a94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711994830448525577,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a2c1dd6e026812c08404e38be364fa4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd6fdf952e5501e85339e87407a72c5550cda3b57b1bdca9f53b58f499f8b941,Po
dSandboxId:f34902d274111dc89384043635a3c135d86fe98a2df83385c9a9c456769aaff6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711994830419294201,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37fe97e449b1812962375c600235bf53,},Annotations:map[string]string{io.kubernetes.container.hash: e4d7eaf4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a0567b6-cb19-4cc4-a1c3-f15ef8e6f67a name=/runtim
e.v1.RuntimeService/ListContainers
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.749622712Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4300db1c-47a1-4cc0-a957-92e9c538e667 name=/runtime.v1.RuntimeService/Version
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.749786794Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4300db1c-47a1-4cc0-a957-92e9c538e667 name=/runtime.v1.RuntimeService/Version
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.751518286Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fddbf68d-fd79-4786-b132-7c92de5adf91 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.753138720Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711995119753114837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:571855,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fddbf68d-fd79-4786-b132-7c92de5adf91 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.753838900Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=beda80dc-1242-43d5-b839-dc147699b124 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.753922548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=beda80dc-1242-43d5-b839-dc147699b124 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:11:59 addons-881427 crio[687]: time="2024-04-01 18:11:59.754249769Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7df3de8a7b886c85dce3217b35e727954687c940b4380928920c88153ed8dbd4,PodSandboxId:fd238aaa7e15d3041ac4c8d8e252e24447c4fe2fba89b7565bcb9c548048b256,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1711995111153039188,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-g9f4q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c9531d7-9538-4c0a-a22b-82e5d3918495,},Annotations:map[string]string{io.kubernetes.container.hash: 198a8243,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e95587cc2e5f9ada2b4bd11d273faffb12f06f4763cd6fe989142a67ef0b0ead,PodSandboxId:f6ee23d762b03ae1dff138e931ed1f7d5e980c7e41fce854828b5a789324412c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1711994968719301131,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ae675593-97ae-4a01-8cae-475396963c4b,},Annotations:map[string]string{io.kubern
etes.container.hash: e3d73d9a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e289602215f62c9630661f30f4246629f97515a923e97514e7f8c9b602990dc1,PodSandboxId:db5763a0603c72e723a8d56d26ccda479621fec8f20b7a82d39bde48b658fc83,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1711994962737017578,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-ssqx5,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 71fe80a0-1f83-4f16-908c-b2bf00b585ee,},Annotations:map[string]string{io.kubernetes.container.hash: 7745f5f4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0125f0c6d4aacc178dbf9901b3023e0afc815152e540ad3c84a6083e43f9abca,PodSandboxId:33fad9b23a3f122c379da83a962230efb2751641f8e4c73312e7c3281d027f32,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1711994931706237294,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-bhk6q,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 2259bc97-3726-4970-8f34-e0b2e0465e3e,},Annotations:map[string]string{io.kubernetes.container.hash: debf27c3,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:098753116b336981b4c319ce089d6002b6770f52522172ddce33407332186c6a,PodSandboxId:0fde13a63c3c832b04d8a7c08b07f4df712a102feca2acd09788b583d5ad2948,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1711994908630186614,Labels:map[strin
g]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wf88x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f9e23dd5-de9a-4127-98c8-7095ea4a801f,},Annotations:map[string]string{io.kubernetes.container.hash: 68a3b0c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9b111e671c93c26ca5599786b77e68584d0332fd237a33007f027e1aa910ce,PodSandboxId:23b05486602dd4faff956afd2dc7aec978561f2cf08c38b42ab85c3aed6582d2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:
1711994908506936788,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-82sh9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f780133b-a5c0-4eab-8f19-bd1181b15957,},Annotations:map[string]string{io.kubernetes.container.hash: 31ab90cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c738a051f99c97d24e83233b7777445dfab452e800c256a6652be42d3476feca,PodSandboxId:84c4fd4e30f718a6e6d401a1ea8f368c28e48c70076adae7c696f2e63357f8da,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,
CreatedAt:1711994904129082239,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-n4pp4,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 85d661ab-6d0c-4c5d-80d7-5e87e8e096b0,},Annotations:map[string]string{io.kubernetes.container.hash: b71814d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d02f777dc4fb5d30481463a72a7f1514d457c51769db6f34c335fdc610985307,PodSandboxId:5af7e12bb8286138ae2697e329336eb97c6a67530e5f0a42b7a6d7e73847d235,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:538fb31f832e76c93f10035cb609c56fc5cd18b3cd85a3ba50699572c3c5dc50,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:1a9bd6f561b5c8cb73e4847b4f8044ef2d44a79008ee4cc46d71a87bbbebce32,State:CONTAINER_RUNNING,CreatedAt:1711994879975825975,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5446596998-pvd79,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5144be1b-5f2f-4db7-8c66-bb679aa31a3f,},Annotations:map[string]string{io.kubernetes.container.hash: fb58b090,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d879594bec103909d539395a08207a09bcebce1a01b59adb744f55f6fc38269c,PodSandboxId:5802baa7237fc28883b3905cb7db5e7e518fc3198235a21327ba38e3a7d10928,Metadata:&ContainerM
etadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711994856582826816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d770fd8-541f-4ea5-bbff-8bdba366a91b,},Annotations:map[string]string{io.kubernetes.container.hash: 314cca10,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7592332265d29603d706dd8ae6215012ab699095b4cb81b5a287cb3566a87f87,PodSandboxId:ecf893481468496183e00c27b0928aff583b346c96cef194ccdda81157cbec21,Metadata:&ContainerMetadata{Name
:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711994851527592538,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7fhsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e044680-92e0-46d9-aa37-6e95b606d9c6,},Annotations:map[string]string{io.kubernetes.container.hash: befc28bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 3
0,},},&Container{Id:4364158240fbf7e504278f6465f4ca09aafa1f1add53cc175f8dfe119fce1326,PodSandboxId:8558ddf14bd58e21c04f7531200a71a09b899326b0d4218e33fd11d86c736cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711994849753251261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fz2ml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6263627a-2781-45c7-b2a4-b06ab6c04879,},Annotations:map[string]string{io.kubernetes.container.hash: 98de96da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac45958565c5a
ab2fa2b8390aeaf778faac10f25c756cb29e10b4afbcd107bd5,PodSandboxId:eb20eea5d33ff49ae7e9b03022f9891cf96444063f21b85bdf9b424fe286dc03,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711994830505372202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e438293d0084e7b4bf6faae6a01bf5d8,},Annotations:map[string]string{io.kubernetes.container.hash: 36f5a6fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2d54581a0ef573e289f02ca7ad4f3eeb8b3f9014afdc78a9569a0c254bc
fb09,PodSandboxId:10409e55554a74bc18e3debddb4d100156f8c514ce7ad47ce71cb8cbe26b42cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711994830453287289,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fee49c26fd8f5049e6dcf4449cacb5b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03fb53a7e5f85c59443d18637bfcbf0ffa22527f75cc75a784
5f585f87ee236d,PodSandboxId:4739cf05a2d89e506b62a362966c43cde11675026153960c48f79f290a804a94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711994830448525577,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a2c1dd6e026812c08404e38be364fa4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd6fdf952e5501e85339e87407a72c5550cda3b57b1bdca9f53b58f499f8b941,Po
dSandboxId:f34902d274111dc89384043635a3c135d86fe98a2df83385c9a9c456769aaff6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711994830419294201,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37fe97e449b1812962375c600235bf53,},Annotations:map[string]string{io.kubernetes.container.hash: e4d7eaf4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=beda80dc-1242-43d5-b839-dc147699b124 name=/runtim
e.v1.RuntimeService/ListContainers
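
	The Version, ImageFsInfo and ListContainers requests logged above are CRI gRPC calls arriving at cri-o over its local socket; the kubelet issues the same calls during its periodic relisting, and crictl wraps them as well. A minimal sketch of reproducing the container listing by hand on the node, assuming crictl is installed and cri-o listens on the socket named in the node's kubeadm cri-socket annotation:

	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a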
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7df3de8a7b886       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   fd238aaa7e15d       hello-world-app-5d77478584-g9f4q
	e95587cc2e5f9       docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742                              2 minutes ago       Running             nginx                     0                   f6ee23d762b03       nginx
	e289602215f62       ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f                        2 minutes ago       Running             headlamp                  0                   db5763a0603c7       headlamp-5b77dbd7c4-ssqx5
	0125f0c6d4aac       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 3 minutes ago       Running             gcp-auth                  0                   33fad9b23a3f1       gcp-auth-7d69788767-bhk6q
	098753116b336       b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135                                                             3 minutes ago       Exited              patch                     1                   0fde13a63c3c8       ingress-nginx-admission-patch-wf88x
	2f9b111e671c9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago       Exited              create                    0                   23b05486602dd       ingress-nginx-admission-create-82sh9
	c738a051f99c9       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   84c4fd4e30f71       yakd-dashboard-9947fc6bf-n4pp4
	d02f777dc4fb5       gcr.io/cloud-spanner-emulator/emulator@sha256:538fb31f832e76c93f10035cb609c56fc5cd18b3cd85a3ba50699572c3c5dc50               3 minutes ago       Running             cloud-spanner-emulator    0                   5af7e12bb8286       cloud-spanner-emulator-5446596998-pvd79
	d879594bec103       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   5802baa7237fc       storage-provisioner
	7592332265d29       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   ecf8934814684       coredns-76f75df574-7fhsg
	4364158240fbf       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                                             4 minutes ago       Running             kube-proxy                0                   8558ddf14bd58       kube-proxy-fz2ml
	ac45958565c5a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             4 minutes ago       Running             etcd                      0                   eb20eea5d33ff       etcd-addons-881427
	c2d54581a0ef5       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                                             4 minutes ago       Running             kube-controller-manager   0                   10409e55554a7       kube-controller-manager-addons-881427
	03fb53a7e5f85       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                                             4 minutes ago       Running             kube-scheduler            0                   4739cf05a2d89       kube-scheduler-addons-881427
	bd6fdf952e550       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                                             4 minutes ago       Running             kube-apiserver            0                   f34902d274111       kube-apiserver-addons-881427
	
	
	==> coredns [7592332265d29603d706dd8ae6215012ab699095b4cb81b5a287cb3566a87f87] <==
	[INFO] 10.244.0.8:35944 - 58706 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000054988s
	[INFO] 10.244.0.8:55356 - 50083 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000042495s
	[INFO] 10.244.0.8:55356 - 9150 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000033261s
	[INFO] 10.244.0.8:52077 - 63713 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000035099s
	[INFO] 10.244.0.8:52077 - 62179 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000050102s
	[INFO] 10.244.0.8:47439 - 4739 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000046024s
	[INFO] 10.244.0.8:47439 - 64190 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000077618s
	[INFO] 10.244.0.8:39389 - 16057 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000167273s
	[INFO] 10.244.0.8:39389 - 1972 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000037108s
	[INFO] 10.244.0.8:54310 - 63209 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000132577s
	[INFO] 10.244.0.8:54310 - 37871 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00006181s
	[INFO] 10.244.0.8:45038 - 47355 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000186191s
	[INFO] 10.244.0.8:45038 - 47865 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00015532s
	[INFO] 10.244.0.8:59837 - 12201 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000119047s
	[INFO] 10.244.0.8:59837 - 37047 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00037651s
	[INFO] 10.244.0.22:60318 - 31283 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000235588s
	[INFO] 10.244.0.22:44948 - 39610 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00009288s
	[INFO] 10.244.0.22:43484 - 19852 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000091583s
	[INFO] 10.244.0.22:52045 - 35786 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.005156998s
	[INFO] 10.244.0.22:42144 - 25424 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000110329s
	[INFO] 10.244.0.22:55856 - 39525 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00035729s
	[INFO] 10.244.0.22:58614 - 52153 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001151698s
	[INFO] 10.244.0.22:43549 - 63757 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 420 0.002544635s
	[INFO] 10.244.0.25:48343 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000259044s
	[INFO] 10.244.0.25:36941 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000109538s
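
	The NXDOMAIN/NOERROR pairs above are ordinary resolv.conf search-path expansion: the queried name has fewer dots than the ndots threshold, so each search suffix from the querying pod's namespace is tried first (the NXDOMAIN answers) before the name is resolved as-is (the NOERROR answers). An illustrative pod resolv.conf under the default ClusterFirst DNS policy is sketched below; the nameserver address is an assumption, not taken from this run:

	    search kube-system.svc.cluster.local svc.cluster.local cluster.local
	    nameserver 10.96.0.10
	    options ndots:5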
	
	
	==> describe nodes <==
	Name:               addons-881427
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-881427
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=addons-881427
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_01T18_07_16_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-881427
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 18:07:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-881427
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 18:11:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 18:09:50 +0000   Mon, 01 Apr 2024 18:07:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 18:09:50 +0000   Mon, 01 Apr 2024 18:07:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 18:09:50 +0000   Mon, 01 Apr 2024 18:07:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 18:09:50 +0000   Mon, 01 Apr 2024 18:07:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    addons-881427
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 13a0cedeceb1427eafc5b915b829cb6d
	  System UUID:                13a0cede-ceb1-427e-afc5-b915b829cb6d
	  Boot ID:                    63021f69-00f1-4f7e-867e-931aaaef5107
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5446596998-pvd79    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  default                     hello-world-app-5d77478584-g9f4q           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gcp-auth                    gcp-auth-7d69788767-bhk6q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  headlamp                    headlamp-5b77dbd7c4-ssqx5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 coredns-76f75df574-7fhsg                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m31s
	  kube-system                 etcd-addons-881427                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m44s
	  kube-system                 kube-apiserver-addons-881427               250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-controller-manager-addons-881427      200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-proxy-fz2ml                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-scheduler-addons-881427               100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-n4pp4             0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m29s  kube-proxy       
	  Normal  Starting                 4m44s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m44s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m44s  kubelet          Node addons-881427 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m44s  kubelet          Node addons-881427 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m44s  kubelet          Node addons-881427 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m43s  kubelet          Node addons-881427 status is now: NodeReady
	  Normal  RegisteredNode           4m32s  node-controller  Node addons-881427 event: Registered Node addons-881427 in Controller
	
	
	==> dmesg <==
	[  +0.101044] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.851326] systemd-fstab-generator[1511]: Ignoring "noauto" option for root device
	[  +0.008371] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.711170] kauditd_printk_skb: 92 callbacks suppressed
	[  +5.025776] kauditd_printk_skb: 122 callbacks suppressed
	[  +6.622733] kauditd_printk_skb: 46 callbacks suppressed
	[  +8.672226] kauditd_printk_skb: 19 callbacks suppressed
	[Apr 1 18:08] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.074093] kauditd_printk_skb: 7 callbacks suppressed
	[ +11.791362] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.759552] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.082966] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.109031] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.096641] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.230632] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.067503] kauditd_printk_skb: 19 callbacks suppressed
	[Apr 1 18:09] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.370253] kauditd_printk_skb: 41 callbacks suppressed
	[  +7.442438] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.259348] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.130926] kauditd_printk_skb: 19 callbacks suppressed
	[ +11.731613] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.348446] kauditd_printk_skb: 31 callbacks suppressed
	[Apr 1 18:11] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.003310] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [ac45958565c5aab2fa2b8390aeaf778faac10f25c756cb29e10b4afbcd107bd5] <==
	{"level":"info","ts":"2024-04-01T18:08:32.154277Z","caller":"traceutil/trace.go:171","msg":"trace[2096374394] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:0; response_revision:1028; }","duration":"118.632393ms","start":"2024-04-01T18:08:32.03564Z","end":"2024-04-01T18:08:32.154272Z","steps":["trace[2096374394] 'agreement among raft nodes before linearized reading'  (duration: 116.534231ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T18:08:32.152591Z","caller":"traceutil/trace.go:171","msg":"trace[1510734342] transaction","detail":"{read_only:false; response_revision:1028; number_of_response:1; }","duration":"120.023509ms","start":"2024-04-01T18:08:32.032558Z","end":"2024-04-01T18:08:32.152581Z","steps":["trace[1510734342] 'process raft request'  (duration: 117.414335ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T18:08:32.153288Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.111609ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85321"}
	{"level":"info","ts":"2024-04-01T18:08:32.155574Z","caller":"traceutil/trace.go:171","msg":"trace[1412238551] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1028; }","duration":"117.416552ms","start":"2024-04-01T18:08:32.038148Z","end":"2024-04-01T18:08:32.155565Z","steps":["trace[1412238551] 'agreement among raft nodes before linearized reading'  (duration: 114.648979ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T18:08:46.708575Z","caller":"traceutil/trace.go:171","msg":"trace[1833287120] linearizableReadLoop","detail":"{readStateIndex:1163; appliedIndex:1162; }","duration":"452.511698ms","start":"2024-04-01T18:08:46.256051Z","end":"2024-04-01T18:08:46.708562Z","steps":["trace[1833287120] 'read index received'  (duration: 452.38851ms)","trace[1833287120] 'applied index is now lower than readState.Index'  (duration: 122.747µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-01T18:08:46.708968Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"452.901243ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-04-01T18:08:46.709001Z","caller":"traceutil/trace.go:171","msg":"trace[1790539493] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1132; }","duration":"452.946706ms","start":"2024-04-01T18:08:46.256047Z","end":"2024-04-01T18:08:46.708994Z","steps":["trace[1790539493] 'agreement among raft nodes before linearized reading'  (duration: 452.839826ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T18:08:46.70902Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-01T18:08:46.256015Z","time spent":"453.000724ms","remote":"127.0.0.1:50404","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11476,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"info","ts":"2024-04-01T18:08:46.709207Z","caller":"traceutil/trace.go:171","msg":"trace[745578692] transaction","detail":"{read_only:false; response_revision:1132; number_of_response:1; }","duration":"466.920783ms","start":"2024-04-01T18:08:46.242278Z","end":"2024-04-01T18:08:46.709199Z","steps":["trace[745578692] 'process raft request'  (duration: 466.198693ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T18:08:46.708968Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"250.792971ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-04-01T18:08:46.709257Z","caller":"traceutil/trace.go:171","msg":"trace[208484155] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1132; }","duration":"251.089944ms","start":"2024-04-01T18:08:46.458157Z","end":"2024-04-01T18:08:46.709247Z","steps":["trace[208484155] 'agreement among raft nodes before linearized reading'  (duration: 250.697337ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T18:08:46.709259Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-01T18:08:46.242264Z","time spent":"466.957691ms","remote":"127.0.0.1:50480","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":482,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1114 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:419 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-04-01T18:08:49.200277Z","caller":"traceutil/trace.go:171","msg":"trace[871224223] transaction","detail":"{read_only:false; response_revision:1136; number_of_response:1; }","duration":"147.845564ms","start":"2024-04-01T18:08:49.052411Z","end":"2024-04-01T18:08:49.200257Z","steps":["trace[871224223] 'process raft request'  (duration: 147.299539ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T18:09:06.803152Z","caller":"traceutil/trace.go:171","msg":"trace[1279358777] linearizableReadLoop","detail":"{readStateIndex:1342; appliedIndex:1341; }","duration":"214.879381ms","start":"2024-04-01T18:09:06.588254Z","end":"2024-04-01T18:09:06.803133Z","steps":["trace[1279358777] 'read index received'  (duration: 214.701702ms)","trace[1279358777] 'applied index is now lower than readState.Index'  (duration: 176.798µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-01T18:09:06.803248Z","caller":"traceutil/trace.go:171","msg":"trace[1034489834] transaction","detail":"{read_only:false; response_revision:1304; number_of_response:1; }","duration":"302.392676ms","start":"2024-04-01T18:09:06.500849Z","end":"2024-04-01T18:09:06.803241Z","steps":["trace[1034489834] 'process raft request'  (duration: 302.147284ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T18:09:06.803336Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-01T18:09:06.500833Z","time spent":"302.432055ms","remote":"127.0.0.1:50480","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":486,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:1196 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:427 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"warn","ts":"2024-04-01T18:09:06.803402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.043415ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/yakd-dashboard/\" range_end:\"/registry/pods/yakd-dashboard0\" ","response":"range_response_count:1 size:4325"}
	{"level":"info","ts":"2024-04-01T18:09:06.803464Z","caller":"traceutil/trace.go:171","msg":"trace[742409902] range","detail":"{range_begin:/registry/pods/yakd-dashboard/; range_end:/registry/pods/yakd-dashboard0; response_count:1; response_revision:1304; }","duration":"140.130642ms","start":"2024-04-01T18:09:06.663316Z","end":"2024-04-01T18:09:06.803447Z","steps":["trace[742409902] 'agreement among raft nodes before linearized reading'  (duration: 140.009579ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T18:09:06.803614Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.891474ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-04-01T18:09:06.803632Z","caller":"traceutil/trace.go:171","msg":"trace[241735526] range","detail":"{range_begin:/registry/csinodes/; range_end:/registry/csinodes0; response_count:0; response_revision:1304; }","duration":"136.986883ms","start":"2024-04-01T18:09:06.66664Z","end":"2024-04-01T18:09:06.803627Z","steps":["trace[241735526] 'agreement among raft nodes before linearized reading'  (duration: 136.949004ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T18:09:06.803637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.384506ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/kube-system/registry\" ","response":"range_response_count:1 size:2820"}
	{"level":"info","ts":"2024-04-01T18:09:06.803657Z","caller":"traceutil/trace.go:171","msg":"trace[278867364] range","detail":"{range_begin:/registry/controllers/kube-system/registry; range_end:; response_count:1; response_revision:1304; }","duration":"215.430599ms","start":"2024-04-01T18:09:06.58822Z","end":"2024-04-01T18:09:06.80365Z","steps":["trace[278867364] 'agreement among raft nodes before linearized reading'  (duration: 215.384476ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T18:09:22.447427Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.352817ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-01T18:09:22.447488Z","caller":"traceutil/trace.go:171","msg":"trace[1158523049] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:1490; }","duration":"207.483126ms","start":"2024-04-01T18:09:22.239991Z","end":"2024-04-01T18:09:22.447474Z","steps":["trace[1158523049] 'count revisions from in-memory index tree'  (duration: 207.301879ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T18:09:53.351516Z","caller":"traceutil/trace.go:171","msg":"trace[1389805816] transaction","detail":"{read_only:false; response_revision:1730; number_of_response:1; }","duration":"157.69148ms","start":"2024-04-01T18:09:53.193793Z","end":"2024-04-01T18:09:53.351485Z","steps":["trace[1389805816] 'process raft request'  (duration: 157.546596ms)"],"step_count":1}
	
	
	==> gcp-auth [0125f0c6d4aacc178dbf9901b3023e0afc815152e540ad3c84a6083e43f9abca] <==
	2024/04/01 18:08:51 GCP Auth Webhook started!
	2024/04/01 18:08:58 Ready to marshal response ...
	2024/04/01 18:08:58 Ready to write response ...
	2024/04/01 18:09:00 Ready to marshal response ...
	2024/04/01 18:09:00 Ready to write response ...
	2024/04/01 18:09:00 Ready to marshal response ...
	2024/04/01 18:09:00 Ready to write response ...
	2024/04/01 18:09:03 Ready to marshal response ...
	2024/04/01 18:09:03 Ready to write response ...
	2024/04/01 18:09:09 Ready to marshal response ...
	2024/04/01 18:09:09 Ready to write response ...
	2024/04/01 18:09:12 Ready to marshal response ...
	2024/04/01 18:09:12 Ready to write response ...
	2024/04/01 18:09:17 Ready to marshal response ...
	2024/04/01 18:09:17 Ready to write response ...
	2024/04/01 18:09:17 Ready to marshal response ...
	2024/04/01 18:09:17 Ready to write response ...
	2024/04/01 18:09:17 Ready to marshal response ...
	2024/04/01 18:09:17 Ready to write response ...
	2024/04/01 18:09:26 Ready to marshal response ...
	2024/04/01 18:09:26 Ready to write response ...
	2024/04/01 18:09:32 Ready to marshal response ...
	2024/04/01 18:09:32 Ready to write response ...
	2024/04/01 18:11:48 Ready to marshal response ...
	2024/04/01 18:11:48 Ready to write response ...
	
	
	==> kernel <==
	 18:12:00 up 5 min,  0 users,  load average: 0.58, 1.18, 0.63
	Linux addons-881427 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bd6fdf952e5501e85339e87407a72c5550cda3b57b1bdca9f53b58f499f8b941] <==
	E0401 18:09:10.713814       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0401 18:09:10.720162       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0401 18:09:10.727257       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0401 18:09:16.976863       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0401 18:09:17.361541       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.170.201"}
	I0401 18:09:20.330086       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0401 18:09:20.591469       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0401 18:09:21.639177       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0401 18:09:25.741398       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0401 18:09:26.149712       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0401 18:09:26.328599       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.47.165"}
	I0401 18:09:48.586317       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0401 18:09:48.589984       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0401 18:09:48.613293       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0401 18:09:48.613364       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0401 18:09:48.621614       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0401 18:09:48.621687       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0401 18:09:48.632197       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0401 18:09:48.632875       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0401 18:09:48.679461       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0401 18:09:48.679524       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0401 18:09:49.622599       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0401 18:09:49.679974       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0401 18:09:49.685379       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0401 18:11:48.643135       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.76.232"}
	
	
	==> kube-controller-manager [c2d54581a0ef573e289f02ca7ad4f3eeb8b3f9014afdc78a9569a0c254bcfb09] <==
	W0401 18:11:00.214394       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0401 18:11:00.214458       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0401 18:11:06.703574       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0401 18:11:06.703691       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0401 18:11:21.847283       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0401 18:11:21.847368       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0401 18:11:25.256612       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0401 18:11:25.256859       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0401 18:11:48.455986       1 event.go:376] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0401 18:11:48.518371       1 event.go:376] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-g9f4q"
	I0401 18:11:48.541416       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="85.07186ms"
	I0401 18:11:48.582097       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="40.49569ms"
	I0401 18:11:48.603555       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="21.381771ms"
	I0401 18:11:48.603943       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="220.623µs"
	I0401 18:11:51.669584       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0401 18:11:51.674988       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="5.826µs"
	I0401 18:11:51.679415       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0401 18:11:51.887424       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="8.514351ms"
	I0401 18:11:51.887835       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="116.038µs"
	W0401 18:11:53.362573       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0401 18:11:53.362704       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0401 18:11:57.876953       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0401 18:11:57.877062       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0401 18:11:58.674527       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0401 18:11:58.674642       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [4364158240fbf7e504278f6465f4ca09aafa1f1add53cc175f8dfe119fce1326] <==
	I0401 18:07:30.526214       1 server_others.go:72] "Using iptables proxy"
	I0401 18:07:30.551484       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.214"]
	I0401 18:07:30.637175       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0401 18:07:30.637193       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 18:07:30.637205       1 server_others.go:168] "Using iptables Proxier"
	I0401 18:07:30.644006       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 18:07:30.644182       1 server.go:865] "Version info" version="v1.29.3"
	I0401 18:07:30.644220       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 18:07:30.647365       1 config.go:188] "Starting service config controller"
	I0401 18:07:30.647413       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0401 18:07:30.647439       1 config.go:97] "Starting endpoint slice config controller"
	I0401 18:07:30.647443       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0401 18:07:30.648204       1 config.go:315] "Starting node config controller"
	I0401 18:07:30.648241       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0401 18:07:30.747919       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0401 18:07:30.748153       1 shared_informer.go:318] Caches are synced for service config
	I0401 18:07:30.748367       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [03fb53a7e5f85c59443d18637bfcbf0ffa22527f75cc75a7845f585f87ee236d] <==
	W0401 18:07:14.082231       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 18:07:14.082321       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0401 18:07:14.100481       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 18:07:14.102890       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0401 18:07:14.137795       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0401 18:07:14.138021       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0401 18:07:14.191511       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 18:07:14.191702       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0401 18:07:14.198218       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 18:07:14.198304       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0401 18:07:14.283781       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 18:07:14.284624       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0401 18:07:14.284580       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 18:07:14.285819       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0401 18:07:14.389013       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 18:07:14.389565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0401 18:07:14.409029       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 18:07:14.409251       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0401 18:07:14.416193       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0401 18:07:14.416258       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0401 18:07:14.448321       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 18:07:14.448617       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0401 18:07:14.671805       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 18:07:14.671935       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0401 18:07:17.572681       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 18:11:48 addons-881427 kubelet[1292]: I0401 18:11:48.533861    1292 memory_manager.go:354] "RemoveStaleState removing state" podUID="56b71b6f-9ddf-43ca-9893-1895d0c71024" containerName="volume-snapshot-controller"
	Apr 01 18:11:48 addons-881427 kubelet[1292]: I0401 18:11:48.533899    1292 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f9b358f-3334-45d6-bf37-8b9d4a5cdf22" containerName="node-driver-registrar"
	Apr 01 18:11:48 addons-881427 kubelet[1292]: I0401 18:11:48.533933    1292 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f9b358f-3334-45d6-bf37-8b9d4a5cdf22" containerName="liveness-probe"
	Apr 01 18:11:48 addons-881427 kubelet[1292]: I0401 18:11:48.533967    1292 memory_manager.go:354] "RemoveStaleState removing state" podUID="f64f7572-e225-467c-ab07-def542d15d28" containerName="csi-attacher"
	Apr 01 18:11:48 addons-881427 kubelet[1292]: I0401 18:11:48.534002    1292 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f9b358f-3334-45d6-bf37-8b9d4a5cdf22" containerName="csi-provisioner"
	Apr 01 18:11:48 addons-881427 kubelet[1292]: I0401 18:11:48.559917    1292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5c9531d7-9538-4c0a-a22b-82e5d3918495-gcp-creds\") pod \"hello-world-app-5d77478584-g9f4q\" (UID: \"5c9531d7-9538-4c0a-a22b-82e5d3918495\") " pod="default/hello-world-app-5d77478584-g9f4q"
	Apr 01 18:11:48 addons-881427 kubelet[1292]: I0401 18:11:48.560617    1292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k8vq\" (UniqueName: \"kubernetes.io/projected/5c9531d7-9538-4c0a-a22b-82e5d3918495-kube-api-access-5k8vq\") pod \"hello-world-app-5d77478584-g9f4q\" (UID: \"5c9531d7-9538-4c0a-a22b-82e5d3918495\") " pod="default/hello-world-app-5d77478584-g9f4q"
	Apr 01 18:11:50 addons-881427 kubelet[1292]: I0401 18:11:50.277854    1292 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxk8p\" (UniqueName: \"kubernetes.io/projected/2f402a8d-9920-4ab9-b8f5-b24ff9528a04-kube-api-access-cxk8p\") pod \"2f402a8d-9920-4ab9-b8f5-b24ff9528a04\" (UID: \"2f402a8d-9920-4ab9-b8f5-b24ff9528a04\") "
	Apr 01 18:11:50 addons-881427 kubelet[1292]: I0401 18:11:50.287003    1292 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f402a8d-9920-4ab9-b8f5-b24ff9528a04-kube-api-access-cxk8p" (OuterVolumeSpecName: "kube-api-access-cxk8p") pod "2f402a8d-9920-4ab9-b8f5-b24ff9528a04" (UID: "2f402a8d-9920-4ab9-b8f5-b24ff9528a04"). InnerVolumeSpecName "kube-api-access-cxk8p". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 01 18:11:50 addons-881427 kubelet[1292]: I0401 18:11:50.378690    1292 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cxk8p\" (UniqueName: \"kubernetes.io/projected/2f402a8d-9920-4ab9-b8f5-b24ff9528a04-kube-api-access-cxk8p\") on node \"addons-881427\" DevicePath \"\""
	Apr 01 18:11:50 addons-881427 kubelet[1292]: I0401 18:11:50.855846    1292 scope.go:117] "RemoveContainer" containerID="754bc383c8378e0117710a75def84081e0a5360ffebec2354063cd37e4fe1f8e"
	Apr 01 18:11:52 addons-881427 kubelet[1292]: I0401 18:11:52.756628    1292 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f402a8d-9920-4ab9-b8f5-b24ff9528a04" path="/var/lib/kubelet/pods/2f402a8d-9920-4ab9-b8f5-b24ff9528a04/volumes"
	Apr 01 18:11:52 addons-881427 kubelet[1292]: I0401 18:11:52.757236    1292 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f780133b-a5c0-4eab-8f19-bd1181b15957" path="/var/lib/kubelet/pods/f780133b-a5c0-4eab-8f19-bd1181b15957/volumes"
	Apr 01 18:11:52 addons-881427 kubelet[1292]: I0401 18:11:52.757656    1292 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9e23dd5-de9a-4127-98c8-7095ea4a801f" path="/var/lib/kubelet/pods/f9e23dd5-de9a-4127-98c8-7095ea4a801f/volumes"
	Apr 01 18:11:54 addons-881427 kubelet[1292]: I0401 18:11:54.892003    1292 scope.go:117] "RemoveContainer" containerID="f8ae3a6404aa433dcedeab0122887f32d6cea3ea80348a8b0f59d3cb90fea2b9"
	Apr 01 18:11:54 addons-881427 kubelet[1292]: I0401 18:11:54.916046    1292 scope.go:117] "RemoveContainer" containerID="f8ae3a6404aa433dcedeab0122887f32d6cea3ea80348a8b0f59d3cb90fea2b9"
	Apr 01 18:11:54 addons-881427 kubelet[1292]: E0401 18:11:54.916692    1292 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8ae3a6404aa433dcedeab0122887f32d6cea3ea80348a8b0f59d3cb90fea2b9\": container with ID starting with f8ae3a6404aa433dcedeab0122887f32d6cea3ea80348a8b0f59d3cb90fea2b9 not found: ID does not exist" containerID="f8ae3a6404aa433dcedeab0122887f32d6cea3ea80348a8b0f59d3cb90fea2b9"
	Apr 01 18:11:54 addons-881427 kubelet[1292]: I0401 18:11:54.916804    1292 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8ae3a6404aa433dcedeab0122887f32d6cea3ea80348a8b0f59d3cb90fea2b9"} err="failed to get container status \"f8ae3a6404aa433dcedeab0122887f32d6cea3ea80348a8b0f59d3cb90fea2b9\": rpc error: code = NotFound desc = could not find container \"f8ae3a6404aa433dcedeab0122887f32d6cea3ea80348a8b0f59d3cb90fea2b9\": container with ID starting with f8ae3a6404aa433dcedeab0122887f32d6cea3ea80348a8b0f59d3cb90fea2b9 not found: ID does not exist"
	Apr 01 18:11:54 addons-881427 kubelet[1292]: I0401 18:11:54.921420    1292 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5nzl\" (UniqueName: \"kubernetes.io/projected/dab0ec67-96a1-49ff-9bf6-69aed7931052-kube-api-access-q5nzl\") pod \"dab0ec67-96a1-49ff-9bf6-69aed7931052\" (UID: \"dab0ec67-96a1-49ff-9bf6-69aed7931052\") "
	Apr 01 18:11:54 addons-881427 kubelet[1292]: I0401 18:11:54.921458    1292 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dab0ec67-96a1-49ff-9bf6-69aed7931052-webhook-cert\") pod \"dab0ec67-96a1-49ff-9bf6-69aed7931052\" (UID: \"dab0ec67-96a1-49ff-9bf6-69aed7931052\") "
	Apr 01 18:11:54 addons-881427 kubelet[1292]: I0401 18:11:54.926097    1292 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dab0ec67-96a1-49ff-9bf6-69aed7931052-kube-api-access-q5nzl" (OuterVolumeSpecName: "kube-api-access-q5nzl") pod "dab0ec67-96a1-49ff-9bf6-69aed7931052" (UID: "dab0ec67-96a1-49ff-9bf6-69aed7931052"). InnerVolumeSpecName "kube-api-access-q5nzl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 01 18:11:54 addons-881427 kubelet[1292]: I0401 18:11:54.927958    1292 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab0ec67-96a1-49ff-9bf6-69aed7931052-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "dab0ec67-96a1-49ff-9bf6-69aed7931052" (UID: "dab0ec67-96a1-49ff-9bf6-69aed7931052"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Apr 01 18:11:55 addons-881427 kubelet[1292]: I0401 18:11:55.021913    1292 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dab0ec67-96a1-49ff-9bf6-69aed7931052-webhook-cert\") on node \"addons-881427\" DevicePath \"\""
	Apr 01 18:11:55 addons-881427 kubelet[1292]: I0401 18:11:55.021991    1292 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-q5nzl\" (UniqueName: \"kubernetes.io/projected/dab0ec67-96a1-49ff-9bf6-69aed7931052-kube-api-access-q5nzl\") on node \"addons-881427\" DevicePath \"\""
	Apr 01 18:11:56 addons-881427 kubelet[1292]: I0401 18:11:56.756499    1292 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dab0ec67-96a1-49ff-9bf6-69aed7931052" path="/var/lib/kubelet/pods/dab0ec67-96a1-49ff-9bf6-69aed7931052/volumes"
	
	
	==> storage-provisioner [d879594bec103909d539395a08207a09bcebce1a01b59adb744f55f6fc38269c] <==
	I0401 18:07:38.113467       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0401 18:07:38.146685       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0401 18:07:38.146834       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0401 18:07:38.172460       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0401 18:07:38.173624       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"164f0e07-77c0-48d6-94da-cd0defe92d84", APIVersion:"v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-881427_71c47355-bb7e-409b-b2c6-6ccfbd39fc62 became leader
	I0401 18:07:38.173680       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-881427_71c47355-bb7e-409b-b2c6-6ccfbd39fc62!
	I0401 18:07:38.277290       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-881427_71c47355-bb7e-409b-b2c6-6ccfbd39fc62!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-881427 -n addons-881427
helpers_test.go:261: (dbg) Run:  kubectl --context addons-881427 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.08s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (7.77s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-pvd79" [5144be1b-5f2f-4db7-8c66-bb679aa31a3f] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003946928s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-881427
addons_test.go:860: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-881427: exit status 11 (409.580779ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-04-01T18:09:14Z" level=error msg="stat /run/runc/fc750c93a5b22a3323f8dc49ffecc7d320dbe4eba6894a49e8a348443bae2bf2: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:861: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 addons disable cloud-spanner -p addons-881427" : exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-881427 -n addons-881427
helpers_test.go:244: <<< TestAddons/parallel/CloudSpanner FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CloudSpanner]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-881427 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-881427 logs -n 25: (1.546138386s)
helpers_test.go:252: TestAddons/parallel/CloudSpanner logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-794994 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC |                     |
	|         | -p download-only-794994                                                                     |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |                |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC | 01 Apr 24 18:06 UTC |
	| delete  | -p download-only-794994                                                                     | download-only-794994 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC | 01 Apr 24 18:06 UTC |
	| start   | -o=json --download-only                                                                     | download-only-591417 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC |                     |
	|         | -p download-only-591417                                                                     |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                                                                |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |                |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC | 01 Apr 24 18:06 UTC |
	| delete  | -p download-only-591417                                                                     | download-only-591417 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC | 01 Apr 24 18:06 UTC |
	| start   | -o=json --download-only                                                                     | download-only-040534 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC |                     |
	|         | -p download-only-040534                                                                     |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                                                           |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |                |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC | 01 Apr 24 18:06 UTC |
	| delete  | -p download-only-040534                                                                     | download-only-040534 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC | 01 Apr 24 18:06 UTC |
	| delete  | -p download-only-794994                                                                     | download-only-794994 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC | 01 Apr 24 18:06 UTC |
	| delete  | -p download-only-591417                                                                     | download-only-591417 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC | 01 Apr 24 18:06 UTC |
	| delete  | -p download-only-040534                                                                     | download-only-040534 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC | 01 Apr 24 18:06 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-431770 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC |                     |
	|         | binary-mirror-431770                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |                |                     |                     |
	|         | http://127.0.0.1:37617                                                                      |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-431770                                                                     | binary-mirror-431770 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC | 01 Apr 24 18:06 UTC |
	| addons  | disable dashboard -p                                                                        | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC |                     |
	|         | addons-881427                                                                               |                      |         |                |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC |                     |
	|         | addons-881427                                                                               |                      |         |                |                     |                     |
	| start   | -p addons-881427 --wait=true                                                                | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC | 01 Apr 24 18:08 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |                |                     |                     |
	|         | --addons=registry                                                                           |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |                |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |                |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |                |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |                |                     |                     |
	| addons  | addons-881427 addons                                                                        | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:08 UTC | 01 Apr 24 18:08 UTC |
	|         | disable metrics-server                                                                      |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-881427 addons disable                                                                | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:09 UTC | 01 Apr 24 18:09 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| ip      | addons-881427 ip                                                                            | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:09 UTC | 01 Apr 24 18:09 UTC |
	| addons  | addons-881427 addons disable                                                                | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:09 UTC | 01 Apr 24 18:09 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| ssh     | addons-881427 ssh cat                                                                       | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:09 UTC | 01 Apr 24 18:09 UTC |
	|         | /opt/local-path-provisioner/pvc-de16cdd6-519d-46fd-98d1-b0afa2a23e43_default_test-pvc/file1 |                      |         |                |                     |                     |
	| addons  | addons-881427 addons disable                                                                | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:09 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:09 UTC | 01 Apr 24 18:09 UTC |
	|         | -p addons-881427                                                                            |                      |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-881427        | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:09 UTC |                     |
	|         | addons-881427                                                                               |                      |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
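For reference, the wrapped start entry for addons-881427 in the table above corresponds, reconstructed as a single shell command, to roughly the following (binary path taken from MINIKUBE_BIN in the log that follows; flag order as listed in the table):

    out/minikube-linux-amd64 start -p addons-881427 --wait=true --memory=4000 --alsologtostderr \
      --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver \
      --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher \
      --addons=nvidia-device-plugin --addons=yakd --driver=kvm2 --container-runtime=crio \
      --addons=ingress --addons=ingress-dns --addons=helm-tiller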
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 18:06:31
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 18:06:31.336010   18511 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:06:31.336125   18511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:06:31.336134   18511 out.go:304] Setting ErrFile to fd 2...
	I0401 18:06:31.336137   18511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:06:31.336314   18511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:06:31.336904   18511 out.go:298] Setting JSON to false
	I0401 18:06:31.337701   18511 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2943,"bootTime":1711991848,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 18:06:31.337757   18511 start.go:139] virtualization: kvm guest
	I0401 18:06:31.340030   18511 out.go:177] * [addons-881427] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 18:06:31.341811   18511 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 18:06:31.341817   18511 notify.go:220] Checking for updates...
	I0401 18:06:31.343264   18511 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 18:06:31.344735   18511 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 18:06:31.346051   18511 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:06:31.347365   18511 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 18:06:31.348652   18511 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 18:06:31.349959   18511 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 18:06:31.380779   18511 out.go:177] * Using the kvm2 driver based on user configuration
	I0401 18:06:31.382111   18511 start.go:297] selected driver: kvm2
	I0401 18:06:31.382124   18511 start.go:901] validating driver "kvm2" against <nil>
	I0401 18:06:31.382135   18511 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 18:06:31.383029   18511 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 18:06:31.383129   18511 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 18:06:31.397195   18511 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 18:06:31.397246   18511 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 18:06:31.397511   18511 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 18:06:31.397587   18511 cni.go:84] Creating CNI manager for ""
	I0401 18:06:31.397604   18511 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 18:06:31.397618   18511 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0401 18:06:31.397698   18511 start.go:340] cluster config:
	{Name:addons-881427 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-881427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:06:31.397855   18511 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 18:06:31.399782   18511 out.go:177] * Starting "addons-881427" primary control-plane node in "addons-881427" cluster
	I0401 18:06:31.401116   18511 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 18:06:31.401148   18511 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0401 18:06:31.401160   18511 cache.go:56] Caching tarball of preloaded images
	I0401 18:06:31.401250   18511 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 18:06:31.401265   18511 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0401 18:06:31.401591   18511 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/config.json ...
	I0401 18:06:31.401612   18511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/config.json: {Name:mk87f7613d29992512bc1caf86d7db9ba76178bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:06:31.401751   18511 start.go:360] acquireMachinesLock for addons-881427: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 18:06:31.401807   18511 start.go:364] duration metric: took 36.308µs to acquireMachinesLock for "addons-881427"
	I0401 18:06:31.401825   18511 start.go:93] Provisioning new machine with config: &{Name:addons-881427 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:addons-881427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 18:06:31.401882   18511 start.go:125] createHost starting for "" (driver="kvm2")
	I0401 18:06:31.403540   18511 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0401 18:06:31.403651   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:06:31.403691   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:06:31.417980   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36879
	I0401 18:06:31.419320   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:06:31.419885   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:06:31.419907   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:06:31.420205   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:06:31.420394   18511 main.go:141] libmachine: (addons-881427) Calling .GetMachineName
	I0401 18:06:31.420534   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:06:31.420675   18511 start.go:159] libmachine.API.Create for "addons-881427" (driver="kvm2")
	I0401 18:06:31.420698   18511 client.go:168] LocalClient.Create starting
	I0401 18:06:31.420737   18511 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem
	I0401 18:06:31.674150   18511 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem
	I0401 18:06:31.838327   18511 main.go:141] libmachine: Running pre-create checks...
	I0401 18:06:31.838350   18511 main.go:141] libmachine: (addons-881427) Calling .PreCreateCheck
	I0401 18:06:31.838904   18511 main.go:141] libmachine: (addons-881427) Calling .GetConfigRaw
	I0401 18:06:31.839386   18511 main.go:141] libmachine: Creating machine...
	I0401 18:06:31.839401   18511 main.go:141] libmachine: (addons-881427) Calling .Create
	I0401 18:06:31.839579   18511 main.go:141] libmachine: (addons-881427) Creating KVM machine...
	I0401 18:06:31.840943   18511 main.go:141] libmachine: (addons-881427) DBG | found existing default KVM network
	I0401 18:06:31.841749   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:31.841556   18533 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015330}
	I0401 18:06:31.841776   18511 main.go:141] libmachine: (addons-881427) DBG | created network xml: 
	I0401 18:06:31.841787   18511 main.go:141] libmachine: (addons-881427) DBG | <network>
	I0401 18:06:31.841792   18511 main.go:141] libmachine: (addons-881427) DBG |   <name>mk-addons-881427</name>
	I0401 18:06:31.841805   18511 main.go:141] libmachine: (addons-881427) DBG |   <dns enable='no'/>
	I0401 18:06:31.841813   18511 main.go:141] libmachine: (addons-881427) DBG |   
	I0401 18:06:31.841824   18511 main.go:141] libmachine: (addons-881427) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0401 18:06:31.841832   18511 main.go:141] libmachine: (addons-881427) DBG |     <dhcp>
	I0401 18:06:31.841843   18511 main.go:141] libmachine: (addons-881427) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0401 18:06:31.841852   18511 main.go:141] libmachine: (addons-881427) DBG |     </dhcp>
	I0401 18:06:31.841858   18511 main.go:141] libmachine: (addons-881427) DBG |   </ip>
	I0401 18:06:31.841862   18511 main.go:141] libmachine: (addons-881427) DBG |   
	I0401 18:06:31.841867   18511 main.go:141] libmachine: (addons-881427) DBG | </network>
	I0401 18:06:31.841876   18511 main.go:141] libmachine: (addons-881427) DBG | 
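The network XML dumped above is what minikube hands to libvirt. As a rough manual equivalent (minikube itself goes through the libvirt API, so this is only an illustration assuming the XML is saved to a local file), the same network could be created and inspected with virsh:

    # define and start the isolated network from the XML shown above
    virsh net-define mk-addons-881427.xml
    virsh net-start mk-addons-881427
    # confirm it is active and review the DHCP range libvirt recorded
    virsh net-list --all
    virsh net-dumpxml mk-addons-881427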
	I0401 18:06:31.847141   18511 main.go:141] libmachine: (addons-881427) DBG | trying to create private KVM network mk-addons-881427 192.168.39.0/24...
	I0401 18:06:31.909113   18511 main.go:141] libmachine: (addons-881427) Setting up store path in /home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427 ...
	I0401 18:06:31.909135   18511 main.go:141] libmachine: (addons-881427) DBG | private KVM network mk-addons-881427 192.168.39.0/24 created
	I0401 18:06:31.909150   18511 main.go:141] libmachine: (addons-881427) Building disk image from file:///home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso
	I0401 18:06:31.909192   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:31.909058   18533 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:06:31.909226   18511 main.go:141] libmachine: (addons-881427) Downloading /home/jenkins/minikube-integration/18233-10493/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0401 18:06:32.131505   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:32.131386   18533 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa...
	I0401 18:06:32.202759   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:32.202634   18533 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/addons-881427.rawdisk...
	I0401 18:06:32.202786   18511 main.go:141] libmachine: (addons-881427) DBG | Writing magic tar header
	I0401 18:06:32.202797   18511 main.go:141] libmachine: (addons-881427) DBG | Writing SSH key tar header
	I0401 18:06:32.202805   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:32.202736   18533 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427 ...
	I0401 18:06:32.202817   18511 main.go:141] libmachine: (addons-881427) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427
	I0401 18:06:32.202881   18511 main.go:141] libmachine: (addons-881427) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427 (perms=drwx------)
	I0401 18:06:32.202905   18511 main.go:141] libmachine: (addons-881427) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube/machines
	I0401 18:06:32.202916   18511 main.go:141] libmachine: (addons-881427) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube/machines (perms=drwxr-xr-x)
	I0401 18:06:32.202926   18511 main.go:141] libmachine: (addons-881427) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:06:32.202940   18511 main.go:141] libmachine: (addons-881427) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493
	I0401 18:06:32.202951   18511 main.go:141] libmachine: (addons-881427) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0401 18:06:32.202964   18511 main.go:141] libmachine: (addons-881427) DBG | Checking permissions on dir: /home/jenkins
	I0401 18:06:32.202976   18511 main.go:141] libmachine: (addons-881427) DBG | Checking permissions on dir: /home
	I0401 18:06:32.202987   18511 main.go:141] libmachine: (addons-881427) DBG | Skipping /home - not owner
	I0401 18:06:32.203002   18511 main.go:141] libmachine: (addons-881427) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube (perms=drwxr-xr-x)
	I0401 18:06:32.203018   18511 main.go:141] libmachine: (addons-881427) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493 (perms=drwxrwxr-x)
	I0401 18:06:32.203031   18511 main.go:141] libmachine: (addons-881427) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0401 18:06:32.203043   18511 main.go:141] libmachine: (addons-881427) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0401 18:06:32.203057   18511 main.go:141] libmachine: (addons-881427) Creating domain...
	I0401 18:06:32.203964   18511 main.go:141] libmachine: (addons-881427) define libvirt domain using xml: 
	I0401 18:06:32.203996   18511 main.go:141] libmachine: (addons-881427) <domain type='kvm'>
	I0401 18:06:32.204004   18511 main.go:141] libmachine: (addons-881427)   <name>addons-881427</name>
	I0401 18:06:32.204010   18511 main.go:141] libmachine: (addons-881427)   <memory unit='MiB'>4000</memory>
	I0401 18:06:32.204019   18511 main.go:141] libmachine: (addons-881427)   <vcpu>2</vcpu>
	I0401 18:06:32.204027   18511 main.go:141] libmachine: (addons-881427)   <features>
	I0401 18:06:32.204036   18511 main.go:141] libmachine: (addons-881427)     <acpi/>
	I0401 18:06:32.204047   18511 main.go:141] libmachine: (addons-881427)     <apic/>
	I0401 18:06:32.204055   18511 main.go:141] libmachine: (addons-881427)     <pae/>
	I0401 18:06:32.204061   18511 main.go:141] libmachine: (addons-881427)     
	I0401 18:06:32.204068   18511 main.go:141] libmachine: (addons-881427)   </features>
	I0401 18:06:32.204075   18511 main.go:141] libmachine: (addons-881427)   <cpu mode='host-passthrough'>
	I0401 18:06:32.204080   18511 main.go:141] libmachine: (addons-881427)   
	I0401 18:06:32.204092   18511 main.go:141] libmachine: (addons-881427)   </cpu>
	I0401 18:06:32.204101   18511 main.go:141] libmachine: (addons-881427)   <os>
	I0401 18:06:32.204112   18511 main.go:141] libmachine: (addons-881427)     <type>hvm</type>
	I0401 18:06:32.204125   18511 main.go:141] libmachine: (addons-881427)     <boot dev='cdrom'/>
	I0401 18:06:32.204137   18511 main.go:141] libmachine: (addons-881427)     <boot dev='hd'/>
	I0401 18:06:32.204146   18511 main.go:141] libmachine: (addons-881427)     <bootmenu enable='no'/>
	I0401 18:06:32.204163   18511 main.go:141] libmachine: (addons-881427)   </os>
	I0401 18:06:32.204171   18511 main.go:141] libmachine: (addons-881427)   <devices>
	I0401 18:06:32.204178   18511 main.go:141] libmachine: (addons-881427)     <disk type='file' device='cdrom'>
	I0401 18:06:32.204187   18511 main.go:141] libmachine: (addons-881427)       <source file='/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/boot2docker.iso'/>
	I0401 18:06:32.204192   18511 main.go:141] libmachine: (addons-881427)       <target dev='hdc' bus='scsi'/>
	I0401 18:06:32.204196   18511 main.go:141] libmachine: (addons-881427)       <readonly/>
	I0401 18:06:32.204202   18511 main.go:141] libmachine: (addons-881427)     </disk>
	I0401 18:06:32.204211   18511 main.go:141] libmachine: (addons-881427)     <disk type='file' device='disk'>
	I0401 18:06:32.204223   18511 main.go:141] libmachine: (addons-881427)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0401 18:06:32.204244   18511 main.go:141] libmachine: (addons-881427)       <source file='/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/addons-881427.rawdisk'/>
	I0401 18:06:32.204258   18511 main.go:141] libmachine: (addons-881427)       <target dev='hda' bus='virtio'/>
	I0401 18:06:32.204265   18511 main.go:141] libmachine: (addons-881427)     </disk>
	I0401 18:06:32.204279   18511 main.go:141] libmachine: (addons-881427)     <interface type='network'>
	I0401 18:06:32.204287   18511 main.go:141] libmachine: (addons-881427)       <source network='mk-addons-881427'/>
	I0401 18:06:32.204296   18511 main.go:141] libmachine: (addons-881427)       <model type='virtio'/>
	I0401 18:06:32.204315   18511 main.go:141] libmachine: (addons-881427)     </interface>
	I0401 18:06:32.204324   18511 main.go:141] libmachine: (addons-881427)     <interface type='network'>
	I0401 18:06:32.204329   18511 main.go:141] libmachine: (addons-881427)       <source network='default'/>
	I0401 18:06:32.204334   18511 main.go:141] libmachine: (addons-881427)       <model type='virtio'/>
	I0401 18:06:32.204361   18511 main.go:141] libmachine: (addons-881427)     </interface>
	I0401 18:06:32.204386   18511 main.go:141] libmachine: (addons-881427)     <serial type='pty'>
	I0401 18:06:32.204402   18511 main.go:141] libmachine: (addons-881427)       <target port='0'/>
	I0401 18:06:32.204415   18511 main.go:141] libmachine: (addons-881427)     </serial>
	I0401 18:06:32.204426   18511 main.go:141] libmachine: (addons-881427)     <console type='pty'>
	I0401 18:06:32.204447   18511 main.go:141] libmachine: (addons-881427)       <target type='serial' port='0'/>
	I0401 18:06:32.204472   18511 main.go:141] libmachine: (addons-881427)     </console>
	I0401 18:06:32.204489   18511 main.go:141] libmachine: (addons-881427)     <rng model='virtio'>
	I0401 18:06:32.204499   18511 main.go:141] libmachine: (addons-881427)       <backend model='random'>/dev/random</backend>
	I0401 18:06:32.204524   18511 main.go:141] libmachine: (addons-881427)     </rng>
	I0401 18:06:32.204536   18511 main.go:141] libmachine: (addons-881427)     
	I0401 18:06:32.204543   18511 main.go:141] libmachine: (addons-881427)     
	I0401 18:06:32.204557   18511 main.go:141] libmachine: (addons-881427)   </devices>
	I0401 18:06:32.204565   18511 main.go:141] libmachine: (addons-881427) </domain>
	I0401 18:06:32.204586   18511 main.go:141] libmachine: (addons-881427) 
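Once the domain defined by the XML above exists, its disks and the two network interfaces can be checked from the host with virsh (domain name taken from the log):

    virsh dumpxml addons-881427      # full definition as stored by libvirt
    virsh domiflist addons-881427    # interfaces on mk-addons-881427 and the default network
    virsh domblklist addons-881427   # boot2docker.iso cdrom plus addons-881427.rawdisk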
	I0401 18:06:32.210422   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:ca:06:61 in network default
	I0401 18:06:32.210973   18511 main.go:141] libmachine: (addons-881427) Ensuring networks are active...
	I0401 18:06:32.210993   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:32.211788   18511 main.go:141] libmachine: (addons-881427) Ensuring network default is active
	I0401 18:06:32.212125   18511 main.go:141] libmachine: (addons-881427) Ensuring network mk-addons-881427 is active
	I0401 18:06:32.212756   18511 main.go:141] libmachine: (addons-881427) Getting domain xml...
	I0401 18:06:32.213674   18511 main.go:141] libmachine: (addons-881427) Creating domain...
	I0401 18:06:33.603244   18511 main.go:141] libmachine: (addons-881427) Waiting to get IP...
	I0401 18:06:33.604106   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:33.604487   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:33.604541   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:33.604469   18533 retry.go:31] will retry after 269.634126ms: waiting for machine to come up
	I0401 18:06:33.876114   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:33.876608   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:33.876630   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:33.876568   18533 retry.go:31] will retry after 251.291846ms: waiting for machine to come up
	I0401 18:06:34.128922   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:34.129395   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:34.129428   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:34.129359   18533 retry.go:31] will retry after 376.203226ms: waiting for machine to come up
	I0401 18:06:34.506843   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:34.507208   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:34.507230   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:34.507167   18533 retry.go:31] will retry after 566.832821ms: waiting for machine to come up
	I0401 18:06:35.076133   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:35.076592   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:35.076616   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:35.076547   18533 retry.go:31] will retry after 563.222713ms: waiting for machine to come up
	I0401 18:06:35.641330   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:35.641730   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:35.641751   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:35.641696   18533 retry.go:31] will retry after 580.091131ms: waiting for machine to come up
	I0401 18:06:36.223563   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:36.223941   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:36.223961   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:36.223898   18533 retry.go:31] will retry after 918.869733ms: waiting for machine to come up
	I0401 18:06:37.144830   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:37.145276   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:37.145309   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:37.145231   18533 retry.go:31] will retry after 1.003351501s: waiting for machine to come up
	I0401 18:06:38.150466   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:38.150866   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:38.150892   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:38.150829   18533 retry.go:31] will retry after 1.861805809s: waiting for machine to come up
	I0401 18:06:40.013871   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:40.014294   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:40.014360   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:40.014235   18533 retry.go:31] will retry after 1.635650648s: waiting for machine to come up
	I0401 18:06:41.651847   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:41.652367   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:41.652390   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:41.652341   18533 retry.go:31] will retry after 2.723353102s: waiting for machine to come up
	I0401 18:06:44.379239   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:44.379762   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:44.379798   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:44.379716   18533 retry.go:31] will retry after 3.188371174s: waiting for machine to come up
	I0401 18:06:47.569608   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:47.570021   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:47.570047   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:47.569969   18533 retry.go:31] will retry after 3.319364247s: waiting for machine to come up
	I0401 18:06:50.890567   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:50.890960   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find current IP address of domain addons-881427 in network mk-addons-881427
	I0401 18:06:50.891011   18511 main.go:141] libmachine: (addons-881427) DBG | I0401 18:06:50.890935   18533 retry.go:31] will retry after 5.529010547s: waiting for machine to come up
	I0401 18:06:56.425077   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:56.425557   18511 main.go:141] libmachine: (addons-881427) Found IP for machine: 192.168.39.214
	I0401 18:06:56.425576   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has current primary IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:56.425582   18511 main.go:141] libmachine: (addons-881427) Reserving static IP address...
	I0401 18:06:56.425997   18511 main.go:141] libmachine: (addons-881427) DBG | unable to find host DHCP lease matching {name: "addons-881427", mac: "52:54:00:4b:04:cb", ip: "192.168.39.214"} in network mk-addons-881427
	I0401 18:06:56.495162   18511 main.go:141] libmachine: (addons-881427) DBG | Getting to WaitForSSH function...
	I0401 18:06:56.495192   18511 main.go:141] libmachine: (addons-881427) Reserved static IP address: 192.168.39.214
	I0401 18:06:56.495205   18511 main.go:141] libmachine: (addons-881427) Waiting for SSH to be available...
	I0401 18:06:56.497821   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:56.498243   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:56.498272   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:56.498483   18511 main.go:141] libmachine: (addons-881427) DBG | Using SSH client type: external
	I0401 18:06:56.498510   18511 main.go:141] libmachine: (addons-881427) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa (-rw-------)
	I0401 18:06:56.498531   18511 main.go:141] libmachine: (addons-881427) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 18:06:56.498541   18511 main.go:141] libmachine: (addons-881427) DBG | About to run SSH command:
	I0401 18:06:56.498549   18511 main.go:141] libmachine: (addons-881427) DBG | exit 0
	I0401 18:06:56.629751   18511 main.go:141] libmachine: (addons-881427) DBG | SSH cmd err, output: <nil>: 
	I0401 18:06:56.630104   18511 main.go:141] libmachine: (addons-881427) KVM machine creation complete!
	I0401 18:06:56.630332   18511 main.go:141] libmachine: (addons-881427) Calling .GetConfigRaw
	I0401 18:06:56.630940   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:06:56.631133   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:06:56.631288   18511 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0401 18:06:56.631381   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:06:56.632681   18511 main.go:141] libmachine: Detecting operating system of created instance...
	I0401 18:06:56.632697   18511 main.go:141] libmachine: Waiting for SSH to be available...
	I0401 18:06:56.632702   18511 main.go:141] libmachine: Getting to WaitForSSH function...
	I0401 18:06:56.632708   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:06:56.634803   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:56.635105   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:56.635135   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:56.635248   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:06:56.635415   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:56.635536   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:56.635646   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:06:56.635843   18511 main.go:141] libmachine: Using SSH client type: native
	I0401 18:06:56.636061   18511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0401 18:06:56.636074   18511 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0401 18:06:56.733194   18511 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 18:06:56.733222   18511 main.go:141] libmachine: Detecting the provisioner...
	I0401 18:06:56.733232   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:06:56.735785   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:56.736142   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:56.736175   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:56.736331   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:06:56.736539   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:56.736699   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:56.736843   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:06:56.737032   18511 main.go:141] libmachine: Using SSH client type: native
	I0401 18:06:56.737240   18511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0401 18:06:56.737254   18511 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0401 18:06:56.834783   18511 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0401 18:06:56.834861   18511 main.go:141] libmachine: found compatible host: buildroot
	I0401 18:06:56.834872   18511 main.go:141] libmachine: Provisioning with buildroot...
	I0401 18:06:56.834884   18511 main.go:141] libmachine: (addons-881427) Calling .GetMachineName
	I0401 18:06:56.835099   18511 buildroot.go:166] provisioning hostname "addons-881427"
	I0401 18:06:56.835119   18511 main.go:141] libmachine: (addons-881427) Calling .GetMachineName
	I0401 18:06:56.835322   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:06:56.839440   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:56.839903   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:56.839933   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:56.840108   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:06:56.840296   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:56.840472   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:56.840626   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:06:56.840787   18511 main.go:141] libmachine: Using SSH client type: native
	I0401 18:06:56.840987   18511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0401 18:06:56.841000   18511 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-881427 && echo "addons-881427" | sudo tee /etc/hostname
	I0401 18:06:56.954022   18511 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-881427
	
	I0401 18:06:56.954048   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:06:56.956470   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:56.956784   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:56.956821   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:56.956952   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:06:56.957144   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:56.957303   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:56.957441   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:06:56.957576   18511 main.go:141] libmachine: Using SSH client type: native
	I0401 18:06:56.957789   18511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0401 18:06:56.957814   18511 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-881427' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-881427/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-881427' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 18:06:57.069089   18511 main.go:141] libmachine: SSH cmd err, output: <nil>: 
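If that hostname snippet succeeds, the guest should report the new name and carry a matching 127.0.1.1 entry, which can be checked over the same SSH session with something like:

    hostname                         # expected: addons-881427
    grep '^127.0.1.1' /etc/hosts     # expected: 127.0.1.1 addons-881427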
	I0401 18:06:57.069120   18511 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 18:06:57.069162   18511 buildroot.go:174] setting up certificates
	I0401 18:06:57.069172   18511 provision.go:84] configureAuth start
	I0401 18:06:57.069183   18511 main.go:141] libmachine: (addons-881427) Calling .GetMachineName
	I0401 18:06:57.069466   18511 main.go:141] libmachine: (addons-881427) Calling .GetIP
	I0401 18:06:57.071861   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.072158   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:57.072180   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.072319   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:06:57.074467   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.074781   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:57.074822   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.074917   18511 provision.go:143] copyHostCerts
	I0401 18:06:57.075069   18511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 18:06:57.075232   18511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 18:06:57.075365   18511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 18:06:57.075430   18511 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.addons-881427 san=[127.0.0.1 192.168.39.214 addons-881427 localhost minikube]
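The server certificate generated here can be sanity-checked with openssl; the path and the expected SAN entries are the ones listed in the log line above:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
    # should list 127.0.0.1, IP 192.168.39.214, addons-881427, localhost and minikube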
	I0401 18:06:57.326408   18511 provision.go:177] copyRemoteCerts
	I0401 18:06:57.326469   18511 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 18:06:57.326494   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:06:57.329132   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.329606   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:57.329634   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.329810   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:06:57.330018   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:57.330206   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:06:57.330330   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:06:57.408714   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 18:06:57.438543   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0401 18:06:57.466550   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 18:06:57.492919   18511 provision.go:87] duration metric: took 423.731573ms to configureAuth
	I0401 18:06:57.492949   18511 buildroot.go:189] setting minikube options for container-runtime
	I0401 18:06:57.493165   18511 config.go:182] Loaded profile config "addons-881427": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:06:57.493246   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:06:57.495893   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.496265   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:57.496293   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.496447   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:06:57.496636   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:57.496804   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:57.496939   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:06:57.497115   18511 main.go:141] libmachine: Using SSH client type: native
	I0401 18:06:57.497274   18511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0401 18:06:57.497288   18511 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 18:06:57.760942   18511 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
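	The step above writes the insecure-registry flag into an environment file read by the crio unit and restarts the service. A minimal way to confirm the setting on the node, assuming the addons-881427 profile is still running, is to open a shell into the VM and read the file back (the path /etc/sysconfig/crio.minikube comes from the command shown above):

	  minikube ssh -p addons-881427 -- cat /etc/sysconfig/crio.minikube
	  # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
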
	I0401 18:06:57.760966   18511 main.go:141] libmachine: Checking connection to Docker...
	I0401 18:06:57.760975   18511 main.go:141] libmachine: (addons-881427) Calling .GetURL
	I0401 18:06:57.762135   18511 main.go:141] libmachine: (addons-881427) DBG | Using libvirt version 6000000
	I0401 18:06:57.763972   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.764292   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:57.764318   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.764441   18511 main.go:141] libmachine: Docker is up and running!
	I0401 18:06:57.764458   18511 main.go:141] libmachine: Reticulating splines...
	I0401 18:06:57.764465   18511 client.go:171] duration metric: took 26.343759362s to LocalClient.Create
	I0401 18:06:57.764484   18511 start.go:167] duration metric: took 26.343809774s to libmachine.API.Create "addons-881427"
	I0401 18:06:57.764500   18511 start.go:293] postStartSetup for "addons-881427" (driver="kvm2")
	I0401 18:06:57.764512   18511 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 18:06:57.764527   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:06:57.764731   18511 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 18:06:57.764749   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:06:57.766751   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.767077   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:57.767102   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.767195   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:06:57.767348   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:57.767528   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:06:57.767647   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:06:57.848902   18511 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 18:06:57.853610   18511 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 18:06:57.853630   18511 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 18:06:57.853727   18511 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 18:06:57.853763   18511 start.go:296] duration metric: took 89.253531ms for postStartSetup
	I0401 18:06:57.853809   18511 main.go:141] libmachine: (addons-881427) Calling .GetConfigRaw
	I0401 18:06:57.854353   18511 main.go:141] libmachine: (addons-881427) Calling .GetIP
	I0401 18:06:57.856573   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.856865   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:57.856910   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.857089   18511 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/config.json ...
	I0401 18:06:57.857258   18511 start.go:128] duration metric: took 26.45536674s to createHost
	I0401 18:06:57.857279   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:06:57.859364   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.859674   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:57.859696   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.859804   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:06:57.859981   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:57.860176   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:57.860359   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:06:57.860541   18511 main.go:141] libmachine: Using SSH client type: native
	I0401 18:06:57.860736   18511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I0401 18:06:57.860751   18511 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 18:06:57.962928   18511 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711994817.935069932
	
	I0401 18:06:57.962946   18511 fix.go:216] guest clock: 1711994817.935069932
	I0401 18:06:57.962954   18511 fix.go:229] Guest: 2024-04-01 18:06:57.935069932 +0000 UTC Remote: 2024-04-01 18:06:57.857269466 +0000 UTC m=+26.565030106 (delta=77.800466ms)
	I0401 18:06:57.962982   18511 fix.go:200] guest clock delta is within tolerance: 77.800466ms
	I0401 18:06:57.962990   18511 start.go:83] releasing machines lock for "addons-881427", held for 26.561171605s
	I0401 18:06:57.963012   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:06:57.963268   18511 main.go:141] libmachine: (addons-881427) Calling .GetIP
	I0401 18:06:57.965732   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.966118   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:57.966143   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.966317   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:06:57.966822   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:06:57.967003   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:06:57.967073   18511 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 18:06:57.967116   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:06:57.967207   18511 ssh_runner.go:195] Run: cat /version.json
	I0401 18:06:57.967231   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:06:57.969873   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.969944   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.970239   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:57.970265   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.970294   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:57.970314   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:57.970411   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:06:57.970559   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:06:57.970575   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:57.970700   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:06:57.970782   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:06:57.970858   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:06:57.970949   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:06:57.970962   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:06:58.096265   18511 ssh_runner.go:195] Run: systemctl --version
	I0401 18:06:58.103073   18511 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 18:06:58.264811   18511 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 18:06:58.272290   18511 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 18:06:58.272344   18511 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 18:06:58.289978   18511 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 18:06:58.290000   18511 start.go:494] detecting cgroup driver to use...
	I0401 18:06:58.290065   18511 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 18:06:58.310938   18511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 18:06:58.326098   18511 docker.go:217] disabling cri-docker service (if available) ...
	I0401 18:06:58.326164   18511 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 18:06:58.340541   18511 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 18:06:58.354770   18511 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 18:06:58.465319   18511 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 18:06:58.628021   18511 docker.go:233] disabling docker service ...
	I0401 18:06:58.628089   18511 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 18:06:58.645291   18511 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 18:06:58.659716   18511 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 18:06:58.790924   18511 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 18:06:58.930510   18511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 18:06:58.946330   18511 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 18:06:58.967234   18511 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 18:06:58.967300   18511 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:06:58.979110   18511 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 18:06:58.979189   18511 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:06:58.992418   18511 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:06:59.004874   18511 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:06:59.017051   18511 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 18:06:59.029202   18511 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:06:59.041379   18511 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:06:59.062802   18511 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:06:59.074454   18511 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 18:06:59.084719   18511 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 18:06:59.084790   18511 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 18:06:59.099017   18511 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 18:06:59.110059   18511 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:06:59.240883   18511 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 18:06:59.580154   18511 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 18:06:59.580246   18511 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 18:06:59.586428   18511 start.go:562] Will wait 60s for crictl version
	I0401 18:06:59.586488   18511 ssh_runner.go:195] Run: which crictl
	I0401 18:06:59.590851   18511 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 18:06:59.628704   18511 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
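	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) before crio is restarted, and the crictl probe confirms the runtime came back as cri-o 1.29.1. A quick sketch for checking the rewritten values on the node (paths taken from the log; assumes a shell inside the VM):

	  grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	  sudo crictl version
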
	I0401 18:06:59.628811   18511 ssh_runner.go:195] Run: crio --version
	I0401 18:06:59.663479   18511 ssh_runner.go:195] Run: crio --version
	I0401 18:06:59.701027   18511 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 18:06:59.702608   18511 main.go:141] libmachine: (addons-881427) Calling .GetIP
	I0401 18:06:59.705394   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:59.705774   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:06:59.705802   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:06:59.705986   18511 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 18:06:59.711025   18511 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
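	The one-liner above is a replace-or-append idiom: it filters any existing host.minikube.internal entry out of /etc/hosts, appends the fresh mapping, writes the result to a temp file keyed by the shell PID, and copies it back, so the file being read is never also the redirect target. A generic sketch of the same pattern (hostname and IP are placeholders, not values from this run):

	  { grep -v $'\texample.internal$' /etc/hosts; echo "10.0.0.1	example.internal"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts
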
	I0401 18:06:59.726160   18511 kubeadm.go:877] updating cluster {Name:addons-881427 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:addons-881427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 18:06:59.726280   18511 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 18:06:59.726331   18511 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 18:06:59.766249   18511 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0401 18:06:59.766329   18511 ssh_runner.go:195] Run: which lz4
	I0401 18:06:59.770813   18511 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 18:06:59.775569   18511 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 18:06:59.775599   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0401 18:07:01.356805   18511 crio.go:462] duration metric: took 1.586015191s to copy over tarball
	I0401 18:07:01.356865   18511 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 18:07:04.132930   18511 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.776038335s)
	I0401 18:07:04.132964   18511 crio.go:469] duration metric: took 2.776134141s to extract the tarball
	I0401 18:07:04.132978   18511 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 18:07:04.173217   18511 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 18:07:04.223493   18511 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 18:07:04.223520   18511 cache_images.go:84] Images are preloaded, skipping loading
	I0401 18:07:04.223530   18511 kubeadm.go:928] updating node { 192.168.39.214 8443 v1.29.3 crio true true} ...
	I0401 18:07:04.223665   18511 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-881427 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-881427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
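	The rendered unit fragment above is written to the node a few lines later as a systemd drop-in (10-kubeadm.conf under kubelet.service.d) plus a kubelet.service file. A minimal sketch for inspecting what the kubelet will actually start with, assuming a shell on the node:

	  systemctl cat kubelet
	  cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
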
	I0401 18:07:04.223759   18511 ssh_runner.go:195] Run: crio config
	I0401 18:07:04.271009   18511 cni.go:84] Creating CNI manager for ""
	I0401 18:07:04.271031   18511 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 18:07:04.271042   18511 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 18:07:04.271062   18511 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.214 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-881427 NodeName:addons-881427 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 18:07:04.271640   18511 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-881427"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.214
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.214"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
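	Before kubeadm init is invoked below, this generated config is staged as /var/tmp/minikube/kubeadm.yaml.new and then copied to /var/tmp/minikube/kubeadm.yaml. A hedged sketch for exercising the same config without touching the cluster, assuming the node's bundled kubeadm binary at the versioned path shown in the log:

	  sudo /var/lib/minikube/binaries/v1.29.3/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml --dry-run
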
	I0401 18:07:04.271700   18511 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 18:07:04.283256   18511 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 18:07:04.283343   18511 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 18:07:04.294600   18511 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0401 18:07:04.312993   18511 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 18:07:04.330955   18511 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0401 18:07:04.349108   18511 ssh_runner.go:195] Run: grep 192.168.39.214	control-plane.minikube.internal$ /etc/hosts
	I0401 18:07:04.353316   18511 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 18:07:04.367102   18511 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:07:04.494094   18511 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 18:07:04.514302   18511 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427 for IP: 192.168.39.214
	I0401 18:07:04.514322   18511 certs.go:194] generating shared ca certs ...
	I0401 18:07:04.514338   18511 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:04.514473   18511 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 18:07:04.645847   18511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt ...
	I0401 18:07:04.645873   18511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt: {Name:mk76b7abd1f080e01a5a32c74a7791d486abaeb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:04.646016   18511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key ...
	I0401 18:07:04.646027   18511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key: {Name:mk81827d88412cbe70f5da178f51bf43ab58da51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:04.646114   18511 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 18:07:04.935408   18511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt ...
	I0401 18:07:04.935445   18511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt: {Name:mkd287c026a1a401803f9827d16e8f4e5e8f5f0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:04.935612   18511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key ...
	I0401 18:07:04.935624   18511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key: {Name:mk9d68668cde28d3ba6d892d6bb735ded03ae541 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:04.935710   18511 certs.go:256] generating profile certs ...
	I0401 18:07:04.935777   18511 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.key
	I0401 18:07:04.935796   18511 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt with IP's: []
	I0401 18:07:05.192514   18511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt ...
	I0401 18:07:05.192545   18511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: {Name:mk43b268f304c81517331551fc83c26cef5077dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:05.192712   18511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.key ...
	I0401 18:07:05.192724   18511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.key: {Name:mk521c9659c2d31aeaf7ef5eba85e7639ea4a9ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:05.192800   18511 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/apiserver.key.76ae8eb6
	I0401 18:07:05.192820   18511 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/apiserver.crt.76ae8eb6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.214]
	I0401 18:07:05.415868   18511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/apiserver.crt.76ae8eb6 ...
	I0401 18:07:05.415895   18511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/apiserver.crt.76ae8eb6: {Name:mkdf827ea04983b9ec6dae9f2126b6d3c6a70025 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:05.416043   18511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/apiserver.key.76ae8eb6 ...
	I0401 18:07:05.416056   18511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/apiserver.key.76ae8eb6: {Name:mk70beceff4cc257f113f900a07485a1e95d03e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:05.416121   18511 certs.go:381] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/apiserver.crt.76ae8eb6 -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/apiserver.crt
	I0401 18:07:05.416186   18511 certs.go:385] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/apiserver.key.76ae8eb6 -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/apiserver.key
	I0401 18:07:05.416236   18511 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/proxy-client.key
	I0401 18:07:05.416252   18511 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/proxy-client.crt with IP's: []
	I0401 18:07:05.475322   18511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/proxy-client.crt ...
	I0401 18:07:05.475347   18511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/proxy-client.crt: {Name:mk58715cca1aa62d818cadf2b60d5389ae79761a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:05.475550   18511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/proxy-client.key ...
	I0401 18:07:05.475565   18511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/proxy-client.key: {Name:mk5b3f57d5c512ff6cafbf2df8ea8175d43b554f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:05.475751   18511 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 18:07:05.475784   18511 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 18:07:05.475803   18511 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 18:07:05.475822   18511 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 18:07:05.476349   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 18:07:05.503975   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 18:07:05.530240   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 18:07:05.556910   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 18:07:05.584874   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0401 18:07:05.611692   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 18:07:05.638522   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 18:07:05.665043   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 18:07:05.692137   18511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 18:07:05.718444   18511 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 18:07:05.737181   18511 ssh_runner.go:195] Run: openssl version
	I0401 18:07:05.744505   18511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 18:07:05.757839   18511 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:07:05.763242   18511 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:07:05.763306   18511 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:07:05.769801   18511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
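	The openssl call above prints the subject hash of the CA, and the symlink that follows uses the OpenSSL c_rehash convention: a file named <hash>.0 in /etc/ssl/certs lets TLS clients locate the CA by hash. In this run the hash is b5213941, matching the b5213941.0 link the command creates; a sketch to reproduce the lookup by hand on the node:

	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  ls -l /etc/ssl/certs/b5213941.0
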
	I0401 18:07:05.782859   18511 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 18:07:05.787685   18511 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 18:07:05.787749   18511 kubeadm.go:391] StartCluster: {Name:addons-881427 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 C
lusterName:addons-881427 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:07:05.787847   18511 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 18:07:05.787887   18511 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 18:07:05.836129   18511 cri.go:89] found id: ""
	I0401 18:07:05.836200   18511 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 18:07:05.849302   18511 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 18:07:05.864275   18511 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 18:07:05.875661   18511 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 18:07:05.875686   18511 kubeadm.go:156] found existing configuration files:
	
	I0401 18:07:05.875736   18511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 18:07:05.886555   18511 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 18:07:05.886618   18511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 18:07:05.897483   18511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 18:07:05.908101   18511 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 18:07:05.908153   18511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 18:07:05.919791   18511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 18:07:05.930719   18511 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 18:07:05.930769   18511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 18:07:05.942183   18511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 18:07:05.952972   18511 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 18:07:05.953031   18511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 18:07:05.965428   18511 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 18:07:06.173908   18511 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 18:07:16.671545   18511 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 18:07:16.671592   18511 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 18:07:16.671650   18511 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 18:07:16.671767   18511 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 18:07:16.671851   18511 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 18:07:16.671917   18511 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 18:07:16.673748   18511 out.go:204]   - Generating certificates and keys ...
	I0401 18:07:16.673826   18511 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 18:07:16.673892   18511 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 18:07:16.673957   18511 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 18:07:16.674035   18511 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0401 18:07:16.674109   18511 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0401 18:07:16.674154   18511 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0401 18:07:16.674243   18511 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0401 18:07:16.674392   18511 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-881427 localhost] and IPs [192.168.39.214 127.0.0.1 ::1]
	I0401 18:07:16.674484   18511 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0401 18:07:16.674679   18511 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-881427 localhost] and IPs [192.168.39.214 127.0.0.1 ::1]
	I0401 18:07:16.674781   18511 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 18:07:16.674870   18511 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 18:07:16.674943   18511 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0401 18:07:16.675021   18511 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 18:07:16.675094   18511 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 18:07:16.675161   18511 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 18:07:16.675210   18511 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 18:07:16.675261   18511 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 18:07:16.675306   18511 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 18:07:16.675398   18511 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 18:07:16.675494   18511 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 18:07:16.677214   18511 out.go:204]   - Booting up control plane ...
	I0401 18:07:16.677322   18511 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 18:07:16.677444   18511 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 18:07:16.677540   18511 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 18:07:16.677715   18511 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 18:07:16.677838   18511 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 18:07:16.677912   18511 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 18:07:16.678098   18511 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 18:07:16.678211   18511 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003194 seconds
	I0401 18:07:16.678387   18511 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 18:07:16.678567   18511 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 18:07:16.678658   18511 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 18:07:16.678910   18511 kubeadm.go:309] [mark-control-plane] Marking the node addons-881427 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 18:07:16.678989   18511 kubeadm.go:309] [bootstrap-token] Using token: 3q7dw3.76ebyztxncoayojs
	I0401 18:07:16.681670   18511 out.go:204]   - Configuring RBAC rules ...
	I0401 18:07:16.681800   18511 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 18:07:16.681901   18511 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 18:07:16.682095   18511 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 18:07:16.682321   18511 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 18:07:16.682495   18511 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 18:07:16.682630   18511 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 18:07:16.682794   18511 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 18:07:16.682860   18511 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 18:07:16.682925   18511 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 18:07:16.682936   18511 kubeadm.go:309] 
	I0401 18:07:16.683021   18511 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 18:07:16.683034   18511 kubeadm.go:309] 
	I0401 18:07:16.683131   18511 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 18:07:16.683151   18511 kubeadm.go:309] 
	I0401 18:07:16.683187   18511 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 18:07:16.683259   18511 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 18:07:16.683331   18511 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 18:07:16.683344   18511 kubeadm.go:309] 
	I0401 18:07:16.683423   18511 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 18:07:16.683433   18511 kubeadm.go:309] 
	I0401 18:07:16.683553   18511 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 18:07:16.683570   18511 kubeadm.go:309] 
	I0401 18:07:16.683642   18511 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 18:07:16.683756   18511 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 18:07:16.683845   18511 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 18:07:16.683861   18511 kubeadm.go:309] 
	I0401 18:07:16.683997   18511 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 18:07:16.684136   18511 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 18:07:16.684143   18511 kubeadm.go:309] 
	I0401 18:07:16.684211   18511 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 3q7dw3.76ebyztxncoayojs \
	I0401 18:07:16.684339   18511 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 \
	I0401 18:07:16.684372   18511 kubeadm.go:309] 	--control-plane 
	I0401 18:07:16.684381   18511 kubeadm.go:309] 
	I0401 18:07:16.684511   18511 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 18:07:16.684521   18511 kubeadm.go:309] 
	I0401 18:07:16.684616   18511 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 3q7dw3.76ebyztxncoayojs \
	I0401 18:07:16.684785   18511 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 
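	The join commands above embed a bootstrap token (3q7dw3.76ebyztxncoayojs) and the CA cert hash used to authenticate the control plane; this test only provisions a single node, so they are informational here. If the token were to expire, a fresh worker join line could be generated on the control plane; a minimal sketch, assuming shell access on addons-881427 and the bundled kubeadm binary:

	  sudo /var/lib/minikube/binaries/v1.29.3/kubeadm token create --print-join-command
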
	I0401 18:07:16.684813   18511 cni.go:84] Creating CNI manager for ""
	I0401 18:07:16.684823   18511 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 18:07:16.686434   18511 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 18:07:16.687792   18511 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 18:07:16.752772   18511 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
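	[editor's note] The 457-byte file copied above is minikube's bridge CNI configuration. As a rough illustration only (not the exact file written here), a bridge conflist of this kind typically looks like:

	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }

	The subnet and plugin options shown are assumptions for illustration; the actual values come from the file minikube generates for the crio runtime.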
	I0401 18:07:16.818717   18511 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 18:07:16.818782   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:16.818800   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-881427 minikube.k8s.io/updated_at=2024_04_01T18_07_16_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=addons-881427 minikube.k8s.io/primary=true
	I0401 18:07:16.983556   18511 ops.go:34] apiserver oom_adj: -16
	I0401 18:07:16.983691   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:17.484096   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:17.984252   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:18.484381   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:18.983683   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:19.483695   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:19.984231   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:20.484484   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:20.984087   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:21.484239   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:21.983753   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:22.484742   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:22.984746   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:23.483911   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:23.983861   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:24.483980   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:24.984107   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:25.484013   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:25.983841   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:26.483908   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:26.984545   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:27.484614   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:27.983880   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:28.484523   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:28.984436   18511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:07:29.100551   18511 kubeadm.go:1107] duration metric: took 12.281823694s to wait for elevateKubeSystemPrivileges
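	[editor's note] The repeated "kubectl get sa default" commands above are a readiness poll: minikube retries roughly every 500ms until the "default" ServiceAccount exists, at which point it can grant cluster-admin to kube-system (the minikube-rbac clusterrolebinding created earlier). A rough shell equivalent of that poll, using only the commands already shown in this log, would be:

	    # illustrative sketch; minikube performs this loop in Go via ssh_runner
	    until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done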
	W0401 18:07:29.100598   18511 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 18:07:29.100609   18511 kubeadm.go:393] duration metric: took 23.312863663s to StartCluster
	I0401 18:07:29.100630   18511 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:29.100806   18511 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 18:07:29.101245   18511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:07:29.101478   18511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 18:07:29.101501   18511 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 18:07:29.103398   18511 out.go:177] * Verifying Kubernetes components...
	I0401 18:07:29.101552   18511 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
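	[editor's note] The toEnable map above lists every addon this profile turns on (ingress, registry, metrics-server, csi-hostpath-driver, and so on). Outside of the test harness, roughly the same state could be reached with the standard addon commands, for example:

	    # illustrative; addon names taken from the map above
	    minikube -p addons-881427 addons enable ingress
	    minikube -p addons-881427 addons enable metrics-server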
	I0401 18:07:29.101697   18511 config.go:182] Loaded profile config "addons-881427": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:07:29.104835   18511 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:07:29.104849   18511 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-881427"
	I0401 18:07:29.104855   18511 addons.go:69] Setting yakd=true in profile "addons-881427"
	I0401 18:07:29.104912   18511 addons.go:69] Setting inspektor-gadget=true in profile "addons-881427"
	I0401 18:07:29.104932   18511 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-881427"
	I0401 18:07:29.104933   18511 addons.go:69] Setting default-storageclass=true in profile "addons-881427"
	I0401 18:07:29.104938   18511 addons.go:69] Setting gcp-auth=true in profile "addons-881427"
	I0401 18:07:29.104951   18511 addons.go:234] Setting addon inspektor-gadget=true in "addons-881427"
	I0401 18:07:29.104956   18511 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-881427"
	I0401 18:07:29.104952   18511 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-881427"
	I0401 18:07:29.104931   18511 addons.go:69] Setting helm-tiller=true in profile "addons-881427"
	I0401 18:07:29.105246   18511 addons.go:234] Setting addon helm-tiller=true in "addons-881427"
	I0401 18:07:29.105275   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.104958   18511 addons.go:69] Setting volumesnapshots=true in profile "addons-881427"
	I0401 18:07:29.105342   18511 addons.go:234] Setting addon volumesnapshots=true in "addons-881427"
	I0401 18:07:29.104922   18511 addons.go:69] Setting metrics-server=true in profile "addons-881427"
	I0401 18:07:29.105382   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.105397   18511 addons.go:234] Setting addon metrics-server=true in "addons-881427"
	I0401 18:07:29.105423   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.104919   18511 addons.go:69] Setting cloud-spanner=true in profile "addons-881427"
	I0401 18:07:29.105481   18511 addons.go:234] Setting addon cloud-spanner=true in "addons-881427"
	I0401 18:07:29.105484   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.105507   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.104927   18511 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-881427"
	I0401 18:07:29.105526   18511 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-881427"
	I0401 18:07:29.105543   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.105507   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.105685   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.105706   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.105802   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.105829   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.105890   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.105916   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.105958   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.104963   18511 addons.go:69] Setting storage-provisioner=true in profile "addons-881427"
	I0401 18:07:29.106003   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.106025   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.104925   18511 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-881427"
	I0401 18:07:29.106061   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.106068   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.104970   18511 mustload.go:65] Loading cluster: addons-881427
	I0401 18:07:29.106328   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.106346   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.106389   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.106399   18511 config.go:182] Loaded profile config "addons-881427": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:07:29.106409   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.106026   18511 addons.go:234] Setting addon storage-provisioner=true in "addons-881427"
	I0401 18:07:29.104916   18511 addons.go:234] Setting addon yakd=true in "addons-881427"
	I0401 18:07:29.104972   18511 addons.go:69] Setting ingress=true in profile "addons-881427"
	I0401 18:07:29.106488   18511 addons.go:234] Setting addon ingress=true in "addons-881427"
	I0401 18:07:29.106513   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.106617   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.106756   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.106789   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.104971   18511 addons.go:69] Setting registry=true in profile "addons-881427"
	I0401 18:07:29.106840   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.106848   18511 addons.go:234] Setting addon registry=true in "addons-881427"
	I0401 18:07:29.106866   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.106981   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.107012   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.104967   18511 addons.go:69] Setting ingress-dns=true in profile "addons-881427"
	I0401 18:07:29.107203   18511 addons.go:234] Setting addon ingress-dns=true in "addons-881427"
	I0401 18:07:29.107239   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.107564   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.107581   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.104994   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.107944   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.107959   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.110787   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.114111   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.114489   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.114520   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.126847   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34659
	I0401 18:07:29.126949   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45409
	I0401 18:07:29.127343   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.127410   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.127861   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.127887   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.127912   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.127927   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.128288   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.128297   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.128289   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43077
	I0401 18:07:29.128524   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.128915   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.128958   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.129448   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.130127   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.130144   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.130502   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.130898   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.130925   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.130936   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35095
	I0401 18:07:29.131260   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.132296   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.132315   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.132625   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.134065   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.134088   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.134616   18511 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-881427"
	I0401 18:07:29.134665   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.135028   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.135070   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.136086   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.136120   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.143957   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45287
	I0401 18:07:29.144735   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.145310   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.145328   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.145804   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.146377   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.146413   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.147545   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33601
	I0401 18:07:29.148863   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.149922   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.149950   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.150368   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.150915   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.150942   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.153935   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39489
	I0401 18:07:29.154558   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.155384   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.155407   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.155771   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.155950   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.156562   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37291
	I0401 18:07:29.157239   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.157604   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38059
	I0401 18:07:29.157737   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.157752   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.158226   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.158522   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.158956   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.158989   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.159405   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.159432   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.159621   18511 addons.go:234] Setting addon default-storageclass=true in "addons-881427"
	I0401 18:07:29.159658   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.159892   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.159984   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.159996   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.160014   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.161729   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.164123   18511 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0401 18:07:29.165512   18511 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0401 18:07:29.165529   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0401 18:07:29.165549   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.168943   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.169317   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.169340   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.169592   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.169787   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.169994   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.170150   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.171890   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35727
	I0401 18:07:29.172805   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I0401 18:07:29.173115   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.175255   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42637
	I0401 18:07:29.175521   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38693
	I0401 18:07:29.175575   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.175764   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.175786   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.175836   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.176131   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.176146   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.176270   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.176281   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.176398   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.176822   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.176848   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.177045   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.177060   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.177268   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.177327   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.179192   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.179250   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:29.179627   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.179658   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.181839   18511 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0401 18:07:29.183101   18511 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0401 18:07:29.183118   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0401 18:07:29.183137   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.180965   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.183357   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35169
	I0401 18:07:29.183805   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.184339   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.184353   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.184815   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.184834   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.185312   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.185507   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.185944   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.185972   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.186353   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.186831   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.186868   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.187061   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.187079   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.187103   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.187252   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.187418   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.187570   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.189946   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43237
	I0401 18:07:29.190462   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.190938   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.190957   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.191284   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.191786   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.191824   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.195768   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45375
	I0401 18:07:29.196238   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.196763   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.196778   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.197330   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.197528   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.198208   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44965
	I0401 18:07:29.199181   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.200986   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41177
	I0401 18:07:29.201535   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.201560   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.201706   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39541
	I0401 18:07:29.202117   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.202152   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.202670   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.202703   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.203011   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.203031   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.203094   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.205053   18511 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0401 18:07:29.203548   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.203897   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.206722   18511 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0401 18:07:29.206734   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0401 18:07:29.206751   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.207021   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.208970   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.208986   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.209445   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.209590   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.209630   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.209870   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40445
	I0401 18:07:29.210033   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.210056   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.210461   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.210541   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.210616   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.210766   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.211016   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.211035   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.211061   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.211089   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.213035   18511 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0401 18:07:29.211569   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.213551   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35841
	I0401 18:07:29.215818   18511 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0401 18:07:29.215000   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.215373   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.215984   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33861
	I0401 18:07:29.217186   18511 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0401 18:07:29.217209   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.217505   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.217796   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.218710   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.218771   18511 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0401 18:07:29.219123   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.219945   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.220658   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.220708   18511 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0401 18:07:29.222130   18511 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0401 18:07:29.221467   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.222130   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.221717   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44357
	I0401 18:07:29.222217   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.222468   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.222685   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.223806   18511 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0401 18:07:29.224323   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.224921   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45161
	I0401 18:07:29.225625   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42191
	I0401 18:07:29.225863   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37619
	I0401 18:07:29.226283   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43667
	I0401 18:07:29.226437   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.227240   18511 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0401 18:07:29.228736   18511 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0401 18:07:29.227350   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.229815   18511 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0401 18:07:29.227840   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.227888   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.228190   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.228340   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.228589   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32927
	I0401 18:07:29.228755   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0401 18:07:29.229192   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.231047   18511 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0401 18:07:29.231065   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0401 18:07:29.231078   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.231136   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.231278   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.232341   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.232365   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.232435   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.232481   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.232498   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.232504   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.232517   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.232617   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.232636   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.232940   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.233202   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.233720   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.233739   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.233788   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.233808   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.233831   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.234445   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:29.234481   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:29.234692   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.234716   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.234732   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.234758   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.235081   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.236077   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.236093   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.236478   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.236621   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.236719   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.236803   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.237344   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.237513   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.237793   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.239492   18511 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0401 18:07:29.238327   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.238350   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.238589   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.238747   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.239323   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.240886   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.240938   18511 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0401 18:07:29.242340   18511 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0401 18:07:29.242365   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0401 18:07:29.242386   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.240964   18511 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0401 18:07:29.242413   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0401 18:07:29.242430   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.244178   18511 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0401 18:07:29.241396   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.245944   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.246191   18511 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 18:07:29.246202   18511 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0401 18:07:29.246298   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.246449   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.246885   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.247512   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 18:07:29.247534   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.247000   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.247306   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45289
	I0401 18:07:29.247568   18511 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0401 18:07:29.247673   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.247680   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.248396   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.248425   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.248442   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.248647   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.249611   18511 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0401 18:07:29.249720   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.249763   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.250189   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.250364   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.250862   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.252118   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46209
	I0401 18:07:29.252141   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.252244   18511 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0401 18:07:29.252784   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.253542   18511 out.go:177]   - Using image docker.io/busybox:stable
	I0401 18:07:29.253638   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.254942   18511 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0401 18:07:29.253727   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.254953   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0401 18:07:29.254968   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.254969   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.253964   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.256516   18511 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0401 18:07:29.256529   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0401 18:07:29.256544   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.254008   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.254164   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37773
	I0401 18:07:29.254268   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.254307   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.255208   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.257477   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.257493   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.257680   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.257780   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.258111   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.258250   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.258705   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.258722   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.258781   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.259154   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.259333   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.259212   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.259959   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.262069   18511 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 18:07:29.260524   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.260995   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.261242   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.262649   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.263305   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.263963   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.265875   18511 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 18:07:29.265887   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 18:07:29.265903   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.265955   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.265977   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.266061   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.266079   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.268248   18511 out.go:177]   - Using image docker.io/registry:2.8.3
	I0401 18:07:29.266530   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44073
	I0401 18:07:29.266674   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.266694   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.271197   18511 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0401 18:07:29.269701   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.270162   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.270312   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.270416   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.273733   18511 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0401 18:07:29.273749   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0401 18:07:29.273777   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.273808   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.273832   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.273857   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.273865   18511 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0401 18:07:29.275780   18511 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0401 18:07:29.275796   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0401 18:07:29.275813   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.274169   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.274198   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.281886   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.282019   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:29.282204   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.283005   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:29.283023   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:29.283321   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:29.283400   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.283564   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:29.283791   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.283826   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.283985   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.284127   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.284272   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.284625   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.284654   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.285161   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.285181   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.285354   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.285497   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.285539   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:29.285650   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.285799   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:29.285850   18511 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 18:07:29.285863   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 18:07:29.285877   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:29.288557   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.288912   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:29.288931   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:29.289216   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:29.289396   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:29.289562   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:29.289740   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	W0401 18:07:29.291715   18511 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:39110->192.168.39.214:22: read: connection reset by peer
	I0401 18:07:29.291751   18511 retry.go:31] will retry after 133.062826ms: ssh: handshake failed: read tcp 192.168.39.1:39110->192.168.39.214:22: read: connection reset by peer
	I0401 18:07:29.601285   18511 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0401 18:07:29.601316   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0401 18:07:29.771193   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
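Each addon manifest above follows the same two-step pattern: the file is streamed into the guest over SSH (the "scp memory --> /etc/kubernetes/addons/..." lines) and then applied with the kubectl binary bundled in the VM, using the in-guest kubeconfig. A rough hand-run equivalent is sketched below, using the key path, user and IP shown in the sshutil lines above; the local manifest filename is hypothetical, and minikube streams the bytes over an existing SSH session rather than shelling out to scp/ssh.

    # Copy one addon manifest into the guest, then apply it with the bundled kubectl.
    scp -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa \
        yakd-ns.yaml docker@192.168.39.214:/tmp/yakd-ns.yaml
    ssh -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa \
        docker@192.168.39.214 \
        'sudo mv /tmp/yakd-ns.yaml /etc/kubernetes/addons/yakd-ns.yaml && \
         sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
           /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml'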
	I0401 18:07:29.827808   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 18:07:29.829199   18511 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0401 18:07:29.829222   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0401 18:07:29.832447   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 18:07:29.833654   18511 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0401 18:07:29.833672   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0401 18:07:29.854817   18511 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 18:07:29.854848   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0401 18:07:29.881750   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0401 18:07:29.899630   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0401 18:07:29.919065   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0401 18:07:29.921975   18511 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0401 18:07:29.922001   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0401 18:07:29.933421   18511 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0401 18:07:29.933441   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0401 18:07:29.937383   18511 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0401 18:07:29.937407   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0401 18:07:29.939080   18511 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0401 18:07:29.939101   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0401 18:07:29.943111   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0401 18:07:30.034695   18511 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0401 18:07:30.034724   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0401 18:07:30.043835   18511 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0401 18:07:30.043859   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0401 18:07:30.132722   18511 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0401 18:07:30.132749   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0401 18:07:30.158051   18511 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 18:07:30.158075   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 18:07:30.207629   18511 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0401 18:07:30.207660   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0401 18:07:30.232444   18511 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.130935041s)
	I0401 18:07:30.232518   18511 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.127656068s)
	I0401 18:07:30.232597   18511 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 18:07:30.232605   18511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
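The pipeline above edits the coredns ConfigMap in place: it inserts a hosts block mapping host.minikube.internal to the host-side gateway 192.168.39.1 (and enables the log plugin), then replaces the ConfigMap. A quick way to confirm the record landed, run from the host against the same context:

    kubectl --context addons-881427 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # The Corefile should now contain, just above the "forward . /etc/resolv.conf" line:
    #     hosts {
    #        192.168.39.1 host.minikube.internal
    #        fallthrough
    #     }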
	I0401 18:07:30.236455   18511 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0401 18:07:30.236473   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0401 18:07:30.240946   18511 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0401 18:07:30.240964   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0401 18:07:30.272222   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0401 18:07:30.376582   18511 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0401 18:07:30.376608   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0401 18:07:30.447422   18511 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0401 18:07:30.447452   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0401 18:07:30.458544   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0401 18:07:30.547441   18511 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 18:07:30.547503   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 18:07:30.557080   18511 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0401 18:07:30.557109   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0401 18:07:30.609328   18511 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0401 18:07:30.609356   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0401 18:07:30.804735   18511 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0401 18:07:30.804760   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0401 18:07:30.810859   18511 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0401 18:07:30.810884   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0401 18:07:30.831095   18511 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0401 18:07:30.831121   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0401 18:07:30.934738   18511 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0401 18:07:30.934761   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0401 18:07:30.994149   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 18:07:31.164427   18511 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0401 18:07:31.164457   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0401 18:07:31.218895   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0401 18:07:31.310557   18511 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0401 18:07:31.310584   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0401 18:07:31.321942   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0401 18:07:31.596152   18511 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0401 18:07:31.596179   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0401 18:07:31.784906   18511 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0401 18:07:31.784928   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0401 18:07:32.046833   18511 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0401 18:07:32.046861   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0401 18:07:32.180706   18511 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0401 18:07:32.180733   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0401 18:07:32.372542   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0401 18:07:32.592144   18511 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0401 18:07:32.592173   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0401 18:07:33.085485   18511 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0401 18:07:33.085509   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0401 18:07:33.490965   18511 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0401 18:07:33.490991   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0401 18:07:33.772229   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0401 18:07:34.188302   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.360462515s)
	I0401 18:07:34.188347   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:34.188356   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:34.188633   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:34.188679   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:34.188688   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:34.188697   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:34.188703   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:34.188952   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:34.188969   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:34.188982   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:34.189416   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.418182636s)
	I0401 18:07:34.189448   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:34.189458   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:34.189665   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:34.189684   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:34.189692   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:34.189700   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:34.189910   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:34.189924   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:34.203835   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:34.203857   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:34.204196   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:34.204218   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:34.952018   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.119536913s)
	I0401 18:07:34.952054   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.070259733s)
	I0401 18:07:34.952068   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:34.952080   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:34.952088   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.052425015s)
	I0401 18:07:34.952119   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:34.952093   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:34.952137   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:34.952153   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:34.952518   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:34.952523   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:34.952531   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:34.952537   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:34.952545   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:34.952556   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:34.952564   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:34.952566   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:34.952603   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:34.952616   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:34.952643   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:34.952837   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:34.952847   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:34.952857   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:34.952905   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:34.952934   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:34.952942   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:34.954426   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:34.954457   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:34.954475   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:34.954495   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:34.954704   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:34.954720   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:36.043776   18511 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0401 18:07:36.043817   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:36.046659   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:36.047072   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:36.047103   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:36.047228   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:36.047453   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:36.047622   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:36.047769   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:36.921790   18511 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0401 18:07:37.370425   18511 addons.go:234] Setting addon gcp-auth=true in "addons-881427"
	I0401 18:07:37.370482   18511 host.go:66] Checking if "addons-881427" exists ...
	I0401 18:07:37.370784   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:37.370813   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:37.386154   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34077
	I0401 18:07:37.386621   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:37.387075   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:37.387093   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:37.387491   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:37.387927   18511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:07:37.387956   18511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:07:37.402879   18511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38799
	I0401 18:07:37.403459   18511 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:07:37.403962   18511 main.go:141] libmachine: Using API Version  1
	I0401 18:07:37.403985   18511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:07:37.404332   18511 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:07:37.404516   18511 main.go:141] libmachine: (addons-881427) Calling .GetState
	I0401 18:07:37.406254   18511 main.go:141] libmachine: (addons-881427) Calling .DriverName
	I0401 18:07:37.406480   18511 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0401 18:07:37.406506   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHHostname
	I0401 18:07:37.409172   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:37.409715   18511 main.go:141] libmachine: (addons-881427) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:04:cb", ip: ""} in network mk-addons-881427: {Iface:virbr1 ExpiryTime:2024-04-01 19:06:48 +0000 UTC Type:0 Mac:52:54:00:4b:04:cb Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-881427 Clientid:01:52:54:00:4b:04:cb}
	I0401 18:07:37.409743   18511 main.go:141] libmachine: (addons-881427) DBG | domain addons-881427 has defined IP address 192.168.39.214 and MAC address 52:54:00:4b:04:cb in network mk-addons-881427
	I0401 18:07:37.409863   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHPort
	I0401 18:07:37.410054   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHKeyPath
	I0401 18:07:37.410214   18511 main.go:141] libmachine: (addons-881427) Calling .GetSSHUsername
	I0401 18:07:37.410328   18511 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/addons-881427/id_rsa Username:docker}
	I0401 18:07:38.955306   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.036196484s)
	I0401 18:07:38.955346   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.012208075s)
	I0401 18:07:38.955366   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.955375   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.955392   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.955427   18511 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.722795841s)
	I0401 18:07:38.955443   18511 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.722825819s)
	I0401 18:07:38.955379   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.955453   18511 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0401 18:07:38.955525   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.683269612s)
	I0401 18:07:38.955548   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.955555   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.955552   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.496983646s)
	I0401 18:07:38.955596   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.955604   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.955632   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.96145091s)
	I0401 18:07:38.955654   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.955670   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.955731   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.736796486s)
	I0401 18:07:38.955752   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.63378487s)
	I0401 18:07:38.955767   18511 main.go:141] libmachine: Making call to close driver server
	W0401 18:07:38.955757   18511 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0401 18:07:38.955790   18511 retry.go:31] will retry after 220.630061ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
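The failure and retry above are a CRD-establishment race: the VolumeSnapshotClass from csi-hostpath-snapshotclass.yaml is applied in the same invocation that creates the snapshot.storage.k8s.io CRDs, and the API server has not registered the new kind yet, so the mapping lookup fails. minikube handles it by retrying about 220ms later (the retry below re-applies with --force). If you were applying these manifests by hand, one way to sidestep the race, sketched here and not what minikube itself does, is to wait for the CRDs to be Established before applying the objects that depend on them:

    # Apply the snapshot CRDs first, wait for them to be Established, then apply
    # the VolumeSnapshotClass that depends on them.
    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
                  -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
                  -f snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for=condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io \
      crd/volumesnapshotcontents.snapshot.storage.k8s.io \
      crd/volumesnapshots.snapshot.storage.k8s.io
    kubectl apply -f csi-hostpath-snapshotclass.yaml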
	I0401 18:07:38.955774   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.955841   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.583265242s)
	I0401 18:07:38.955857   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.955857   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:38.955865   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.955895   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.955907   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.955916   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.955923   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.956258   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.956270   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.956279   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.956287   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.956446   18511 node_ready.go:35] waiting up to 6m0s for node "addons-881427" to be "Ready" ...
	I0401 18:07:38.959120   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:38.959135   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.959141   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.959148   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.959148   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.959155   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.959163   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.959171   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:38.959172   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.959179   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.959190   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.959190   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.959198   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.959214   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.959183   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.959229   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.959198   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.959251   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.959202   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.959271   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.959153   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.959321   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:38.959322   18511 addons.go:470] Verifying addon ingress=true in "addons-881427"
	I0401 18:07:38.959371   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.959404   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.959424   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:38.959442   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.959164   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:38.962302   18511 out.go:177] * Verifying ingress addon...
	I0401 18:07:38.959128   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:38.959219   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:38.959259   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:38.959599   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:38.959621   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.959625   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.959638   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.959649   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:38.959657   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:38.959671   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:38.959702   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.962364   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.962375   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.962387   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.962401   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.964246   18511 addons.go:470] Verifying addon metrics-server=true in "addons-881427"
	I0401 18:07:38.962724   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:38.964266   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:38.962746   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:38.963849   18511 node_ready.go:49] node "addons-881427" has status "Ready":"True"
	I0401 18:07:38.965710   18511 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-881427 service yakd-dashboard -n yakd-dashboard
	
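The command printed above opens the yakd-dashboard service in a browser once its pod is Ready; adding the standard --url flag prints the NodePort URL instead, which is handier on a headless CI host:

    minikube -p addons-881427 service yakd-dashboard -n yakd-dashboard --url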
	I0401 18:07:38.964341   18511 node_ready.go:38] duration metric: took 7.854162ms for node "addons-881427" to be "Ready" ...
	I0401 18:07:38.964197   18511 addons.go:470] Verifying addon registry=true in "addons-881427"
	I0401 18:07:38.964899   18511 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0401 18:07:38.967018   18511 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 18:07:38.968433   18511 out.go:177] * Verifying registry addon...
	I0401 18:07:38.970649   18511 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0401 18:07:39.014890   18511 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0401 18:07:39.014916   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:39.016540   18511 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-7fhsg" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.019862   18511 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0401 18:07:39.019880   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:39.023945   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:39.023960   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:39.024275   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:39.024297   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:39.024300   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:39.033757   18511 pod_ready.go:92] pod "coredns-76f75df574-7fhsg" in "kube-system" namespace has status "Ready":"True"
	I0401 18:07:39.033784   18511 pod_ready.go:81] duration metric: took 17.222317ms for pod "coredns-76f75df574-7fhsg" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.033797   18511 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-fgjvr" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.121401   18511 pod_ready.go:92] pod "coredns-76f75df574-fgjvr" in "kube-system" namespace has status "Ready":"True"
	I0401 18:07:39.121426   18511 pod_ready.go:81] duration metric: took 87.619988ms for pod "coredns-76f75df574-fgjvr" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.121438   18511 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-881427" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.141254   18511 pod_ready.go:92] pod "etcd-addons-881427" in "kube-system" namespace has status "Ready":"True"
	I0401 18:07:39.141282   18511 pod_ready.go:81] duration metric: took 19.83635ms for pod "etcd-addons-881427" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.141294   18511 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-881427" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.155198   18511 pod_ready.go:92] pod "kube-apiserver-addons-881427" in "kube-system" namespace has status "Ready":"True"
	I0401 18:07:39.155219   18511 pod_ready.go:81] duration metric: took 13.916644ms for pod "kube-apiserver-addons-881427" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.155232   18511 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-881427" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.177517   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0401 18:07:39.360176   18511 pod_ready.go:92] pod "kube-controller-manager-addons-881427" in "kube-system" namespace has status "Ready":"True"
	I0401 18:07:39.360213   18511 pod_ready.go:81] duration metric: took 204.974264ms for pod "kube-controller-manager-addons-881427" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.360225   18511 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fz2ml" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.462952   18511 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-881427" context rescaled to 1 replicas
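The rescale above trims CoreDNS from its default two replicas down to one, which is all a single-node cluster needs. It is done through the Kubernetes API from kapi.go, and amounts to roughly:

    kubectl --context addons-881427 -n kube-system scale deployment coredns --replicas=1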
	I0401 18:07:39.476395   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:39.487085   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:39.760703   18511 pod_ready.go:92] pod "kube-proxy-fz2ml" in "kube-system" namespace has status "Ready":"True"
	I0401 18:07:39.760731   18511 pod_ready.go:81] duration metric: took 400.497834ms for pod "kube-proxy-fz2ml" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.760744   18511 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-881427" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:39.983177   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:39.993106   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:40.160007   18511 pod_ready.go:92] pod "kube-scheduler-addons-881427" in "kube-system" namespace has status "Ready":"True"
	I0401 18:07:40.160029   18511 pod_ready.go:81] duration metric: took 399.277189ms for pod "kube-scheduler-addons-881427" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:40.160039   18511 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace to be "Ready" ...
	I0401 18:07:40.480719   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:40.488617   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:40.972907   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:40.982748   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:41.493521   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:41.500156   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:41.535253   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.762966614s)
	I0401 18:07:41.535313   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:41.535320   18511 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.12881588s)
	I0401 18:07:41.537264   18511 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0401 18:07:41.535326   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:41.540616   18511 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0401 18:07:41.539169   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:41.539186   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:41.542901   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:41.542911   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:41.542917   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:41.542958   18511 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0401 18:07:41.542982   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0401 18:07:41.543246   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:41.543252   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:41.543292   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:41.543313   18511 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-881427"
	I0401 18:07:41.544797   18511 out.go:177] * Verifying csi-hostpath-driver addon...
	I0401 18:07:41.546812   18511 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0401 18:07:41.592018   18511 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0401 18:07:41.592059   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:41.743440   18511 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0401 18:07:41.743464   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0401 18:07:41.799259   18511 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0401 18:07:41.799284   18511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0401 18:07:41.875310   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.697751489s)
	I0401 18:07:41.875362   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:41.875373   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:41.875664   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:41.875693   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:41.875704   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:41.875715   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:41.876593   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:41.876630   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:41.876649   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:41.936260   18511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0401 18:07:41.975391   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:41.978591   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:42.052926   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:42.168168   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:07:42.475398   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:42.480694   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:42.553047   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:42.990743   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:42.991405   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:43.066454   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:43.257309   18511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.320998071s)
	I0401 18:07:43.257358   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:43.257370   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:43.257689   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:43.257705   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:43.257713   18511 main.go:141] libmachine: Making call to close driver server
	I0401 18:07:43.257721   18511 main.go:141] libmachine: (addons-881427) Calling .Close
	I0401 18:07:43.258001   18511 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:07:43.258022   18511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:07:43.258049   18511 main.go:141] libmachine: (addons-881427) DBG | Closing plugin on server side
	I0401 18:07:43.259119   18511 addons.go:470] Verifying addon gcp-auth=true in "addons-881427"
	I0401 18:07:43.261741   18511 out.go:177] * Verifying gcp-auth addon...
	I0401 18:07:43.264558   18511 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0401 18:07:43.291105   18511 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0401 18:07:43.291130   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:43.478768   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:43.501357   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:43.554154   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:43.769184   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:43.973120   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:43.976513   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:44.052357   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:44.268803   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:44.472450   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:44.475820   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:44.553253   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:44.668599   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:07:44.769636   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:44.974586   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:44.975195   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:45.054396   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:45.268852   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:45.472446   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:45.476309   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:45.552921   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:45.769280   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:45.971976   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:45.975457   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:46.054227   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:46.268609   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:46.605539   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:46.609121   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:46.613268   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:46.698861   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:07:46.769004   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:46.978030   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:46.978905   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:47.054971   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:47.268429   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:47.478054   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:47.480344   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:47.553386   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:47.769215   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:47.972728   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:47.976340   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:48.056843   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:48.268663   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:48.473446   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:48.476668   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:48.552810   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:48.770123   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:48.972285   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:48.976462   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:49.055137   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:49.168107   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:07:49.268640   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:49.472960   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:49.475289   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:49.552588   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:49.769818   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:49.973337   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:49.976040   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:50.052885   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:50.269500   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:50.471379   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:50.475057   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:50.552871   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:50.769060   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:50.972354   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:50.976078   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:51.053068   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:51.268987   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:51.473198   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:51.475498   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:51.552451   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:51.671022   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:07:51.772117   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:51.973132   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:51.976746   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:52.053811   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:52.268709   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:52.472182   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:52.476104   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:52.552475   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:52.769192   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:52.971729   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:52.975459   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:53.052612   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:53.268919   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:53.472647   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:53.476122   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:53.552011   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:53.768722   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:53.972423   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:53.976084   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:54.054791   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:54.165871   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:07:54.272101   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:54.473070   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:54.478317   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:54.552326   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:54.768683   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:54.971988   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:54.975258   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:55.052709   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:55.274508   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:55.472041   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:55.475664   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:55.553038   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:55.772531   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:55.972358   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:55.978116   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:56.054555   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:56.168127   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:07:56.269332   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:56.473782   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:56.476724   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:56.564078   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:56.768738   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:56.978415   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:56.986695   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:57.052971   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:57.269365   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:57.472544   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:57.484480   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:57.553702   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:57.768749   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:57.974059   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:57.976082   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:58.054083   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:58.168430   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:07:58.271756   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:58.473804   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:58.475671   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:58.552775   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:58.770500   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:58.972162   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:58.975414   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:59.052602   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:59.289962   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:59.473109   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:59.476707   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:07:59.553404   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:07:59.768974   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:07:59.971939   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:07:59.975072   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:00.054418   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:00.269025   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:00.472065   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:00.474961   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:00.553526   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:00.672856   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:08:00.768080   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:00.972753   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:00.976993   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:01.052540   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:01.269007   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:01.473529   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:01.476995   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:01.554524   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:01.769529   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:01.975348   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:01.978031   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:02.053217   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:02.270830   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:02.472246   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:02.475484   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:02.552672   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:02.770248   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:02.979654   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:02.981815   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:03.053145   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:03.167881   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:08:03.268753   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:03.472071   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:03.476697   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:03.553069   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:03.768098   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:03.972902   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:03.980274   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:04.052340   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:04.269723   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:04.475079   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:04.477093   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:04.553116   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:04.769884   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:04.971784   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:04.975171   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:05.052818   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:05.168277   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:08:05.269393   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:05.473769   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:05.476254   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:05.554582   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:05.768982   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:05.972510   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:05.975774   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:06.065116   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:06.269775   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:06.472543   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:06.476151   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:06.553575   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:06.772655   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:06.972446   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:06.975227   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:07.052728   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:07.269543   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:07.472290   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:07.475982   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:07.553135   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:07.676903   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:08:07.770183   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:07.972891   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:07.976475   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:08.052964   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:08.268873   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:08.476615   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:08.479559   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:08.553608   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:08.769704   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:08.973009   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:08.976085   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:09.055121   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:09.269505   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:09.474985   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:09.478052   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:09.554915   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:09.769849   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:09.972356   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:09.976893   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:10.052816   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:10.167420   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:08:10.268926   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:10.472924   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:10.477748   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:10.553031   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:10.948636   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:10.983814   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:11.004885   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:11.054650   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:11.271798   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:11.473385   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:11.476422   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:11.553366   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:11.769914   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:11.972184   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:11.976897   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 18:08:12.056710   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:12.198334   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:08:12.269858   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:12.471730   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:12.475585   18511 kapi.go:107] duration metric: took 33.504934964s to wait for kubernetes.io/minikube-addons=registry ...
	I0401 18:08:12.553144   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:12.768783   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:12.972704   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:13.052617   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:13.272782   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:13.472491   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:13.553436   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:13.771063   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:13.972010   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:14.052725   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:14.269700   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:14.474356   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:14.557066   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:14.678672   18511 pod_ready.go:102] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"False"
	I0401 18:08:14.770768   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:14.978508   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:15.055156   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:15.271815   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:15.473052   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:15.553007   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:15.770333   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:15.988362   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:16.057792   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:16.166030   18511 pod_ready.go:92] pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace has status "Ready":"True"
	I0401 18:08:16.166060   18511 pod_ready.go:81] duration metric: took 36.006014081s for pod "metrics-server-75d6c48ddd-s96px" in "kube-system" namespace to be "Ready" ...
	I0401 18:08:16.166072   18511 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-m86dq" in "kube-system" namespace to be "Ready" ...
	I0401 18:08:16.172695   18511 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-m86dq" in "kube-system" namespace has status "Ready":"True"
	I0401 18:08:16.172724   18511 pod_ready.go:81] duration metric: took 6.64372ms for pod "nvidia-device-plugin-daemonset-m86dq" in "kube-system" namespace to be "Ready" ...
	I0401 18:08:16.172748   18511 pod_ready.go:38] duration metric: took 37.205708904s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 18:08:16.172767   18511 api_server.go:52] waiting for apiserver process to appear ...
	I0401 18:08:16.172825   18511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:08:16.217961   18511 api_server.go:72] duration metric: took 47.116427575s to wait for apiserver process to appear ...
	I0401 18:08:16.217984   18511 api_server.go:88] waiting for apiserver healthz status ...
	I0401 18:08:16.218000   18511 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8443/healthz ...
	I0401 18:08:16.222594   18511 api_server.go:279] https://192.168.39.214:8443/healthz returned 200:
	ok
	I0401 18:08:16.223591   18511 api_server.go:141] control plane version: v1.29.3
	I0401 18:08:16.223619   18511 api_server.go:131] duration metric: took 5.629585ms to wait for apiserver health ...
	I0401 18:08:16.223627   18511 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 18:08:16.234779   18511 system_pods.go:59] 18 kube-system pods found
	I0401 18:08:16.234817   18511 system_pods.go:61] "coredns-76f75df574-7fhsg" [8e044680-92e0-46d9-aa37-6e95b606d9c6] Running
	I0401 18:08:16.234826   18511 system_pods.go:61] "csi-hostpath-attacher-0" [f64f7572-e225-467c-ab07-def542d15d28] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0401 18:08:16.234835   18511 system_pods.go:61] "csi-hostpath-resizer-0" [b630782c-3751-4074-92ca-f544f91651c3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0401 18:08:16.234843   18511 system_pods.go:61] "csi-hostpathplugin-fs5mb" [4f9b358f-3334-45d6-bf37-8b9d4a5cdf22] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0401 18:08:16.234851   18511 system_pods.go:61] "etcd-addons-881427" [06c62b43-d87e-4129-8091-f93fd58025e0] Running
	I0401 18:08:16.234856   18511 system_pods.go:61] "kube-apiserver-addons-881427" [1fa6dfbf-bb14-4943-bb7e-89b91749d25a] Running
	I0401 18:08:16.234862   18511 system_pods.go:61] "kube-controller-manager-addons-881427" [60bef674-d6a8-4ce8-bbe3-08d8ff1bd11e] Running
	I0401 18:08:16.234871   18511 system_pods.go:61] "kube-ingress-dns-minikube" [2f402a8d-9920-4ab9-b8f5-b24ff9528a04] Running
	I0401 18:08:16.234876   18511 system_pods.go:61] "kube-proxy-fz2ml" [6263627a-2781-45c7-b2a4-b06ab6c04879] Running
	I0401 18:08:16.234886   18511 system_pods.go:61] "kube-scheduler-addons-881427" [5576952d-82a5-40ba-a78e-91409f3a748f] Running
	I0401 18:08:16.234891   18511 system_pods.go:61] "metrics-server-75d6c48ddd-s96px" [ae3f8b9b-1cda-4f49-bb5d-a99466fe6135] Running
	I0401 18:08:16.234897   18511 system_pods.go:61] "nvidia-device-plugin-daemonset-m86dq" [dd4046ef-ce6a-48e2-9d0e-bf3aa98f9156] Running
	I0401 18:08:16.234903   18511 system_pods.go:61] "registry-9jpg9" [257b26ce-194a-4b12-b7f6-a5da0f9cf9e6] Running
	I0401 18:08:16.234907   18511 system_pods.go:61] "registry-proxy-hhmlr" [dae5e9cd-9b99-49cd-aa43-a0dd80d05e0f] Running
	I0401 18:08:16.234916   18511 system_pods.go:61] "snapshot-controller-58dbcc7b99-gpmcg" [56b71b6f-9ddf-43ca-9893-1895d0c71024] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0401 18:08:16.234933   18511 system_pods.go:61] "snapshot-controller-58dbcc7b99-rtgfk" [561da000-21ec-4e67-a8df-8aa9357a125f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0401 18:08:16.234938   18511 system_pods.go:61] "storage-provisioner" [2d770fd8-541f-4ea5-bbff-8bdba366a91b] Running
	I0401 18:08:16.234941   18511 system_pods.go:61] "tiller-deploy-7b677967b9-swl9s" [a6dccfe9-2e74-4db2-b2b9-a8e8e6abcf92] Running
	I0401 18:08:16.234948   18511 system_pods.go:74] duration metric: took 11.315063ms to wait for pod list to return data ...
	I0401 18:08:16.234957   18511 default_sa.go:34] waiting for default service account to be created ...
	I0401 18:08:16.236872   18511 default_sa.go:45] found service account: "default"
	I0401 18:08:16.236886   18511 default_sa.go:55] duration metric: took 1.923284ms for default service account to be created ...
	I0401 18:08:16.236893   18511 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 18:08:16.245335   18511 system_pods.go:86] 18 kube-system pods found
	I0401 18:08:16.245357   18511 system_pods.go:89] "coredns-76f75df574-7fhsg" [8e044680-92e0-46d9-aa37-6e95b606d9c6] Running
	I0401 18:08:16.245366   18511 system_pods.go:89] "csi-hostpath-attacher-0" [f64f7572-e225-467c-ab07-def542d15d28] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0401 18:08:16.245372   18511 system_pods.go:89] "csi-hostpath-resizer-0" [b630782c-3751-4074-92ca-f544f91651c3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0401 18:08:16.245380   18511 system_pods.go:89] "csi-hostpathplugin-fs5mb" [4f9b358f-3334-45d6-bf37-8b9d4a5cdf22] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0401 18:08:16.245421   18511 system_pods.go:89] "etcd-addons-881427" [06c62b43-d87e-4129-8091-f93fd58025e0] Running
	I0401 18:08:16.245428   18511 system_pods.go:89] "kube-apiserver-addons-881427" [1fa6dfbf-bb14-4943-bb7e-89b91749d25a] Running
	I0401 18:08:16.245433   18511 system_pods.go:89] "kube-controller-manager-addons-881427" [60bef674-d6a8-4ce8-bbe3-08d8ff1bd11e] Running
	I0401 18:08:16.245439   18511 system_pods.go:89] "kube-ingress-dns-minikube" [2f402a8d-9920-4ab9-b8f5-b24ff9528a04] Running
	I0401 18:08:16.245443   18511 system_pods.go:89] "kube-proxy-fz2ml" [6263627a-2781-45c7-b2a4-b06ab6c04879] Running
	I0401 18:08:16.245450   18511 system_pods.go:89] "kube-scheduler-addons-881427" [5576952d-82a5-40ba-a78e-91409f3a748f] Running
	I0401 18:08:16.245454   18511 system_pods.go:89] "metrics-server-75d6c48ddd-s96px" [ae3f8b9b-1cda-4f49-bb5d-a99466fe6135] Running
	I0401 18:08:16.245458   18511 system_pods.go:89] "nvidia-device-plugin-daemonset-m86dq" [dd4046ef-ce6a-48e2-9d0e-bf3aa98f9156] Running
	I0401 18:08:16.245462   18511 system_pods.go:89] "registry-9jpg9" [257b26ce-194a-4b12-b7f6-a5da0f9cf9e6] Running
	I0401 18:08:16.245466   18511 system_pods.go:89] "registry-proxy-hhmlr" [dae5e9cd-9b99-49cd-aa43-a0dd80d05e0f] Running
	I0401 18:08:16.245472   18511 system_pods.go:89] "snapshot-controller-58dbcc7b99-gpmcg" [56b71b6f-9ddf-43ca-9893-1895d0c71024] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0401 18:08:16.245478   18511 system_pods.go:89] "snapshot-controller-58dbcc7b99-rtgfk" [561da000-21ec-4e67-a8df-8aa9357a125f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0401 18:08:16.245483   18511 system_pods.go:89] "storage-provisioner" [2d770fd8-541f-4ea5-bbff-8bdba366a91b] Running
	I0401 18:08:16.245489   18511 system_pods.go:89] "tiller-deploy-7b677967b9-swl9s" [a6dccfe9-2e74-4db2-b2b9-a8e8e6abcf92] Running
	I0401 18:08:16.245498   18511 system_pods.go:126] duration metric: took 8.599119ms to wait for k8s-apps to be running ...
	I0401 18:08:16.245507   18511 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 18:08:16.245547   18511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:08:16.271316   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:16.296731   18511 system_svc.go:56] duration metric: took 51.218036ms WaitForService to wait for kubelet
	I0401 18:08:16.296761   18511 kubeadm.go:576] duration metric: took 47.195228282s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 18:08:16.296779   18511 node_conditions.go:102] verifying NodePressure condition ...
	I0401 18:08:16.300661   18511 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 18:08:16.300687   18511 node_conditions.go:123] node cpu capacity is 2
	I0401 18:08:16.300698   18511 node_conditions.go:105] duration metric: took 3.914545ms to run NodePressure ...
	I0401 18:08:16.300710   18511 start.go:240] waiting for startup goroutines ...
	I0401 18:08:16.472211   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:16.553492   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:16.768685   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:16.972103   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:17.052938   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:17.272098   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:17.471381   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:17.553149   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:17.768873   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:18.204964   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:18.208546   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:18.268539   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:18.471434   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:18.552098   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:18.769588   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:18.972561   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:19.105353   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:19.272826   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:19.472766   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:19.553165   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:19.769104   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:19.977664   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:20.056068   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:20.268882   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:20.472722   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:20.555881   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:20.776199   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:20.972181   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:21.053621   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:21.272062   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:21.472834   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:21.552745   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:21.768840   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:21.972858   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:22.053365   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:22.270205   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:22.471833   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:22.552797   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:22.769105   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:22.974067   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:23.054032   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:23.434978   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:23.472929   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:23.558129   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:23.768463   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:23.973365   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:24.056082   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:24.270671   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:24.472411   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:24.553323   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:24.769054   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:24.972613   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:25.052616   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:25.270217   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:25.471569   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:25.553030   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:25.768915   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:25.974054   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:26.053201   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:26.268591   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:26.471780   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:26.552858   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:26.768829   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:26.971994   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:27.053634   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:27.269167   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:27.476750   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:27.870079   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:27.874460   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:27.971779   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:28.053197   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:28.268836   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:28.471825   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:28.552776   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:28.773025   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:28.972681   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:29.052917   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:29.271304   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:29.471804   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:29.552717   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:29.768844   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:29.973056   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:30.055266   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:30.272854   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:30.472563   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:30.553737   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:30.768795   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:30.973132   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:31.053395   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:31.270480   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:31.472572   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:31.568572   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:32.172897   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:32.173322   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:32.182836   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:32.269186   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:32.471685   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:32.557515   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:32.769553   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:32.972275   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:33.053751   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:33.268457   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:33.471927   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:33.553096   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:33.769236   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:33.978329   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:34.054941   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:34.272927   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:34.472901   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:34.553074   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:34.770097   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:34.975388   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:35.068792   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:35.268835   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:35.473634   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:35.570056   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:35.775366   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:35.979204   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:36.056583   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:36.270012   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:36.472124   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:36.553314   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:36.769136   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:36.972095   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:37.052611   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:37.273404   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:37.472637   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:37.553150   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:37.768885   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:37.973146   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:38.052634   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:38.269987   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:38.473670   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:38.553660   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:38.768460   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:38.997743   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:39.068382   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:39.268190   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:39.472909   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:39.561692   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:39.769001   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:39.972462   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:40.054178   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:40.269442   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:40.473202   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:40.553195   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:40.774330   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:40.972248   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:41.052896   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:41.271221   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:41.472371   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:41.552999   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:41.770358   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:41.972065   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:42.052962   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:42.271043   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:42.473224   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:42.554590   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:42.768420   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:42.973175   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:43.054620   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 18:08:43.268169   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:43.473244   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:43.559942   18511 kapi.go:107] duration metric: took 1m2.013128055s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0401 18:08:43.771530   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:43.972213   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:44.269075   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:44.472966   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:44.768848   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:44.972510   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:45.269001   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:45.472330   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:45.769423   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:45.972558   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:46.723482   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:46.727533   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:46.770691   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:46.973089   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:47.269912   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:47.472437   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:47.768906   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:47.972545   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:48.270579   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:48.472588   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:48.768817   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:48.972380   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:49.268463   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:49.472322   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:49.769187   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:49.975462   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:50.270104   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:50.472563   18511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 18:08:50.768927   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:50.972939   18511 kapi.go:107] duration metric: took 1m12.00803701s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0401 18:08:51.269358   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:51.771838   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:52.268474   18511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 18:08:52.779509   18511 kapi.go:107] duration metric: took 1m9.514951252s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0401 18:08:52.781237   18511 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-881427 cluster.
	I0401 18:08:52.782621   18511 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0401 18:08:52.783976   18511 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0401 18:08:52.785369   18511 out.go:177] * Enabled addons: ingress-dns, default-storageclass, nvidia-device-plugin, storage-provisioner, cloud-spanner, helm-tiller, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0401 18:08:52.786806   18511 addons.go:505] duration metric: took 1m23.685254168s for enable addons: enabled=[ingress-dns default-storageclass nvidia-device-plugin storage-provisioner cloud-spanner helm-tiller inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0401 18:08:52.786849   18511 start.go:245] waiting for cluster config update ...
	I0401 18:08:52.786867   18511 start.go:254] writing updated cluster config ...
	I0401 18:08:52.787090   18511 ssh_runner.go:195] Run: rm -f paused
	I0401 18:08:52.843163   18511 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0401 18:08:52.845001   18511 out.go:177] * Done! kubectl is now configured to use "addons-881427" cluster and "default" namespace by default
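	The gcp-auth messages above note that a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. A minimal illustrative manifest follows; the pod name, container, and image are placeholders, and the label value "true" is an assumption since the log only names the key:

	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: skip-gcp-creds-example      # placeholder name, for illustration only
	    labels:
	      gcp-auth-skip-secret: "true"    # assumed value; the message above only specifies the key
	  spec:
	    containers:
	    - name: app                       # placeholder container
	      image: nginx

	For pods that already exist, the same output suggests recreating them or re-running the enable step with --refresh, e.g. `out/minikube-linux-amd64 -p addons-881427 addons enable gcp-auth --refresh` (command form inferred from the message above, not captured in this run).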
	
	
	==> CRI-O <==
	Apr 01 18:09:14 addons-881427 crio[687]: time="2024-04-01 18:09:14.998046528Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:eeb0e628c9b4eedcf8accc6a9b5af3d9e05e7bdd5d5528979002104d2bd4981e,Metadata:&PodSandboxMetadata{Name:nvidia-device-plugin-daemonset-m86dq,Uid:dd4046ef-ce6a-48e2-9d0e-bf3aa98f9156,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1711994852239412837,Network:&PodSandboxNetworkStatus{Ip:10.244.0.4,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},},},Labels:map[string]string{controller-revision-hash: 788f69cddc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-m86dq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd4046ef-ce6a-48e2-9d0e-bf3aa98f9156,name: nvidia-device-plugin-ds,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen:
2024-04-01T18:07:31.902870282Z,kubernetes.io/config.source: api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=b41e61d8-6063-44d6-ac52-213bf72d5b13 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Apr 01 18:09:14 addons-881427 crio[687]: time="2024-04-01 18:09:14.998466889Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: dd4046ef-ce6a-48e2-9d0e-bf3aa98f9156,},},}" file="otel-collector/interceptors.go:62" id=1b2524ae-fc0e-4e5c-809c-7478d7be2347 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:09:14 addons-881427 crio[687]: time="2024-04-01 18:09:14.998555478Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b2524ae-fc0e-4e5c-809c-7478d7be2347 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:09:14 addons-881427 crio[687]: time="2024-04-01 18:09:14.998642396Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1b2524ae-fc0e-4e5c-809c-7478d7be2347 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:09:15 addons-881427 crio[687]: time="2024-04-01 18:09:15.027638273Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1bc7a0c7-5fde-418e-bbb8-24a41fa39cc2 name=/runtime.v1.RuntimeService/Version
	Apr 01 18:09:15 addons-881427 crio[687]: time="2024-04-01 18:09:15.027791858Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1bc7a0c7-5fde-418e-bbb8-24a41fa39cc2 name=/runtime.v1.RuntimeService/Version
	Apr 01 18:09:15 addons-881427 crio[687]: time="2024-04-01 18:09:15.029281051Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c42ab208-f4ae-46fa-92a3-7c3df7dd658d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:09:15 addons-881427 crio[687]: time="2024-04-01 18:09:15.031039892Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711994955030939106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:512318,},InodesUsed:&UInt64Value{Value:189,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c42ab208-f4ae-46fa-92a3-7c3df7dd658d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:09:15 addons-881427 crio[687]: time="2024-04-01 18:09:15.032016225Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2d46c01-6693-4934-921b-d31ff78156eb name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:09:15 addons-881427 crio[687]: time="2024-04-01 18:09:15.032078343Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2d46c01-6693-4934-921b-d31ff78156eb name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:09:15 addons-881427 crio[687]: time="2024-04-01 18:09:15.032877567Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00d1a77ee0d728d62f413bb4a68324ab7ed4937c2d1944d7187b6634ebcfd936,PodSandboxId:fe11c103ac4320270b586ca42d1c1a3edbdfcfce3e94098335c737114b03ec8e,Metadata:&ContainerMetadata{Name:task-pv-container,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e,State:CONTAINER_RUNNING,CreatedAt:1711994953168926173,Labels:map[string]string{io.kubernetes.container.name: task-pv-container,io.kubernetes.pod.name: task-pv-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc4381a4-b5ce-4478-a676-6d43d9ae14a3,},Annotations:map[string]string{io.kubernetes.container.hash: c89e62ef,io.kubernetes.container.ports: [{\"name\":\"htt
p-server\",\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d38dc5d6cb3acef7bbfee136b8f74f854b43c45e20cd62c9b57af1dbe009d1,PodSandboxId:323d59b97b0e5c958879ed0f76feff2d8662a208370edda331c4848a9c5bd72b,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1711994950282622710,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-de16cdd6-519d-46fd-98d1-b0afa2a23e43,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 94226317-da40-4927-9343-b8066304e10d,}
,Annotations:map[string]string{io.kubernetes.container.hash: ee6f7986,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873512ffa33ab552913f22374a7dfccb5cfc9ee48083496284dacb728bcb092f,PodSandboxId:ba301c3366c5baf01ce161906de957644b608dd58766ed956a3e5c85deb8f575,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:4be429a5fbb2e71ae7958bfa558bc637cf3a61baf40a708cb8fff532b39e52d0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ba5dc23f65d4cc4a4535bce55cf9e63b068eb02946e3422d3587e8ce803b6aab,State:CONTAINER_EXITED,CreatedAt:1711994947329536204,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 80642733-4707-42c6-8be7-d7f2bb1dc265,},Annotations
:map[string]string{io.kubernetes.container.hash: e769b8a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aac709d5a7980a1b61ba3594f3d9cf1b40ec9ba3705a5154c673d14fa4edbca,PodSandboxId:94438aba1762a769266866396044bb5b710d95b619f349c96d0b1b2ae557d7a9,Metadata:&ContainerMetadata{Name:helm-test,Attempt:0,},Image:&ImageSpec{Image:docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:98f6c3b32d565299b035cc773a15cee165942450c44e11cdcaaf370d2c26dc31,State:CONTAINER_EXITED,CreatedAt:1711994941256914775,Labels:map[string]string{io.kubernetes.container.name: helm-test,io.kubernetes.pod.name: helm-test,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1ee7c8c-1104-48a9-a831-574ebadcb997,},Annotations:map[string]st
ring{io.kubernetes.container.hash: e27156a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:077ff767fcc7cab7483daece08c8c5368dc8420ea8295a74c8c45d55cd2aedb2,PodSandboxId:6c2154c9815141b331714e6332631c32f7e4c48d41de60114883d02fbd9765db,Metadata:&ContainerMetadata{Name:gadget,Attempt:2,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:81f48f8d24e42642898d5669b6926805425c3577187c161e14dcdd4f857e1f8e,State:CONTAINER_EXITED,CreatedAt:1711994932919108187,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-x552z,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 19685406-f298-40bb-8bc8-1d4a0f011b1e,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 23131875,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0125f0c6d4aacc178dbf9901b3023e0afc815152e540ad3c84a6083e43f9abca,PodSandboxId:33fad9b23a3f122c379da83a962230efb2751641f8e4c73312e7c3281d027f32,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1711994931706237294,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-7d69788767-bhk6q,io.kubernetes.pod
.namespace: gcp-auth,io.kubernetes.pod.uid: 2259bc97-3726-4970-8f34-e0b2e0465e3e,},Annotations:map[string]string{io.kubernetes.container.hash: debf27c3,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8ae3a6404aa433dcedeab0122887f32d6cea3ea80348a8b0f59d3cb90fea2b9,PodSandboxId:d5902c0d68f4405ec7fde3cb4865219afed91ba77cccb76b4ae95d3fcdd432ea,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ffcc66479b5baa5a65f94b8b7c73c6ee5ed989ec0b7f8f9371999f335ce4f44c,State:CONTAINER_RUNNING,CreatedAt:1711994929873118697,Labels:map[string]string{io.kuber
netes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-65496f9567-2rjjb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dab0ec67-96a1-49ff-9bf6-69aed7931052,},Annotations:map[string]string{io.kubernetes.container.hash: f7054f03,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a03d285d05a66f762678ceecfc0308f89eb66ce409aeefb74e4c3e12c0c926b3,PodSandboxId:f1e563ae45f7b9c814dc7d2dc96c9fadfdf62b04b832f9e077aaa237f4ac94bb,Metadata:&ContainerMetadata{Name:csi-
snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1711994922846847714,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-fs5mb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f9b358f-3334-45d6-bf37-8b9d4a5cdf22,},Annotations:map[string]string{io.kubernetes.container.hash: 41bb0f57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ef224683e545b78726585edbafdf9c3b80c7b83d088562b846340623b8253e,PodSandboxId:f1e563ae45f7b9c814dc7d2dc96c9fadfdf62b04b832f9e077aaa237f4ac94bb,M
etadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1711994921222153224,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-fs5mb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f9b358f-3334-45d6-bf37-8b9d4a5cdf22,},Annotations:map[string]string{io.kubernetes.container.hash: 9599748f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3faa5342d6428de38360e16637acb063c8c3cf1a92e0a8e622caba6c201b0179,PodSandboxId:f1e563ae45f7b9c814dc7d2dc96c9f
adfdf62b04b832f9e077aaa237f4ac94bb,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1711994918973440283,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-fs5mb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f9b358f-3334-45d6-bf37-8b9d4a5cdf22,},Annotations:map[string]string{io.kubernetes.container.hash: d58e7b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e227984749bb93fff0ff88eae92585185cc29994e2b3930bedaa886815f40a,PodSandboxId
:f1e563ae45f7b9c814dc7d2dc96c9fadfdf62b04b832f9e077aaa237f4ac94bb,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1711994917898946532,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-fs5mb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f9b358f-3334-45d6-bf37-8b9d4a5cdf22,},Annotations:map[string]string{io.kubernetes.container.hash: c6d91cb0,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.t
erminationGracePeriod: 30,},},&Container{Id:c934418b858002c23c1d84d6355410a61d6aaaa4dd3ac0d875fbfe8001a6d6df,PodSandboxId:f1e563ae45f7b9c814dc7d2dc96c9fadfdf62b04b832f9e077aaa237f4ac94bb,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1711994916091263120,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-fs5mb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f9b358f-3334-45d6-bf37-8b9d4a5cdf22,},Annotations:map[string]string{io.kubernetes.container.hash: 443bf787,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a9e53b6e259a95483b00d43f09340fc6cc7d02d1879784dead75b0cbea39c9c,PodSandboxId:d0447d80d95216b8d3a2be7a4fc89759ae380795ff8af281b591fd91bc174862,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1711994915447158393,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: gcp-auth-certs-patch-nfbsp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 4f82e7e6-da53-4b94-8d54-77b063d994e6,},Annotations:map[string]string{io.kubernetes.container.hash: 66d4f610,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705f1e37a6d00cd730235755a8d2b4bc0cd2a60f0bd04078ed7d1ba99f26d934,PodSandboxId:6b8a5331670695717c5d965895bb81d4fc08c4711c601185d152ed692d0ebb12,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1711994914130294755,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: gcp-auth-certs-create-wjwsh,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: f534b1b5-0dac-4293-81d9-0db5251a6e2e,},Annotations:map[string]string{io.kubernetes.container.hash: 1cadb88c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bea75c05375537df03ae94253a91625e1a3eec1ec21435d3984304a5be28d7f,PodSandboxId:31ea88edcf3332649acd59d7425e8517caf3cc0ad6f57cba6e017aae051d734c,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1711994913889245772,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64f7572-e225-467c-ab07-def542d15d28,},Annotations:map[string]string{io.kubernetes.container.hash: fa195f7f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e0ad8452cc41133914d205170c711cc56fc253bec7f1c813c73f6391ff36c0,PodSandboxId:f1e563ae45f7b9c814dc7d2dc96c9fadfdf62b04b832f9e077aaa237f4ac94bb,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1711994912284199530,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-fs5mb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f9b358f-3334-45d6-bf37-8b9d4a5cdf22,},Annotations:map[string]string{io.kubernetes.container.hash: aab1007e,io.kube
rnetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c96d7a284f1658126a51b795f141b9c9addbff857ee7c9e4cb68285d78abfef,PodSandboxId:672bb4852a1b99921af72a16b8ca9aff195451d4f635ffece9c58095e8ae68e6,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1711994910087429820,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b630782c-3751-4074-92ca-f544f91651c3,},Annotations:map[string]string{io.kubernetes.container.
hash: 5b2f0cd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:098753116b336981b4c319ce089d6002b6770f52522172ddce33407332186c6a,PodSandboxId:0fde13a63c3c832b04d8a7c08b07f4df712a102feca2acd09788b583d5ad2948,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1711994908630186614,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wf88x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f9e23dd5-de9a-4127-98c8-7095ea4a801f,},Annotations:map[string]string{io.kubernetes.container.hash: 68a3b0c1,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9b111e671c93c26ca5599786b77e68584d0332fd237a33007f027e1aa910ce,PodSandboxId:23b05486602dd4faff956afd2dc7aec978561f2cf08c38b42ab85c3aed6582d2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1711994908506936788,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-82sh9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f780133b-a5c0-4eab-8f19-bd1181b15957,},Annotations:map[string]string{io.kubern
etes.container.hash: 31ab90cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50e175f9186328af059065586730b2ec090e49b21a21cb0fcdb196d380307b88,PodSandboxId:e5c0e7ac8b7fd18223cecf3c1239a3fc6536fc32efc5fd104624d6b6555eb6a8,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1711994906061597015,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-gpmcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b7
1b6f-9ddf-43ca-9893-1895d0c71024,},Annotations:map[string]string{io.kubernetes.container.hash: 84f0394a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c960c36158e755ee83fa70ef03cc977ffa9f22ddb7dd14d7ec07d13c652a631c,PodSandboxId:6fdadeba48bc383f87f8f4d5cb87d1264cfdca6c9c2aa50c4b579b8234f95df6,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1711994905921665552,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-
rtgfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561da000-21ec-4e67-a8df-8aa9357a125f,},Annotations:map[string]string{io.kubernetes.container.hash: e7971b2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c738a051f99c97d24e83233b7777445dfab452e800c256a6652be42d3476feca,PodSandboxId:84c4fd4e30f718a6e6d401a1ea8f368c28e48c70076adae7c696f2e63357f8da,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1711994904129082239,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-
n4pp4,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 85d661ab-6d0c-4c5d-80d7-5e87e8e096b0,},Annotations:map[string]string{io.kubernetes.container.hash: b71814d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61d5d77980e334cf94c083f97f1da43d08ada0ccac59d168a909e40a40f81441,PodSandboxId:692cd322213ca4c984b78386a2827ddd84db52088a37d98c674f9d33f132110c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,Crea
tedAt:1711994883598157417,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-vj7b5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: a64cbbac-5366-4a37-ae93-33b625f92465,},Annotations:map[string]string{io.kubernetes.container.hash: d9b7c7ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d02f777dc4fb5d30481463a72a7f1514d457c51769db6f34c335fdc610985307,PodSandboxId:5af7e12bb8286138ae2697e329336eb97c6a67530e5f0a42b7a6d7e73847d235,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:538fb31f832e76c93f10035cb609c56fc5cd18b3cd85a3ba50699572c3c5dc50,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1a9bd6f561b5c8cb73e4847b4f
8044ef2d44a79008ee4cc46d71a87bbbebce32,State:CONTAINER_RUNNING,CreatedAt:1711994879975825975,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5446596998-pvd79,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5144be1b-5f2f-4db7-8c66-bb679aa31a3f,},Annotations:map[string]string{io.kubernetes.container.hash: fb58b090,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:754bc383c8378e0117710a75def84081e0a5360ffebec2354063cd37e4fe1f8e,PodSandboxId:1f074153fdff81588d9c61d749dbf860c0d27fdaf4dedbb54422207c27799154,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:g
cr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_RUNNING,CreatedAt:1711994875273684874,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f402a8d-9920-4ab9-b8f5-b24ff9528a04,},Annotations:map[string]string{io.kubernetes.container.hash: 15e79633,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d879594bec103909d539395a08207a09bcebce1a01b59adb744f55f6fc38269c,PodSandboxId:5802baa7237fc28883
b3905cb7db5e7e518fc3198235a21327ba38e3a7d10928,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711994856582826816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d770fd8-541f-4ea5-bbff-8bdba366a91b,},Annotations:map[string]string{io.kubernetes.container.hash: 314cca10,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7592332265d29603d706dd8ae6215012ab699095b4cb81b5a287cb3566a87f87,PodSandboxId:ecf893481468496183e00c27b0928a
ff583b346c96cef194ccdda81157cbec21,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711994851527592538,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7fhsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e044680-92e0-46d9-aa37-6e95b606d9c6,},Annotations:map[string]string{io.kubernetes.container.hash: befc28bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4364158240fbf7e504278f6465f4ca09aafa1f1add53cc175f8dfe119fce1326,PodSandboxId:8558ddf14bd58e21c04f7531200a71a09b899326b0d4218e33fd11d86c736cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711994849753251261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fz2ml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6263627a-2781-45c7-b2a4-b06ab6c04879,},Annotations:map[string]string{io.kubernetes.container.hash: 98de96da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubern
etes.pod.terminationGracePeriod: 30,},},&Container{Id:ac45958565c5aab2fa2b8390aeaf778faac10f25c756cb29e10b4afbcd107bd5,PodSandboxId:eb20eea5d33ff49ae7e9b03022f9891cf96444063f21b85bdf9b424fe286dc03,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711994830505372202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e438293d0084e7b4bf6faae6a01bf5d8,},Annotations:map[string]string{io.kubernetes.container.hash: 36f5a6fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:c2d54581a0ef573e289f02ca7ad4f3eeb8b3f9014afdc78a9569a0c254bcfb09,PodSandboxId:10409e55554a74bc18e3debddb4d100156f8c514ce7ad47ce71cb8cbe26b42cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711994830453287289,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fee49c26fd8f5049e6dcf4449cacb5b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,}
,},&Container{Id:03fb53a7e5f85c59443d18637bfcbf0ffa22527f75cc75a7845f585f87ee236d,PodSandboxId:4739cf05a2d89e506b62a362966c43cde11675026153960c48f79f290a804a94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711994830448525577,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a2c1dd6e026812c08404e38be364fa4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
bd6fdf952e5501e85339e87407a72c5550cda3b57b1bdca9f53b58f499f8b941,PodSandboxId:f34902d274111dc89384043635a3c135d86fe98a2df83385c9a9c456769aaff6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711994830419294201,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37fe97e449b1812962375c600235bf53,},Annotations:map[string]string{io.kubernetes.container.hash: e4d7eaf4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inter
ceptors.go:74" id=d2d46c01-6693-4934-921b-d31ff78156eb name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:09:15 addons-881427 crio[687]: time="2024-04-01 18:09:15.083701943Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=177d13b7-2c15-49aa-8f70-7598166b68cf name=/runtime.v1.RuntimeService/Version
	Apr 01 18:09:15 addons-881427 crio[687]: time="2024-04-01 18:09:15.083858413Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=177d13b7-2c15-49aa-8f70-7598166b68cf name=/runtime.v1.RuntimeService/Version
	Apr 01 18:09:15 addons-881427 crio[687]: time="2024-04-01 18:09:15.085011928Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fee682c6-9229-47e4-9104-2ef771ab52b1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:09:15 addons-881427 crio[687]: time="2024-04-01 18:09:15.086102734Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711994955086078803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:512318,},InodesUsed:&UInt64Value{Value:189,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fee682c6-9229-47e4-9104-2ef771ab52b1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:09:15 addons-881427 crio[687]: time="2024-04-01 18:09:15.086695379Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95f83820-e8ec-49d8-b258-204c705fbd69 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:09:15 addons-881427 crio[687]: time="2024-04-01 18:09:15.086968195Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95f83820-e8ec-49d8-b258-204c705fbd69 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:09:15 addons-881427 crio[687]: time="2024-04-01 18:09:15.087618595Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00d1a77ee0d728d62f413bb4a68324ab7ed4937c2d1944d7187b6634ebcfd936,PodSandboxId:fe11c103ac4320270b586ca42d1c1a3edbdfcfce3e94098335c737114b03ec8e,Metadata:&ContainerMetadata{Name:task-pv-container,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e,State:CONTAINER_RUNNING,CreatedAt:1711994953168926173,Labels:map[string]string{io.kubernetes.container.name: task-pv-container,io.kubernetes.pod.name: task-pv-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc4381a4-b5ce-4478-a676-6d43d9ae14a3,},Annotations:map[string]string{io.kubernetes.container.hash: c89e62ef,io.kubernetes.container.ports: [{\"name\":\"htt
p-server\",\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d38dc5d6cb3acef7bbfee136b8f74f854b43c45e20cd62c9b57af1dbe009d1,PodSandboxId:323d59b97b0e5c958879ed0f76feff2d8662a208370edda331c4848a9c5bd72b,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1711994950282622710,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-de16cdd6-519d-46fd-98d1-b0afa2a23e43,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 94226317-da40-4927-9343-b8066304e10d,}
,Annotations:map[string]string{io.kubernetes.container.hash: ee6f7986,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873512ffa33ab552913f22374a7dfccb5cfc9ee48083496284dacb728bcb092f,PodSandboxId:ba301c3366c5baf01ce161906de957644b608dd58766ed956a3e5c85deb8f575,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:4be429a5fbb2e71ae7958bfa558bc637cf3a61baf40a708cb8fff532b39e52d0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ba5dc23f65d4cc4a4535bce55cf9e63b068eb02946e3422d3587e8ce803b6aab,State:CONTAINER_EXITED,CreatedAt:1711994947329536204,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 80642733-4707-42c6-8be7-d7f2bb1dc265,},Annotations
:map[string]string{io.kubernetes.container.hash: e769b8a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aac709d5a7980a1b61ba3594f3d9cf1b40ec9ba3705a5154c673d14fa4edbca,PodSandboxId:94438aba1762a769266866396044bb5b710d95b619f349c96d0b1b2ae557d7a9,Metadata:&ContainerMetadata{Name:helm-test,Attempt:0,},Image:&ImageSpec{Image:docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:98f6c3b32d565299b035cc773a15cee165942450c44e11cdcaaf370d2c26dc31,State:CONTAINER_EXITED,CreatedAt:1711994941256914775,Labels:map[string]string{io.kubernetes.container.name: helm-test,io.kubernetes.pod.name: helm-test,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1ee7c8c-1104-48a9-a831-574ebadcb997,},Annotations:map[string]st
ring{io.kubernetes.container.hash: e27156a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:077ff767fcc7cab7483daece08c8c5368dc8420ea8295a74c8c45d55cd2aedb2,PodSandboxId:6c2154c9815141b331714e6332631c32f7e4c48d41de60114883d02fbd9765db,Metadata:&ContainerMetadata{Name:gadget,Attempt:2,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:81f48f8d24e42642898d5669b6926805425c3577187c161e14dcdd4f857e1f8e,State:CONTAINER_EXITED,CreatedAt:1711994932919108187,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-x552z,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 19685406-f298-40bb-8bc8-1d4a0f011b1e,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 23131875,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0125f0c6d4aacc178dbf9901b3023e0afc815152e540ad3c84a6083e43f9abca,PodSandboxId:33fad9b23a3f122c379da83a962230efb2751641f8e4c73312e7c3281d027f32,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1711994931706237294,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-7d69788767-bhk6q,io.kubernetes.pod
.namespace: gcp-auth,io.kubernetes.pod.uid: 2259bc97-3726-4970-8f34-e0b2e0465e3e,},Annotations:map[string]string{io.kubernetes.container.hash: debf27c3,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8ae3a6404aa433dcedeab0122887f32d6cea3ea80348a8b0f59d3cb90fea2b9,PodSandboxId:d5902c0d68f4405ec7fde3cb4865219afed91ba77cccb76b4ae95d3fcdd432ea,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ffcc66479b5baa5a65f94b8b7c73c6ee5ed989ec0b7f8f9371999f335ce4f44c,State:CONTAINER_RUNNING,CreatedAt:1711994929873118697,Labels:map[string]string{io.kuber
netes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-65496f9567-2rjjb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dab0ec67-96a1-49ff-9bf6-69aed7931052,},Annotations:map[string]string{io.kubernetes.container.hash: f7054f03,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a03d285d05a66f762678ceecfc0308f89eb66ce409aeefb74e4c3e12c0c926b3,PodSandboxId:f1e563ae45f7b9c814dc7d2dc96c9fadfdf62b04b832f9e077aaa237f4ac94bb,Metadata:&ContainerMetadata{Name:csi-
snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1711994922846847714,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-fs5mb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f9b358f-3334-45d6-bf37-8b9d4a5cdf22,},Annotations:map[string]string{io.kubernetes.container.hash: 41bb0f57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ef224683e545b78726585edbafdf9c3b80c7b83d088562b846340623b8253e,PodSandboxId:f1e563ae45f7b9c814dc7d2dc96c9fadfdf62b04b832f9e077aaa237f4ac94bb,M
etadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1711994921222153224,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-fs5mb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f9b358f-3334-45d6-bf37-8b9d4a5cdf22,},Annotations:map[string]string{io.kubernetes.container.hash: 9599748f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3faa5342d6428de38360e16637acb063c8c3cf1a92e0a8e622caba6c201b0179,PodSandboxId:f1e563ae45f7b9c814dc7d2dc96c9f
adfdf62b04b832f9e077aaa237f4ac94bb,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1711994918973440283,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-fs5mb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f9b358f-3334-45d6-bf37-8b9d4a5cdf22,},Annotations:map[string]string{io.kubernetes.container.hash: d58e7b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e227984749bb93fff0ff88eae92585185cc29994e2b3930bedaa886815f40a,PodSandboxId
:f1e563ae45f7b9c814dc7d2dc96c9fadfdf62b04b832f9e077aaa237f4ac94bb,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1711994917898946532,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-fs5mb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f9b358f-3334-45d6-bf37-8b9d4a5cdf22,},Annotations:map[string]string{io.kubernetes.container.hash: c6d91cb0,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.t
erminationGracePeriod: 30,},},&Container{Id:c934418b858002c23c1d84d6355410a61d6aaaa4dd3ac0d875fbfe8001a6d6df,PodSandboxId:f1e563ae45f7b9c814dc7d2dc96c9fadfdf62b04b832f9e077aaa237f4ac94bb,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1711994916091263120,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-fs5mb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f9b358f-3334-45d6-bf37-8b9d4a5cdf22,},Annotations:map[string]string{io.kubernetes.container.hash: 443bf787,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a9e53b6e259a95483b00d43f09340fc6cc7d02d1879784dead75b0cbea39c9c,PodSandboxId:d0447d80d95216b8d3a2be7a4fc89759ae380795ff8af281b591fd91bc174862,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1711994915447158393,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: gcp-auth-certs-patch-nfbsp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 4f82e7e6-da53-4b94-8d54-77b063d994e6,},Annotations:map[string]string{io.kubernetes.container.hash: 66d4f610,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705f1e37a6d00cd730235755a8d2b4bc0cd2a60f0bd04078ed7d1ba99f26d934,PodSandboxId:6b8a5331670695717c5d965895bb81d4fc08c4711c601185d152ed692d0ebb12,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1711994914130294755,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: gcp-auth-certs-create-wjwsh,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: f534b1b5-0dac-4293-81d9-0db5251a6e2e,},Annotations:map[string]string{io.kubernetes.container.hash: 1cadb88c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bea75c05375537df03ae94253a91625e1a3eec1ec21435d3984304a5be28d7f,PodSandboxId:31ea88edcf3332649acd59d7425e8517caf3cc0ad6f57cba6e017aae051d734c,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1711994913889245772,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64f7572-e225-467c-ab07-def542d15d28,},Annotations:map[string]string{io.kubernetes.container.hash: fa195f7f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e0ad8452cc41133914d205170c711cc56fc253bec7f1c813c73f6391ff36c0,PodSandboxId:f1e563ae45f7b9c814dc7d2dc96c9fadfdf62b04b832f9e077aaa237f4ac94bb,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1711994912284199530,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-fs5mb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f9b358f-3334-45d6-bf37-8b9d4a5cdf22,},Annotations:map[string]string{io.kubernetes.container.hash: aab1007e,io.kube
rnetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c96d7a284f1658126a51b795f141b9c9addbff857ee7c9e4cb68285d78abfef,PodSandboxId:672bb4852a1b99921af72a16b8ca9aff195451d4f635ffece9c58095e8ae68e6,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1711994910087429820,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b630782c-3751-4074-92ca-f544f91651c3,},Annotations:map[string]string{io.kubernetes.container.
hash: 5b2f0cd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:098753116b336981b4c319ce089d6002b6770f52522172ddce33407332186c6a,PodSandboxId:0fde13a63c3c832b04d8a7c08b07f4df712a102feca2acd09788b583d5ad2948,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1711994908630186614,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wf88x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f9e23dd5-de9a-4127-98c8-7095ea4a801f,},Annotations:map[string]string{io.kubernetes.container.hash: 68a3b0c1,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9b111e671c93c26ca5599786b77e68584d0332fd237a33007f027e1aa910ce,PodSandboxId:23b05486602dd4faff956afd2dc7aec978561f2cf08c38b42ab85c3aed6582d2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1711994908506936788,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-82sh9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f780133b-a5c0-4eab-8f19-bd1181b15957,},Annotations:map[string]string{io.kubern
etes.container.hash: 31ab90cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50e175f9186328af059065586730b2ec090e49b21a21cb0fcdb196d380307b88,PodSandboxId:e5c0e7ac8b7fd18223cecf3c1239a3fc6536fc32efc5fd104624d6b6555eb6a8,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1711994906061597015,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-gpmcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b7
1b6f-9ddf-43ca-9893-1895d0c71024,},Annotations:map[string]string{io.kubernetes.container.hash: 84f0394a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c960c36158e755ee83fa70ef03cc977ffa9f22ddb7dd14d7ec07d13c652a631c,PodSandboxId:6fdadeba48bc383f87f8f4d5cb87d1264cfdca6c9c2aa50c4b579b8234f95df6,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1711994905921665552,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-
rtgfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561da000-21ec-4e67-a8df-8aa9357a125f,},Annotations:map[string]string{io.kubernetes.container.hash: e7971b2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c738a051f99c97d24e83233b7777445dfab452e800c256a6652be42d3476feca,PodSandboxId:84c4fd4e30f718a6e6d401a1ea8f368c28e48c70076adae7c696f2e63357f8da,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1711994904129082239,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-
n4pp4,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 85d661ab-6d0c-4c5d-80d7-5e87e8e096b0,},Annotations:map[string]string{io.kubernetes.container.hash: b71814d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61d5d77980e334cf94c083f97f1da43d08ada0ccac59d168a909e40a40f81441,PodSandboxId:692cd322213ca4c984b78386a2827ddd84db52088a37d98c674f9d33f132110c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,Crea
tedAt:1711994883598157417,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-vj7b5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: a64cbbac-5366-4a37-ae93-33b625f92465,},Annotations:map[string]string{io.kubernetes.container.hash: d9b7c7ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d02f777dc4fb5d30481463a72a7f1514d457c51769db6f34c335fdc610985307,PodSandboxId:5af7e12bb8286138ae2697e329336eb97c6a67530e5f0a42b7a6d7e73847d235,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:538fb31f832e76c93f10035cb609c56fc5cd18b3cd85a3ba50699572c3c5dc50,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1a9bd6f561b5c8cb73e4847b4f
8044ef2d44a79008ee4cc46d71a87bbbebce32,State:CONTAINER_RUNNING,CreatedAt:1711994879975825975,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5446596998-pvd79,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5144be1b-5f2f-4db7-8c66-bb679aa31a3f,},Annotations:map[string]string{io.kubernetes.container.hash: fb58b090,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:754bc383c8378e0117710a75def84081e0a5360ffebec2354063cd37e4fe1f8e,PodSandboxId:1f074153fdff81588d9c61d749dbf860c0d27fdaf4dedbb54422207c27799154,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:g
cr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_RUNNING,CreatedAt:1711994875273684874,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f402a8d-9920-4ab9-b8f5-b24ff9528a04,},Annotations:map[string]string{io.kubernetes.container.hash: 15e79633,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d879594bec103909d539395a08207a09bcebce1a01b59adb744f55f6fc38269c,PodSandboxId:5802baa7237fc28883
b3905cb7db5e7e518fc3198235a21327ba38e3a7d10928,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711994856582826816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d770fd8-541f-4ea5-bbff-8bdba366a91b,},Annotations:map[string]string{io.kubernetes.container.hash: 314cca10,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7592332265d29603d706dd8ae6215012ab699095b4cb81b5a287cb3566a87f87,PodSandboxId:ecf893481468496183e00c27b0928a
ff583b346c96cef194ccdda81157cbec21,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711994851527592538,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7fhsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e044680-92e0-46d9-aa37-6e95b606d9c6,},Annotations:map[string]string{io.kubernetes.container.hash: befc28bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4364158240fbf7e504278f6465f4ca09aafa1f1add53cc175f8dfe119fce1326,PodSandboxId:8558ddf14bd58e21c04f7531200a71a09b899326b0d4218e33fd11d86c736cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711994849753251261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fz2ml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6263627a-2781-45c7-b2a4-b06ab6c04879,},Annotations:map[string]string{io.kubernetes.container.hash: 98de96da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubern
etes.pod.terminationGracePeriod: 30,},},&Container{Id:ac45958565c5aab2fa2b8390aeaf778faac10f25c756cb29e10b4afbcd107bd5,PodSandboxId:eb20eea5d33ff49ae7e9b03022f9891cf96444063f21b85bdf9b424fe286dc03,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711994830505372202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e438293d0084e7b4bf6faae6a01bf5d8,},Annotations:map[string]string{io.kubernetes.container.hash: 36f5a6fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:c2d54581a0ef573e289f02ca7ad4f3eeb8b3f9014afdc78a9569a0c254bcfb09,PodSandboxId:10409e55554a74bc18e3debddb4d100156f8c514ce7ad47ce71cb8cbe26b42cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711994830453287289,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fee49c26fd8f5049e6dcf4449cacb5b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,}
,},&Container{Id:03fb53a7e5f85c59443d18637bfcbf0ffa22527f75cc75a7845f585f87ee236d,PodSandboxId:4739cf05a2d89e506b62a362966c43cde11675026153960c48f79f290a804a94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711994830448525577,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a2c1dd6e026812c08404e38be364fa4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
bd6fdf952e5501e85339e87407a72c5550cda3b57b1bdca9f53b58f499f8b941,PodSandboxId:f34902d274111dc89384043635a3c135d86fe98a2df83385c9a9c456769aaff6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711994830419294201,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37fe97e449b1812962375c600235bf53,},Annotations:map[string]string{io.kubernetes.container.hash: e4d7eaf4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inter
ceptors.go:74" id=95f83820-e8ec-49d8-b258-204c705fbd69 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:09:15 addons-881427 crio[687]: time="2024-04-01 18:09:15.134925821Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27062614-0725-49c8-8b7e-cec1212cf874 name=/runtime.v1.RuntimeService/Version
	Apr 01 18:09:15 addons-881427 crio[687]: time="2024-04-01 18:09:15.134997364Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27062614-0725-49c8-8b7e-cec1212cf874 name=/runtime.v1.RuntimeService/Version
	Apr 01 18:09:15 addons-881427 crio[687]: time="2024-04-01 18:09:15.136925406Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cb3dedea-5049-4864-a1ca-d858971e8d7b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:09:15 addons-881427 crio[687]: time="2024-04-01 18:09:15.138125578Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711994955138097773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:512318,},InodesUsed:&UInt64Value{Value:189,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cb3dedea-5049-4864-a1ca-d858971e8d7b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:09:15 addons-881427 crio[687]: time="2024-04-01 18:09:15.139277788Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ee134dc-b271-4c7d-ab47-4eb31cd3c5f0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:09:15 addons-881427 crio[687]: time="2024-04-01 18:09:15.139363284Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ee134dc-b271-4c7d-ab47-4eb31cd3c5f0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:09:15 addons-881427 crio[687]: time="2024-04-01 18:09:15.140146349Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00d1a77ee0d728d62f413bb4a68324ab7ed4937c2d1944d7187b6634ebcfd936,PodSandboxId:fe11c103ac4320270b586ca42d1c1a3edbdfcfce3e94098335c737114b03ec8e,Metadata:&ContainerMetadata{Name:task-pv-container,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e,State:CONTAINER_RUNNING,CreatedAt:1711994953168926173,Labels:map[string]string{io.kubernetes.container.name: task-pv-container,io.kubernetes.pod.name: task-pv-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc4381a4-b5ce-4478-a676-6d43d9ae14a3,},Annotations:map[string]string{io.kubernetes.container.hash: c89e62ef,io.kubernetes.container.ports: [{\"name\":\"htt
p-server\",\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d38dc5d6cb3acef7bbfee136b8f74f854b43c45e20cd62c9b57af1dbe009d1,PodSandboxId:323d59b97b0e5c958879ed0f76feff2d8662a208370edda331c4848a9c5bd72b,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1711994950282622710,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-de16cdd6-519d-46fd-98d1-b0afa2a23e43,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 94226317-da40-4927-9343-b8066304e10d,}
,Annotations:map[string]string{io.kubernetes.container.hash: ee6f7986,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:873512ffa33ab552913f22374a7dfccb5cfc9ee48083496284dacb728bcb092f,PodSandboxId:ba301c3366c5baf01ce161906de957644b608dd58766ed956a3e5c85deb8f575,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:4be429a5fbb2e71ae7958bfa558bc637cf3a61baf40a708cb8fff532b39e52d0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ba5dc23f65d4cc4a4535bce55cf9e63b068eb02946e3422d3587e8ce803b6aab,State:CONTAINER_EXITED,CreatedAt:1711994947329536204,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 80642733-4707-42c6-8be7-d7f2bb1dc265,},Annotations
:map[string]string{io.kubernetes.container.hash: e769b8a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aac709d5a7980a1b61ba3594f3d9cf1b40ec9ba3705a5154c673d14fa4edbca,PodSandboxId:94438aba1762a769266866396044bb5b710d95b619f349c96d0b1b2ae557d7a9,Metadata:&ContainerMetadata{Name:helm-test,Attempt:0,},Image:&ImageSpec{Image:docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:98f6c3b32d565299b035cc773a15cee165942450c44e11cdcaaf370d2c26dc31,State:CONTAINER_EXITED,CreatedAt:1711994941256914775,Labels:map[string]string{io.kubernetes.container.name: helm-test,io.kubernetes.pod.name: helm-test,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1ee7c8c-1104-48a9-a831-574ebadcb997,},Annotations:map[string]st
ring{io.kubernetes.container.hash: e27156a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:077ff767fcc7cab7483daece08c8c5368dc8420ea8295a74c8c45d55cd2aedb2,PodSandboxId:6c2154c9815141b331714e6332631c32f7e4c48d41de60114883d02fbd9765db,Metadata:&ContainerMetadata{Name:gadget,Attempt:2,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:81f48f8d24e42642898d5669b6926805425c3577187c161e14dcdd4f857e1f8e,State:CONTAINER_EXITED,CreatedAt:1711994932919108187,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-x552z,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 19685406-f298-40bb-8bc8-1d4a0f011b1e,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 23131875,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0125f0c6d4aacc178dbf9901b3023e0afc815152e540ad3c84a6083e43f9abca,PodSandboxId:33fad9b23a3f122c379da83a962230efb2751641f8e4c73312e7c3281d027f32,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1711994931706237294,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-7d69788767-bhk6q,io.kubernetes.pod
.namespace: gcp-auth,io.kubernetes.pod.uid: 2259bc97-3726-4970-8f34-e0b2e0465e3e,},Annotations:map[string]string{io.kubernetes.container.hash: debf27c3,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8ae3a6404aa433dcedeab0122887f32d6cea3ea80348a8b0f59d3cb90fea2b9,PodSandboxId:d5902c0d68f4405ec7fde3cb4865219afed91ba77cccb76b4ae95d3fcdd432ea,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ffcc66479b5baa5a65f94b8b7c73c6ee5ed989ec0b7f8f9371999f335ce4f44c,State:CONTAINER_RUNNING,CreatedAt:1711994929873118697,Labels:map[string]string{io.kuber
netes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-65496f9567-2rjjb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: dab0ec67-96a1-49ff-9bf6-69aed7931052,},Annotations:map[string]string{io.kubernetes.container.hash: f7054f03,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a03d285d05a66f762678ceecfc0308f89eb66ce409aeefb74e4c3e12c0c926b3,PodSandboxId:f1e563ae45f7b9c814dc7d2dc96c9fadfdf62b04b832f9e077aaa237f4ac94bb,Metadata:&ContainerMetadata{Name:csi-
snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1711994922846847714,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-fs5mb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f9b358f-3334-45d6-bf37-8b9d4a5cdf22,},Annotations:map[string]string{io.kubernetes.container.hash: 41bb0f57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ef224683e545b78726585edbafdf9c3b80c7b83d088562b846340623b8253e,PodSandboxId:f1e563ae45f7b9c814dc7d2dc96c9fadfdf62b04b832f9e077aaa237f4ac94bb,M
etadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1711994921222153224,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-fs5mb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f9b358f-3334-45d6-bf37-8b9d4a5cdf22,},Annotations:map[string]string{io.kubernetes.container.hash: 9599748f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3faa5342d6428de38360e16637acb063c8c3cf1a92e0a8e622caba6c201b0179,PodSandboxId:f1e563ae45f7b9c814dc7d2dc96c9f
adfdf62b04b832f9e077aaa237f4ac94bb,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1711994918973440283,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-fs5mb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f9b358f-3334-45d6-bf37-8b9d4a5cdf22,},Annotations:map[string]string{io.kubernetes.container.hash: d58e7b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e227984749bb93fff0ff88eae92585185cc29994e2b3930bedaa886815f40a,PodSandboxId
:f1e563ae45f7b9c814dc7d2dc96c9fadfdf62b04b832f9e077aaa237f4ac94bb,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1711994917898946532,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-fs5mb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f9b358f-3334-45d6-bf37-8b9d4a5cdf22,},Annotations:map[string]string{io.kubernetes.container.hash: c6d91cb0,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.t
erminationGracePeriod: 30,},},&Container{Id:c934418b858002c23c1d84d6355410a61d6aaaa4dd3ac0d875fbfe8001a6d6df,PodSandboxId:f1e563ae45f7b9c814dc7d2dc96c9fadfdf62b04b832f9e077aaa237f4ac94bb,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1711994916091263120,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-fs5mb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f9b358f-3334-45d6-bf37-8b9d4a5cdf22,},Annotations:map[string]string{io.kubernetes.container.hash: 443bf787,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a9e53b6e259a95483b00d43f09340fc6cc7d02d1879784dead75b0cbea39c9c,PodSandboxId:d0447d80d95216b8d3a2be7a4fc89759ae380795ff8af281b591fd91bc174862,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1711994915447158393,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: gcp-auth-certs-patch-nfbsp,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 4f82e7e6-da53-4b94-8d54-77b063d994e6,},Annotations:map[string]string{io.kubernetes.container.hash: 66d4f610,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:705f1e37a6d00cd730235755a8d2b4bc0cd2a60f0bd04078ed7d1ba99f26d934,PodSandboxId:6b8a5331670695717c5d965895bb81d4fc08c4711c601185d152ed692d0ebb12,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1711994914130294755,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: gcp-auth-certs-create-wjwsh,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: f534b1b5-0dac-4293-81d9-0db5251a6e2e,},Annotations:map[string]string{io.kubernetes.container.hash: 1cadb88c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bea75c05375537df03ae94253a91625e1a3eec1ec21435d3984304a5be28d7f,PodSandboxId:31ea88edcf3332649acd59d7425e8517caf3cc0ad6f57cba6e017aae051d734c,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1711994913889245772,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64f7572-e225-467c-ab07-def542d15d28,},Annotations:map[string]string{io.kubernetes.container.hash: fa195f7f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e0ad8452cc41133914d205170c711cc56fc253bec7f1c813c73f6391ff36c0,PodSandboxId:f1e563ae45f7b9c814dc7d2dc96c9fadfdf62b04b832f9e077aaa237f4ac94bb,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1711994912284199530,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-fs5mb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f9b358f-3334-45d6-bf37-8b9d4a5cdf22,},Annotations:map[string]string{io.kubernetes.container.hash: aab1007e,io.kube
rnetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c96d7a284f1658126a51b795f141b9c9addbff857ee7c9e4cb68285d78abfef,PodSandboxId:672bb4852a1b99921af72a16b8ca9aff195451d4f635ffece9c58095e8ae68e6,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1711994910087429820,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b630782c-3751-4074-92ca-f544f91651c3,},Annotations:map[string]string{io.kubernetes.container.
hash: 5b2f0cd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:098753116b336981b4c319ce089d6002b6770f52522172ddce33407332186c6a,PodSandboxId:0fde13a63c3c832b04d8a7c08b07f4df712a102feca2acd09788b583d5ad2948,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1711994908630186614,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wf88x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f9e23dd5-de9a-4127-98c8-7095ea4a801f,},Annotations:map[string]string{io.kubernetes.container.hash: 68a3b0c1,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f9b111e671c93c26ca5599786b77e68584d0332fd237a33007f027e1aa910ce,PodSandboxId:23b05486602dd4faff956afd2dc7aec978561f2cf08c38b42ab85c3aed6582d2,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1711994908506936788,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-82sh9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f780133b-a5c0-4eab-8f19-bd1181b15957,},Annotations:map[string]string{io.kubern
etes.container.hash: 31ab90cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50e175f9186328af059065586730b2ec090e49b21a21cb0fcdb196d380307b88,PodSandboxId:e5c0e7ac8b7fd18223cecf3c1239a3fc6536fc32efc5fd104624d6b6555eb6a8,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1711994906061597015,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-gpmcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b7
1b6f-9ddf-43ca-9893-1895d0c71024,},Annotations:map[string]string{io.kubernetes.container.hash: 84f0394a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c960c36158e755ee83fa70ef03cc977ffa9f22ddb7dd14d7ec07d13c652a631c,PodSandboxId:6fdadeba48bc383f87f8f4d5cb87d1264cfdca6c9c2aa50c4b579b8234f95df6,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1711994905921665552,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-
rtgfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561da000-21ec-4e67-a8df-8aa9357a125f,},Annotations:map[string]string{io.kubernetes.container.hash: e7971b2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c738a051f99c97d24e83233b7777445dfab452e800c256a6652be42d3476feca,PodSandboxId:84c4fd4e30f718a6e6d401a1ea8f368c28e48c70076adae7c696f2e63357f8da,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1711994904129082239,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-
n4pp4,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 85d661ab-6d0c-4c5d-80d7-5e87e8e096b0,},Annotations:map[string]string{io.kubernetes.container.hash: b71814d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61d5d77980e334cf94c083f97f1da43d08ada0ccac59d168a909e40a40f81441,PodSandboxId:692cd322213ca4c984b78386a2827ddd84db52088a37d98c674f9d33f132110c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,Crea
tedAt:1711994883598157417,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-vj7b5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: a64cbbac-5366-4a37-ae93-33b625f92465,},Annotations:map[string]string{io.kubernetes.container.hash: d9b7c7ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d02f777dc4fb5d30481463a72a7f1514d457c51769db6f34c335fdc610985307,PodSandboxId:5af7e12bb8286138ae2697e329336eb97c6a67530e5f0a42b7a6d7e73847d235,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:538fb31f832e76c93f10035cb609c56fc5cd18b3cd85a3ba50699572c3c5dc50,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1a9bd6f561b5c8cb73e4847b4f
8044ef2d44a79008ee4cc46d71a87bbbebce32,State:CONTAINER_RUNNING,CreatedAt:1711994879975825975,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5446596998-pvd79,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5144be1b-5f2f-4db7-8c66-bb679aa31a3f,},Annotations:map[string]string{io.kubernetes.container.hash: fb58b090,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:754bc383c8378e0117710a75def84081e0a5360ffebec2354063cd37e4fe1f8e,PodSandboxId:1f074153fdff81588d9c61d749dbf860c0d27fdaf4dedbb54422207c27799154,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:g
cr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_RUNNING,CreatedAt:1711994875273684874,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f402a8d-9920-4ab9-b8f5-b24ff9528a04,},Annotations:map[string]string{io.kubernetes.container.hash: 15e79633,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d879594bec103909d539395a08207a09bcebce1a01b59adb744f55f6fc38269c,PodSandboxId:5802baa7237fc28883
b3905cb7db5e7e518fc3198235a21327ba38e3a7d10928,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711994856582826816,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d770fd8-541f-4ea5-bbff-8bdba366a91b,},Annotations:map[string]string{io.kubernetes.container.hash: 314cca10,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7592332265d29603d706dd8ae6215012ab699095b4cb81b5a287cb3566a87f87,PodSandboxId:ecf893481468496183e00c27b0928a
ff583b346c96cef194ccdda81157cbec21,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711994851527592538,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-7fhsg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e044680-92e0-46d9-aa37-6e95b606d9c6,},Annotations:map[string]string{io.kubernetes.container.hash: befc28bd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4364158240fbf7e504278f6465f4ca09aafa1f1add53cc175f8dfe119fce1326,PodSandboxId:8558ddf14bd58e21c04f7531200a71a09b899326b0d4218e33fd11d86c736cc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711994849753251261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fz2ml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6263627a-2781-45c7-b2a4-b06ab6c04879,},Annotations:map[string]string{io.kubernetes.container.hash: 98de96da,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubern
etes.pod.terminationGracePeriod: 30,},},&Container{Id:ac45958565c5aab2fa2b8390aeaf778faac10f25c756cb29e10b4afbcd107bd5,PodSandboxId:eb20eea5d33ff49ae7e9b03022f9891cf96444063f21b85bdf9b424fe286dc03,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711994830505372202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e438293d0084e7b4bf6faae6a01bf5d8,},Annotations:map[string]string{io.kubernetes.container.hash: 36f5a6fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:c2d54581a0ef573e289f02ca7ad4f3eeb8b3f9014afdc78a9569a0c254bcfb09,PodSandboxId:10409e55554a74bc18e3debddb4d100156f8c514ce7ad47ce71cb8cbe26b42cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711994830453287289,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fee49c26fd8f5049e6dcf4449cacb5b,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,}
,},&Container{Id:03fb53a7e5f85c59443d18637bfcbf0ffa22527f75cc75a7845f585f87ee236d,PodSandboxId:4739cf05a2d89e506b62a362966c43cde11675026153960c48f79f290a804a94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711994830448525577,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a2c1dd6e026812c08404e38be364fa4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
bd6fdf952e5501e85339e87407a72c5550cda3b57b1bdca9f53b58f499f8b941,PodSandboxId:f34902d274111dc89384043635a3c135d86fe98a2df83385c9a9c456769aaff6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711994830419294201,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-881427,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37fe97e449b1812962375c600235bf53,},Annotations:map[string]string{io.kubernetes.container.hash: e4d7eaf4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inter
ceptors.go:74" id=7ee134dc-b271-4c7d-ab47-4eb31cd3c5f0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	00d1a77ee0d72       docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7                                              2 seconds ago        Running             task-pv-container                        0                   fe11c103ac432       task-pv-pod
	94d38dc5d6cb3       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                                             4 seconds ago        Exited              helper-pod                               0                   323d59b97b0e5       helper-pod-delete-pvc-de16cdd6-519d-46fd-98d1-b0afa2a23e43
	873512ffa33ab       docker.io/library/busybox@sha256:4be429a5fbb2e71ae7958bfa558bc637cf3a61baf40a708cb8fff532b39e52d0                                            7 seconds ago        Exited              busybox                                  0                   ba301c3366c5b       test-local-path
	5aac709d5a798       docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f                                                13 seconds ago       Exited              helm-test                                0                   94438aba1762a       helm-test
	077ff767fcc7c       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff                            22 seconds ago       Exited              gadget                                   2                   6c2154c981514       gadget-x552z
	0125f0c6d4aac       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 23 seconds ago       Running             gcp-auth                                 0                   33fad9b23a3f1       gcp-auth-7d69788767-bhk6q
	f8ae3a6404aa4       registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c                             25 seconds ago       Running             controller                               0                   d5902c0d68f44       ingress-nginx-controller-65496f9567-2rjjb
	a03d285d05a66       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          32 seconds ago       Running             csi-snapshotter                          0                   f1e563ae45f7b       csi-hostpathplugin-fs5mb
	22ef224683e54       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          33 seconds ago       Running             csi-provisioner                          0                   f1e563ae45f7b       csi-hostpathplugin-fs5mb
	3faa5342d6428       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            36 seconds ago       Running             liveness-probe                           0                   f1e563ae45f7b       csi-hostpathplugin-fs5mb
	b2e227984749b       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           37 seconds ago       Running             hostpath                                 0                   f1e563ae45f7b       csi-hostpathplugin-fs5mb
	c934418b85800       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                39 seconds ago       Running             node-driver-registrar                    0                   f1e563ae45f7b       csi-hostpathplugin-fs5mb
	7a9e53b6e259a       b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135                                                                             39 seconds ago       Exited              patch                                    1                   d0447d80d9521       gcp-auth-certs-patch-nfbsp
	705f1e37a6d00       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023                   41 seconds ago       Exited              create                                   0                   6b8a533167069       gcp-auth-certs-create-wjwsh
	1bea75c053755       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             41 seconds ago       Running             csi-attacher                             0                   31ea88edcf333       csi-hostpath-attacher-0
	a7e0ad8452cc4       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   42 seconds ago       Running             csi-external-health-monitor-controller   0                   f1e563ae45f7b       csi-hostpathplugin-fs5mb
	4c96d7a284f16       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              45 seconds ago       Running             csi-resizer                              0                   672bb4852a1b9       csi-hostpath-resizer-0
	098753116b336       b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135                                                                             46 seconds ago       Exited              patch                                    1                   0fde13a63c3c8       ingress-nginx-admission-patch-wf88x
	2f9b111e671c9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023                   46 seconds ago       Exited              create                                   0                   23b05486602dd       ingress-nginx-admission-create-82sh9
	50e175f918632       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      49 seconds ago       Running             volume-snapshot-controller               0                   e5c0e7ac8b7fd       snapshot-controller-58dbcc7b99-gpmcg
	c960c36158e75       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      49 seconds ago       Running             volume-snapshot-controller               0                   6fdadeba48bc3       snapshot-controller-58dbcc7b99-rtgfk
	c738a051f99c9       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                              51 seconds ago       Running             yakd                                     0                   84c4fd4e30f71       yakd-dashboard-9947fc6bf-n4pp4
	61d5d77980e33       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   692cd322213ca       local-path-provisioner-78b46b4d5c-vj7b5
	d02f777dc4fb5       gcr.io/cloud-spanner-emulator/emulator@sha256:538fb31f832e76c93f10035cb609c56fc5cd18b3cd85a3ba50699572c3c5dc50                               About a minute ago   Running             cloud-spanner-emulator                   0                   5af7e12bb8286       cloud-spanner-emulator-5446596998-pvd79
	754bc383c8378       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             About a minute ago   Running             minikube-ingress-dns                     0                   1f074153fdff8       kube-ingress-dns-minikube
	d879594bec103       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   5802baa7237fc       storage-provisioner
	7592332265d29       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                                             About a minute ago   Running             coredns                                  0                   ecf8934814684       coredns-76f75df574-7fhsg
	4364158240fbf       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                                                             About a minute ago   Running             kube-proxy                               0                   8558ddf14bd58       kube-proxy-fz2ml
	ac45958565c5a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                                             2 minutes ago        Running             etcd                                     0                   eb20eea5d33ff       etcd-addons-881427
	c2d54581a0ef5       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                                                             2 minutes ago        Running             kube-controller-manager                  0                   10409e55554a7       kube-controller-manager-addons-881427
	03fb53a7e5f85       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                                                             2 minutes ago        Running             kube-scheduler                           0                   4739cf05a2d89       kube-scheduler-addons-881427
	bd6fdf952e550       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                                                             2 minutes ago        Running             kube-apiserver                           0                   f34902d274111       kube-apiserver-addons-881427
	
	
	==> coredns [7592332265d29603d706dd8ae6215012ab699095b4cb81b5a287cb3566a87f87] <==
	[INFO] 10.244.0.8:35944 - 58706 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000054988s
	[INFO] 10.244.0.8:55356 - 50083 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000042495s
	[INFO] 10.244.0.8:55356 - 9150 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000033261s
	[INFO] 10.244.0.8:52077 - 63713 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000035099s
	[INFO] 10.244.0.8:52077 - 62179 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000050102s
	[INFO] 10.244.0.8:47439 - 4739 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000046024s
	[INFO] 10.244.0.8:47439 - 64190 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000077618s
	[INFO] 10.244.0.8:39389 - 16057 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000167273s
	[INFO] 10.244.0.8:39389 - 1972 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000037108s
	[INFO] 10.244.0.8:54310 - 63209 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000132577s
	[INFO] 10.244.0.8:54310 - 37871 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00006181s
	[INFO] 10.244.0.8:45038 - 47355 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000186191s
	[INFO] 10.244.0.8:45038 - 47865 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00015532s
	[INFO] 10.244.0.8:59837 - 12201 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000119047s
	[INFO] 10.244.0.8:59837 - 37047 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00037651s
	[INFO] 10.244.0.22:60318 - 31283 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000235588s
	[INFO] 10.244.0.22:44948 - 39610 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00009288s
	[INFO] 10.244.0.22:43484 - 19852 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000091583s
	[INFO] 10.244.0.22:52045 - 35786 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.005156998s
	[INFO] 10.244.0.22:42144 - 25424 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000110329s
	[INFO] 10.244.0.22:55856 - 39525 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00035729s
	[INFO] 10.244.0.22:58614 - 52153 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001151698s
	[INFO] 10.244.0.22:43549 - 63757 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 420 0.002544635s
	[INFO] 10.244.0.25:48343 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000259044s
	[INFO] 10.244.0.25:36941 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000109538s
	
	
	==> describe nodes <==
	Name:               addons-881427
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-881427
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=addons-881427
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_01T18_07_16_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-881427
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-881427"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 18:07:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-881427
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 18:09:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 18:08:59 +0000   Mon, 01 Apr 2024 18:07:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 18:08:59 +0000   Mon, 01 Apr 2024 18:07:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 18:08:59 +0000   Mon, 01 Apr 2024 18:07:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 18:08:59 +0000   Mon, 01 Apr 2024 18:07:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    addons-881427
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 13a0cedeceb1427eafc5b915b829cb6d
	  System UUID:                13a0cede-ceb1-427e-afc5-b915b829cb6d
	  Boot ID:                    63021f69-00f1-4f7e-867e-931aaaef5107
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace            Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------            ----                                        ------------  ----------  ---------------  -------------  ---
	  default              cloud-spanner-emulator-5446596998-pvd79     0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  default              task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  gadget               gadget-x552z                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  gcp-auth             gcp-auth-7d69788767-bhk6q                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  ingress-nginx        ingress-nginx-controller-65496f9567-2rjjb   100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         97s
	  kube-system          coredns-76f75df574-7fhsg                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     106s
	  kube-system          csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system          csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system          csi-hostpathplugin-fs5mb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system          etcd-addons-881427                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         119s
	  kube-system          kube-apiserver-addons-881427                250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system          kube-controller-manager-addons-881427       200m (10%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system          kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system          kube-proxy-fz2ml                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system          kube-scheduler-addons-881427                100m (5%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system          snapshot-controller-58dbcc7b99-gpmcg        0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system          snapshot-controller-58dbcc7b99-rtgfk        0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system          storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  local-path-storage   local-path-provisioner-78b46b4d5c-vj7b5     0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  yakd-dashboard       yakd-dashboard-9947fc6bf-n4pp4              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     99s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 104s  kube-proxy       
	  Normal  Starting                 119s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  119s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  119s  kubelet          Node addons-881427 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s  kubelet          Node addons-881427 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s  kubelet          Node addons-881427 status is now: NodeHasSufficientPID
	  Normal  NodeReady                118s  kubelet          Node addons-881427 status is now: NodeReady
	  Normal  RegisteredNode           107s  node-controller  Node addons-881427 event: Registered Node addons-881427 in Controller
	
	
	==> dmesg <==
	[  +0.142817] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.308051] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[Apr 1 18:07] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +0.063034] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.595804] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +0.494501] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.777773] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.101044] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.851326] systemd-fstab-generator[1511]: Ignoring "noauto" option for root device
	[  +0.008371] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.711170] kauditd_printk_skb: 92 callbacks suppressed
	[  +5.025776] kauditd_printk_skb: 122 callbacks suppressed
	[  +6.622733] kauditd_printk_skb: 46 callbacks suppressed
	[  +8.672226] kauditd_printk_skb: 19 callbacks suppressed
	[Apr 1 18:08] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.074093] kauditd_printk_skb: 7 callbacks suppressed
	[ +11.791362] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.759552] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.082966] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.109031] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.096641] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.230632] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.067503] kauditd_printk_skb: 19 callbacks suppressed
	[Apr 1 18:09] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.370253] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [ac45958565c5aab2fa2b8390aeaf778faac10f25c756cb29e10b4afbcd107bd5] <==
	{"level":"info","ts":"2024-04-01T18:08:32.154107Z","caller":"traceutil/trace.go:171","msg":"trace[1962955083] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1028; }","duration":"396.688883ms","start":"2024-04-01T18:08:31.757412Z","end":"2024-04-01T18:08:32.154101Z","steps":["trace[1962955083] 'agreement among raft nodes before linearized reading'  (duration: 396.595544ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T18:08:32.154143Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-01T18:08:31.757393Z","time spent":"396.744182ms","remote":"127.0.0.1:50404","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11190,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-04-01T18:08:32.152296Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.545538ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-04-01T18:08:32.154277Z","caller":"traceutil/trace.go:171","msg":"trace[2096374394] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:0; response_revision:1028; }","duration":"118.632393ms","start":"2024-04-01T18:08:32.03564Z","end":"2024-04-01T18:08:32.154272Z","steps":["trace[2096374394] 'agreement among raft nodes before linearized reading'  (duration: 116.534231ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T18:08:32.152591Z","caller":"traceutil/trace.go:171","msg":"trace[1510734342] transaction","detail":"{read_only:false; response_revision:1028; number_of_response:1; }","duration":"120.023509ms","start":"2024-04-01T18:08:32.032558Z","end":"2024-04-01T18:08:32.152581Z","steps":["trace[1510734342] 'process raft request'  (duration: 117.414335ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T18:08:32.153288Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.111609ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85321"}
	{"level":"info","ts":"2024-04-01T18:08:32.155574Z","caller":"traceutil/trace.go:171","msg":"trace[1412238551] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1028; }","duration":"117.416552ms","start":"2024-04-01T18:08:32.038148Z","end":"2024-04-01T18:08:32.155565Z","steps":["trace[1412238551] 'agreement among raft nodes before linearized reading'  (duration: 114.648979ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T18:08:46.708575Z","caller":"traceutil/trace.go:171","msg":"trace[1833287120] linearizableReadLoop","detail":"{readStateIndex:1163; appliedIndex:1162; }","duration":"452.511698ms","start":"2024-04-01T18:08:46.256051Z","end":"2024-04-01T18:08:46.708562Z","steps":["trace[1833287120] 'read index received'  (duration: 452.38851ms)","trace[1833287120] 'applied index is now lower than readState.Index'  (duration: 122.747µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-01T18:08:46.708968Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"452.901243ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-04-01T18:08:46.709001Z","caller":"traceutil/trace.go:171","msg":"trace[1790539493] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1132; }","duration":"452.946706ms","start":"2024-04-01T18:08:46.256047Z","end":"2024-04-01T18:08:46.708994Z","steps":["trace[1790539493] 'agreement among raft nodes before linearized reading'  (duration: 452.839826ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T18:08:46.70902Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-01T18:08:46.256015Z","time spent":"453.000724ms","remote":"127.0.0.1:50404","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11476,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"info","ts":"2024-04-01T18:08:46.709207Z","caller":"traceutil/trace.go:171","msg":"trace[745578692] transaction","detail":"{read_only:false; response_revision:1132; number_of_response:1; }","duration":"466.920783ms","start":"2024-04-01T18:08:46.242278Z","end":"2024-04-01T18:08:46.709199Z","steps":["trace[745578692] 'process raft request'  (duration: 466.198693ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T18:08:46.708968Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"250.792971ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-04-01T18:08:46.709257Z","caller":"traceutil/trace.go:171","msg":"trace[208484155] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1132; }","duration":"251.089944ms","start":"2024-04-01T18:08:46.458157Z","end":"2024-04-01T18:08:46.709247Z","steps":["trace[208484155] 'agreement among raft nodes before linearized reading'  (duration: 250.697337ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T18:08:46.709259Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-01T18:08:46.242264Z","time spent":"466.957691ms","remote":"127.0.0.1:50480","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":482,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1114 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:419 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-04-01T18:08:49.200277Z","caller":"traceutil/trace.go:171","msg":"trace[871224223] transaction","detail":"{read_only:false; response_revision:1136; number_of_response:1; }","duration":"147.845564ms","start":"2024-04-01T18:08:49.052411Z","end":"2024-04-01T18:08:49.200257Z","steps":["trace[871224223] 'process raft request'  (duration: 147.299539ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T18:09:06.803152Z","caller":"traceutil/trace.go:171","msg":"trace[1279358777] linearizableReadLoop","detail":"{readStateIndex:1342; appliedIndex:1341; }","duration":"214.879381ms","start":"2024-04-01T18:09:06.588254Z","end":"2024-04-01T18:09:06.803133Z","steps":["trace[1279358777] 'read index received'  (duration: 214.701702ms)","trace[1279358777] 'applied index is now lower than readState.Index'  (duration: 176.798µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-01T18:09:06.803248Z","caller":"traceutil/trace.go:171","msg":"trace[1034489834] transaction","detail":"{read_only:false; response_revision:1304; number_of_response:1; }","duration":"302.392676ms","start":"2024-04-01T18:09:06.500849Z","end":"2024-04-01T18:09:06.803241Z","steps":["trace[1034489834] 'process raft request'  (duration: 302.147284ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T18:09:06.803336Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-01T18:09:06.500833Z","time spent":"302.432055ms","remote":"127.0.0.1:50480","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":486,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:1196 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:427 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"warn","ts":"2024-04-01T18:09:06.803402Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.043415ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/yakd-dashboard/\" range_end:\"/registry/pods/yakd-dashboard0\" ","response":"range_response_count:1 size:4325"}
	{"level":"info","ts":"2024-04-01T18:09:06.803464Z","caller":"traceutil/trace.go:171","msg":"trace[742409902] range","detail":"{range_begin:/registry/pods/yakd-dashboard/; range_end:/registry/pods/yakd-dashboard0; response_count:1; response_revision:1304; }","duration":"140.130642ms","start":"2024-04-01T18:09:06.663316Z","end":"2024-04-01T18:09:06.803447Z","steps":["trace[742409902] 'agreement among raft nodes before linearized reading'  (duration: 140.009579ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T18:09:06.803614Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.891474ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-04-01T18:09:06.803632Z","caller":"traceutil/trace.go:171","msg":"trace[241735526] range","detail":"{range_begin:/registry/csinodes/; range_end:/registry/csinodes0; response_count:0; response_revision:1304; }","duration":"136.986883ms","start":"2024-04-01T18:09:06.66664Z","end":"2024-04-01T18:09:06.803627Z","steps":["trace[241735526] 'agreement among raft nodes before linearized reading'  (duration: 136.949004ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T18:09:06.803637Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.384506ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/kube-system/registry\" ","response":"range_response_count:1 size:2820"}
	{"level":"info","ts":"2024-04-01T18:09:06.803657Z","caller":"traceutil/trace.go:171","msg":"trace[278867364] range","detail":"{range_begin:/registry/controllers/kube-system/registry; range_end:; response_count:1; response_revision:1304; }","duration":"215.430599ms","start":"2024-04-01T18:09:06.58822Z","end":"2024-04-01T18:09:06.80365Z","steps":["trace[278867364] 'agreement among raft nodes before linearized reading'  (duration: 215.384476ms)"],"step_count":1}
	
	
	==> gcp-auth [0125f0c6d4aacc178dbf9901b3023e0afc815152e540ad3c84a6083e43f9abca] <==
	2024/04/01 18:08:51 GCP Auth Webhook started!
	2024/04/01 18:08:58 Ready to marshal response ...
	2024/04/01 18:08:58 Ready to write response ...
	2024/04/01 18:09:00 Ready to marshal response ...
	2024/04/01 18:09:00 Ready to write response ...
	2024/04/01 18:09:00 Ready to marshal response ...
	2024/04/01 18:09:00 Ready to write response ...
	2024/04/01 18:09:03 Ready to marshal response ...
	2024/04/01 18:09:03 Ready to write response ...
	2024/04/01 18:09:09 Ready to marshal response ...
	2024/04/01 18:09:09 Ready to write response ...
	2024/04/01 18:09:12 Ready to marshal response ...
	2024/04/01 18:09:12 Ready to write response ...
	
	
	==> kernel <==
	 18:09:15 up 2 min,  0 users,  load average: 2.89, 1.61, 0.63
	Linux addons-881427 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bd6fdf952e5501e85339e87407a72c5550cda3b57b1bdca9f53b58f499f8b941] <==
	I0401 18:07:37.335347       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0401 18:07:37.335419       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0401 18:07:37.381908       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0401 18:07:37.381969       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0401 18:07:38.415148       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.101.156.13"}
	I0401 18:07:38.494137       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.104.212.36"}
	I0401 18:07:38.583649       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0401 18:07:41.052578       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.96.27.111"}
	I0401 18:07:41.083688       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0401 18:07:41.307208       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.111.253.188"}
	I0401 18:07:42.940526       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.104.5.207"}
	W0401 18:08:15.968031       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 18:08:15.968146       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0401 18:08:15.969212       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.150.29:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.150.29:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.150.29:443: connect: connection refused
	E0401 18:08:15.969666       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.150.29:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.150.29:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.150.29:443: connect: connection refused
	E0401 18:08:15.975915       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.150.29:443/apis/metrics.k8s.io/v1beta1: Get "https://10.110.150.29:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.110.150.29:443: connect: connection refused
	I0401 18:08:16.082169       1 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0401 18:08:32.147864       1 trace.go:236] Trace[437634442]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:a7bc285c-7ebb-4446-b5e7-bdd30df8a798,client:192.168.39.214,api-group:batch,api-version:v1,name:ingress-nginx-admission-patch,subresource:status,namespace:ingress-nginx,protocol:HTTP/2.0,resource:jobs,scope:resource,url:/apis/batch/v1/namespaces/ingress-nginx/jobs/ingress-nginx-admission-patch/status,user-agent:kube-controller-manager/v1.29.3 (linux/amd64) kubernetes/6813625/system:serviceaccount:kube-system:job-controller,verb:PUT (01-Apr-2024 18:08:31.646) (total time: 501ms):
	Trace[437634442]: ["GuaranteedUpdate etcd3" audit-id:a7bc285c-7ebb-4446-b5e7-bdd30df8a798,key:/jobs/ingress-nginx/ingress-nginx-admission-patch,type:*batch.Job,resource:jobs.batch 500ms (18:08:31.646)
	Trace[437634442]:  ---"Txn call completed" 499ms (18:08:32.147)]
	Trace[437634442]: [501.118872ms] [501.118872ms] END
	E0401 18:09:10.713814       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0401 18:09:10.720162       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0401 18:09:10.727257       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [c2d54581a0ef573e289f02ca7ad4f3eeb8b3f9014afdc78a9569a0c254bcfb09] <==
	I0401 18:08:38.157607       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0401 18:08:38.167578       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0401 18:08:38.176251       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0401 18:08:38.176351       1 event.go:376] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0401 18:08:38.206500       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0401 18:08:41.616500       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="6.177615ms"
	I0401 18:08:41.617712       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="35.827µs"
	I0401 18:08:50.575358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="123.656µs"
	I0401 18:08:52.597604       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-7d69788767" duration="11.643961ms"
	I0401 18:08:52.597862       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-7d69788767" duration="181.059µs"
	I0401 18:08:53.084249       1 event.go:376] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0401 18:08:58.079599       1 event.go:376] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0401 18:08:59.670912       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-75d6c48ddd" duration="10.992µs"
	I0401 18:09:00.063615       1 event.go:376] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0401 18:09:00.237518       1 event.go:376] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0401 18:09:02.897036       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="22.586565ms"
	I0401 18:09:02.898971       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="99.651µs"
	I0401 18:09:03.515893       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="3.615µs"
	I0401 18:09:06.811397       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="7.786µs"
	I0401 18:09:07.030185       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0401 18:09:07.085350       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0401 18:09:08.011061       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0401 18:09:08.041439       1 job_controller.go:554] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0401 18:09:10.478865       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="5.933µs"
	I0401 18:09:11.389668       1 event.go:376] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	
	
	==> kube-proxy [4364158240fbf7e504278f6465f4ca09aafa1f1add53cc175f8dfe119fce1326] <==
	I0401 18:07:30.526214       1 server_others.go:72] "Using iptables proxy"
	I0401 18:07:30.551484       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.214"]
	I0401 18:07:30.637175       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0401 18:07:30.637193       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 18:07:30.637205       1 server_others.go:168] "Using iptables Proxier"
	I0401 18:07:30.644006       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 18:07:30.644182       1 server.go:865] "Version info" version="v1.29.3"
	I0401 18:07:30.644220       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 18:07:30.647365       1 config.go:188] "Starting service config controller"
	I0401 18:07:30.647413       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0401 18:07:30.647439       1 config.go:97] "Starting endpoint slice config controller"
	I0401 18:07:30.647443       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0401 18:07:30.648204       1 config.go:315] "Starting node config controller"
	I0401 18:07:30.648241       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0401 18:07:30.747919       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0401 18:07:30.748153       1 shared_informer.go:318] Caches are synced for service config
	I0401 18:07:30.748367       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [03fb53a7e5f85c59443d18637bfcbf0ffa22527f75cc75a7845f585f87ee236d] <==
	W0401 18:07:14.082231       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 18:07:14.082321       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0401 18:07:14.100481       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 18:07:14.102890       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0401 18:07:14.137795       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0401 18:07:14.138021       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0401 18:07:14.191511       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 18:07:14.191702       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0401 18:07:14.198218       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 18:07:14.198304       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0401 18:07:14.283781       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 18:07:14.284624       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0401 18:07:14.284580       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 18:07:14.285819       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0401 18:07:14.389013       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 18:07:14.389565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0401 18:07:14.409029       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 18:07:14.409251       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0401 18:07:14.416193       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0401 18:07:14.416258       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0401 18:07:14.448321       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 18:07:14.448617       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0401 18:07:14.671805       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 18:07:14.671935       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0401 18:07:17.572681       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 18:09:12 addons-881427 kubelet[1292]: I0401 18:09:12.185927    1292 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94226317-da40-4927-9343-b8066304e10d-kube-api-access-bztl8" (OuterVolumeSpecName: "kube-api-access-bztl8") pod "94226317-da40-4927-9343-b8066304e10d" (UID: "94226317-da40-4927-9343-b8066304e10d"). InnerVolumeSpecName "kube-api-access-bztl8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 01 18:09:12 addons-881427 kubelet[1292]: I0401 18:09:12.252435    1292 topology_manager.go:215] "Topology Admit Handler" podUID="cc4381a4-b5ce-4478-a676-6d43d9ae14a3" podNamespace="default" podName="task-pv-pod"
	Apr 01 18:09:12 addons-881427 kubelet[1292]: E0401 18:09:12.252536    1292 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="94226317-da40-4927-9343-b8066304e10d" containerName="helper-pod"
	Apr 01 18:09:12 addons-881427 kubelet[1292]: I0401 18:09:12.252576    1292 memory_manager.go:354] "RemoveStaleState removing state" podUID="94226317-da40-4927-9343-b8066304e10d" containerName="helper-pod"
	Apr 01 18:09:12 addons-881427 kubelet[1292]: I0401 18:09:12.278986    1292 reconciler_common.go:300] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/94226317-da40-4927-9343-b8066304e10d-script\") on node \"addons-881427\" DevicePath \"\""
	Apr 01 18:09:12 addons-881427 kubelet[1292]: I0401 18:09:12.279017    1292 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bztl8\" (UniqueName: \"kubernetes.io/projected/94226317-da40-4927-9343-b8066304e10d-kube-api-access-bztl8\") on node \"addons-881427\" DevicePath \"\""
	Apr 01 18:09:12 addons-881427 kubelet[1292]: I0401 18:09:12.279027    1292 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/94226317-da40-4927-9343-b8066304e10d-gcp-creds\") on node \"addons-881427\" DevicePath \"\""
	Apr 01 18:09:12 addons-881427 kubelet[1292]: I0401 18:09:12.279037    1292 reconciler_common.go:300] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/94226317-da40-4927-9343-b8066304e10d-data\") on node \"addons-881427\" DevicePath \"\""
	Apr 01 18:09:12 addons-881427 kubelet[1292]: I0401 18:09:12.379982    1292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3aeb1c98-b573-459b-a0ba-8a8c4ac488f1\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^f0db980c-f052-11ee-8bff-ca42f783c959\") pod \"task-pv-pod\" (UID: \"cc4381a4-b5ce-4478-a676-6d43d9ae14a3\") " pod="default/task-pv-pod"
	Apr 01 18:09:12 addons-881427 kubelet[1292]: I0401 18:09:12.380138    1292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/cc4381a4-b5ce-4478-a676-6d43d9ae14a3-gcp-creds\") pod \"task-pv-pod\" (UID: \"cc4381a4-b5ce-4478-a676-6d43d9ae14a3\") " pod="default/task-pv-pod"
	Apr 01 18:09:12 addons-881427 kubelet[1292]: I0401 18:09:12.380294    1292 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8xkb\" (UniqueName: \"kubernetes.io/projected/cc4381a4-b5ce-4478-a676-6d43d9ae14a3-kube-api-access-x8xkb\") pod \"task-pv-pod\" (UID: \"cc4381a4-b5ce-4478-a676-6d43d9ae14a3\") " pod="default/task-pv-pod"
	Apr 01 18:09:12 addons-881427 kubelet[1292]: I0401 18:09:12.494614    1292 operation_generator.go:664] "MountVolume.MountDevice succeeded for volume \"pvc-3aeb1c98-b573-459b-a0ba-8a8c4ac488f1\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^f0db980c-f052-11ee-8bff-ca42f783c959\") pod \"task-pv-pod\" (UID: \"cc4381a4-b5ce-4478-a676-6d43d9ae14a3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/be8fdcf3e749915d04f34643c8d4741cfa9b064e6261da8261a8a18985026f09/globalmount\"" pod="default/task-pv-pod"
	Apr 01 18:09:12 addons-881427 kubelet[1292]: I0401 18:09:12.945540    1292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="323d59b97b0e5c958879ed0f76feff2d8662a208370edda331c4848a9c5bd72b"
	Apr 01 18:09:13 addons-881427 kubelet[1292]: I0401 18:09:13.964271    1292 scope.go:117] "RemoveContainer" containerID="fc750c93a5b22a3323f8dc49ffecc7d320dbe4eba6894a49e8a348443bae2bf2"
	Apr 01 18:09:13 addons-881427 kubelet[1292]: I0401 18:09:13.994057    1292 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mrl6\" (UniqueName: \"kubernetes.io/projected/dd4046ef-ce6a-48e2-9d0e-bf3aa98f9156-kube-api-access-2mrl6\") pod \"dd4046ef-ce6a-48e2-9d0e-bf3aa98f9156\" (UID: \"dd4046ef-ce6a-48e2-9d0e-bf3aa98f9156\") "
	Apr 01 18:09:13 addons-881427 kubelet[1292]: I0401 18:09:13.994131    1292 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"device-plugin\" (UniqueName: \"kubernetes.io/host-path/dd4046ef-ce6a-48e2-9d0e-bf3aa98f9156-device-plugin\") pod \"dd4046ef-ce6a-48e2-9d0e-bf3aa98f9156\" (UID: \"dd4046ef-ce6a-48e2-9d0e-bf3aa98f9156\") "
	Apr 01 18:09:13 addons-881427 kubelet[1292]: I0401 18:09:13.994269    1292 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd4046ef-ce6a-48e2-9d0e-bf3aa98f9156-device-plugin" (OuterVolumeSpecName: "device-plugin") pod "dd4046ef-ce6a-48e2-9d0e-bf3aa98f9156" (UID: "dd4046ef-ce6a-48e2-9d0e-bf3aa98f9156"). InnerVolumeSpecName "device-plugin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Apr 01 18:09:14 addons-881427 kubelet[1292]: I0401 18:09:14.018811    1292 scope.go:117] "RemoveContainer" containerID="fc750c93a5b22a3323f8dc49ffecc7d320dbe4eba6894a49e8a348443bae2bf2"
	Apr 01 18:09:14 addons-881427 kubelet[1292]: E0401 18:09:14.019322    1292 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc750c93a5b22a3323f8dc49ffecc7d320dbe4eba6894a49e8a348443bae2bf2\": container with ID starting with fc750c93a5b22a3323f8dc49ffecc7d320dbe4eba6894a49e8a348443bae2bf2 not found: ID does not exist" containerID="fc750c93a5b22a3323f8dc49ffecc7d320dbe4eba6894a49e8a348443bae2bf2"
	Apr 01 18:09:14 addons-881427 kubelet[1292]: I0401 18:09:14.019357    1292 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc750c93a5b22a3323f8dc49ffecc7d320dbe4eba6894a49e8a348443bae2bf2"} err="failed to get container status \"fc750c93a5b22a3323f8dc49ffecc7d320dbe4eba6894a49e8a348443bae2bf2\": rpc error: code = NotFound desc = could not find container \"fc750c93a5b22a3323f8dc49ffecc7d320dbe4eba6894a49e8a348443bae2bf2\": container with ID starting with fc750c93a5b22a3323f8dc49ffecc7d320dbe4eba6894a49e8a348443bae2bf2 not found: ID does not exist"
	Apr 01 18:09:14 addons-881427 kubelet[1292]: I0401 18:09:14.026882    1292 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd4046ef-ce6a-48e2-9d0e-bf3aa98f9156-kube-api-access-2mrl6" (OuterVolumeSpecName: "kube-api-access-2mrl6") pod "dd4046ef-ce6a-48e2-9d0e-bf3aa98f9156" (UID: "dd4046ef-ce6a-48e2-9d0e-bf3aa98f9156"). InnerVolumeSpecName "kube-api-access-2mrl6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 01 18:09:14 addons-881427 kubelet[1292]: I0401 18:09:14.095380    1292 reconciler_common.go:300] "Volume detached for volume \"device-plugin\" (UniqueName: \"kubernetes.io/host-path/dd4046ef-ce6a-48e2-9d0e-bf3aa98f9156-device-plugin\") on node \"addons-881427\" DevicePath \"\""
	Apr 01 18:09:14 addons-881427 kubelet[1292]: I0401 18:09:14.095447    1292 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2mrl6\" (UniqueName: \"kubernetes.io/projected/dd4046ef-ce6a-48e2-9d0e-bf3aa98f9156-kube-api-access-2mrl6\") on node \"addons-881427\" DevicePath \"\""
	Apr 01 18:09:14 addons-881427 kubelet[1292]: I0401 18:09:14.289348    1292 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/task-pv-pod" podStartSLOduration=1.844039347 podStartE2EDuration="2.289308748s" podCreationTimestamp="2024-04-01 18:09:12 +0000 UTC" firstStartedPulling="2024-04-01 18:09:12.69716302 +0000 UTC m=+116.199659376" lastFinishedPulling="2024-04-01 18:09:13.14243242 +0000 UTC m=+116.644928777" observedRunningTime="2024-04-01 18:09:14.002455331 +0000 UTC m=+117.504951706" watchObservedRunningTime="2024-04-01 18:09:14.289308748 +0000 UTC m=+117.791805144"
	Apr 01 18:09:14 addons-881427 kubelet[1292]: I0401 18:09:14.758136    1292 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd4046ef-ce6a-48e2-9d0e-bf3aa98f9156" path="/var/lib/kubelet/pods/dd4046ef-ce6a-48e2-9d0e-bf3aa98f9156/volumes"
	
	
	==> storage-provisioner [d879594bec103909d539395a08207a09bcebce1a01b59adb744f55f6fc38269c] <==
	I0401 18:07:38.113467       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0401 18:07:38.146685       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0401 18:07:38.146834       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0401 18:07:38.172460       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0401 18:07:38.173624       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"164f0e07-77c0-48d6-94da-cd0defe92d84", APIVersion:"v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-881427_71c47355-bb7e-409b-b2c6-6ccfbd39fc62 became leader
	I0401 18:07:38.173680       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-881427_71c47355-bb7e-409b-b2c6-6ccfbd39fc62!
	I0401 18:07:38.277290       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-881427_71c47355-bb7e-409b-b2c6-6ccfbd39fc62!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-881427 -n addons-881427
helpers_test.go:261: (dbg) Run:  kubectl --context addons-881427 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-82sh9 ingress-nginx-admission-patch-wf88x
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CloudSpanner]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-881427 describe pod ingress-nginx-admission-create-82sh9 ingress-nginx-admission-patch-wf88x
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-881427 describe pod ingress-nginx-admission-create-82sh9 ingress-nginx-admission-patch-wf88x: exit status 1 (60.061988ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-82sh9" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-wf88x" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-881427 describe pod ingress-nginx-admission-create-82sh9 ingress-nginx-admission-patch-wf88x: exit status 1
--- FAIL: TestAddons/parallel/CloudSpanner (7.77s)
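The post-mortem helper first lists pods whose phase is not Running and then tries to describe them; the describe call above runs without a namespace flag, so the admission Job pods that actually live in ingress-nginx come back NotFound. A hedged sketch of the same checks done by hand (the context name addons-881427 is taken from the log; the extra Succeeded filter is an illustration, not something helpers_test.go uses):

	kubectl --context addons-881427 get pods -A --field-selector=status.phase!=Running
	# completed Job pods report phase Succeeded, so excluding them as well leaves only genuinely unhealthy pods
	kubectl --context addons-881427 get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded
	kubectl --context addons-881427 -n ingress-nginx describe pod ingress-nginx-admission-create-82sh9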

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.46s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-881427
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-881427: exit status 82 (2m0.480910025s)

                                                
                                                
-- stdout --
	* Stopping node "addons-881427"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-881427" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-881427
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-881427: exit status 11 (21.690996708s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.214:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-881427" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-881427
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-881427: exit status 11 (6.143383475s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.214:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-881427" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-881427
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-881427: exit status 11 (6.142731456s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.214:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-881427" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.46s)
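The failure chain above is: the stop call hits GUEST_STOP_TIMEOUT (exit status 82) after two minutes with the VM still reported as "Running", and every subsequent addon enable/disable then fails its paused check because the guest's SSH port is unreachable ("no route to host"). A hedged reproduction sketch, assuming the addons-881427 profile still exists and using the same binary path shown in the log:

	out/minikube-linux-amd64 stop -p addons-881427 --alsologtostderr; echo "stop exit: $?"   # 82 corresponds to GUEST_STOP_TIMEOUT in this run
	out/minikube-linux-amd64 status -p addons-881427                                         # what state minikube believes the host/kubelet/apiserver are in
	out/minikube-linux-amd64 ssh -p addons-881427 -- true || echo "guest unreachable"        # mirrors the dial tcp ...:22 no-route-to-host error above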

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (5.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-784295 ssh pgrep buildkitd: exit status 1 (228.99216ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 image build -t localhost/my-image:functional-784295 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-784295 image build -t localhost/my-image:functional-784295 testdata/build --alsologtostderr: (3.092626466s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-784295 image build -t localhost/my-image:functional-784295 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b4e50fa860f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-784295
--> 5be8618737c
Successfully tagged localhost/my-image:functional-784295
5be8618737ca9806350a6be20620b048ea02af3c767cf890ed37eaaf23587454
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-784295 image build -t localhost/my-image:functional-784295 testdata/build --alsologtostderr:
I0401 18:19:46.331028   27001 out.go:291] Setting OutFile to fd 1 ...
I0401 18:19:46.331164   27001 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0401 18:19:46.331174   27001 out.go:304] Setting ErrFile to fd 2...
I0401 18:19:46.331179   27001 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0401 18:19:46.331342   27001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
I0401 18:19:46.331869   27001 config.go:182] Loaded profile config "functional-784295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0401 18:19:46.332599   27001 config.go:182] Loaded profile config "functional-784295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0401 18:19:46.333030   27001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0401 18:19:46.333095   27001 main.go:141] libmachine: Launching plugin server for driver kvm2
I0401 18:19:46.352246   27001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33769
I0401 18:19:46.352694   27001 main.go:141] libmachine: () Calling .GetVersion
I0401 18:19:46.353274   27001 main.go:141] libmachine: Using API Version  1
I0401 18:19:46.353301   27001 main.go:141] libmachine: () Calling .SetConfigRaw
I0401 18:19:46.353698   27001 main.go:141] libmachine: () Calling .GetMachineName
I0401 18:19:46.353991   27001 main.go:141] libmachine: (functional-784295) Calling .GetState
I0401 18:19:46.356047   27001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0401 18:19:46.356133   27001 main.go:141] libmachine: Launching plugin server for driver kvm2
I0401 18:19:46.371158   27001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44889
I0401 18:19:46.371601   27001 main.go:141] libmachine: () Calling .GetVersion
I0401 18:19:46.372194   27001 main.go:141] libmachine: Using API Version  1
I0401 18:19:46.372220   27001 main.go:141] libmachine: () Calling .SetConfigRaw
I0401 18:19:46.372611   27001 main.go:141] libmachine: () Calling .GetMachineName
I0401 18:19:46.372821   27001 main.go:141] libmachine: (functional-784295) Calling .DriverName
I0401 18:19:46.373032   27001 ssh_runner.go:195] Run: systemctl --version
I0401 18:19:46.373062   27001 main.go:141] libmachine: (functional-784295) Calling .GetSSHHostname
I0401 18:19:46.376022   27001 main.go:141] libmachine: (functional-784295) DBG | domain functional-784295 has defined MAC address 52:54:00:94:b3:40 in network mk-functional-784295
I0401 18:19:46.376522   27001 main.go:141] libmachine: (functional-784295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:b3:40", ip: ""} in network mk-functional-784295: {Iface:virbr1 ExpiryTime:2024-04-01 19:16:05 +0000 UTC Type:0 Mac:52:54:00:94:b3:40 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:functional-784295 Clientid:01:52:54:00:94:b3:40}
I0401 18:19:46.376602   27001 main.go:141] libmachine: (functional-784295) DBG | domain functional-784295 has defined IP address 192.168.39.229 and MAC address 52:54:00:94:b3:40 in network mk-functional-784295
I0401 18:19:46.376820   27001 main.go:141] libmachine: (functional-784295) Calling .GetSSHPort
I0401 18:19:46.376995   27001 main.go:141] libmachine: (functional-784295) Calling .GetSSHKeyPath
I0401 18:19:46.377158   27001 main.go:141] libmachine: (functional-784295) Calling .GetSSHUsername
I0401 18:19:46.377383   27001 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/functional-784295/id_rsa Username:docker}
I0401 18:19:46.509611   27001 build_images.go:161] Building image from path: /tmp/build.2614486332.tar
I0401 18:19:46.509700   27001 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0401 18:19:46.540671   27001 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2614486332.tar
I0401 18:19:46.592063   27001 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2614486332.tar: stat -c "%s %y" /var/lib/minikube/build/build.2614486332.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2614486332.tar': No such file or directory
I0401 18:19:46.592111   27001 ssh_runner.go:362] scp /tmp/build.2614486332.tar --> /var/lib/minikube/build/build.2614486332.tar (3072 bytes)
I0401 18:19:46.653903   27001 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2614486332
I0401 18:19:46.668627   27001 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2614486332 -xf /var/lib/minikube/build/build.2614486332.tar
I0401 18:19:46.722681   27001 crio.go:315] Building image: /var/lib/minikube/build/build.2614486332
I0401 18:19:46.722790   27001 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-784295 /var/lib/minikube/build/build.2614486332 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0401 18:19:49.295098   27001 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-784295 /var/lib/minikube/build/build.2614486332 --cgroup-manager=cgroupfs: (2.572273469s)
I0401 18:19:49.295168   27001 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2614486332
I0401 18:19:49.322757   27001 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2614486332.tar
I0401 18:19:49.353715   27001 build_images.go:217] Built localhost/my-image:functional-784295 from /tmp/build.2614486332.tar
I0401 18:19:49.353751   27001 build_images.go:133] succeeded building to: functional-784295
I0401 18:19:49.353758   27001 build_images.go:134] failed building to: 
I0401 18:19:49.353784   27001 main.go:141] libmachine: Making call to close driver server
I0401 18:19:49.353798   27001 main.go:141] libmachine: (functional-784295) Calling .Close
I0401 18:19:49.354074   27001 main.go:141] libmachine: Successfully made call to close driver server
I0401 18:19:49.354092   27001 main.go:141] libmachine: Making call to close connection to plugin binary
I0401 18:19:49.354101   27001 main.go:141] libmachine: Making call to close driver server
I0401 18:19:49.354108   27001 main.go:141] libmachine: (functional-784295) Calling .Close
I0401 18:19:49.354340   27001 main.go:141] libmachine: (functional-784295) DBG | Closing plugin on server side
I0401 18:19:49.354372   27001 main.go:141] libmachine: Successfully made call to close driver server
I0401 18:19:49.354380   27001 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-784295 image ls: (2.430589603s)
functional_test.go:442: expected "localhost/my-image:functional-784295" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (5.75s)
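The build itself succeeds (podman reports "Successfully tagged localhost/my-image:functional-784295"), but the follow-up image ls does not show the tag, so the assertion at functional_test.go:442 fails. A hedged sketch of how one might re-check the image on both sides, minikube's listing and the runtime's own store, assuming the functional-784295 profile is still up:

	out/minikube-linux-amd64 -p functional-784295 image build -t localhost/my-image:functional-784295 testdata/build --alsologtostderr
	out/minikube-linux-amd64 -p functional-784295 image ls | grep my-image                       # the listing the test asserts on
	out/minikube-linux-amd64 -p functional-784295 ssh -- sudo crictl images | grep my-image      # the CRI-O view inside the guest
	out/minikube-linux-amd64 -p functional-784295 ssh -- sudo podman images localhost/my-image   # podman performed the build, so its store should hold the tag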

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (142.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 node stop m02 -v=7 --alsologtostderr
E0401 18:24:57.819829   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
E0401 18:25:38.780999   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-293078 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.495563184s)

                                                
                                                
-- stdout --
	* Stopping node "ha-293078-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 18:24:40.961140   31003 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:24:40.961283   31003 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:24:40.961294   31003 out.go:304] Setting ErrFile to fd 2...
	I0401 18:24:40.961297   31003 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:24:40.961497   31003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:24:40.961769   31003 mustload.go:65] Loading cluster: ha-293078
	I0401 18:24:40.962179   31003 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:24:40.962204   31003 stop.go:39] StopHost: ha-293078-m02
	I0401 18:24:40.962563   31003 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:24:40.962612   31003 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:24:40.978188   31003 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36925
	I0401 18:24:40.978814   31003 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:24:40.979494   31003 main.go:141] libmachine: Using API Version  1
	I0401 18:24:40.979526   31003 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:24:40.980088   31003 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:24:40.982507   31003 out.go:177] * Stopping node "ha-293078-m02"  ...
	I0401 18:24:40.983722   31003 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0401 18:24:40.983755   31003 main.go:141] libmachine: (ha-293078-m02) Calling .DriverName
	I0401 18:24:40.983988   31003 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0401 18:24:40.984020   31003 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:24:40.987051   31003 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:24:40.987512   31003 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:24:40.987541   31003 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:24:40.987675   31003 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:24:40.987845   31003 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:24:40.988006   31003 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:24:40.988125   31003 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/id_rsa Username:docker}
	I0401 18:24:41.080708   31003 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0401 18:24:41.138345   31003 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0401 18:24:41.195667   31003 main.go:141] libmachine: Stopping "ha-293078-m02"...
	I0401 18:24:41.195721   31003 main.go:141] libmachine: (ha-293078-m02) Calling .GetState
	I0401 18:24:41.197318   31003 main.go:141] libmachine: (ha-293078-m02) Calling .Stop
	I0401 18:24:41.201073   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 0/120
	I0401 18:24:42.202274   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 1/120
	I0401 18:24:43.204341   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 2/120
	I0401 18:24:44.206520   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 3/120
	I0401 18:24:45.208117   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 4/120
	I0401 18:24:46.209776   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 5/120
	I0401 18:24:47.212223   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 6/120
	I0401 18:24:48.213592   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 7/120
	I0401 18:24:49.214998   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 8/120
	I0401 18:24:50.216335   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 9/120
	I0401 18:24:51.218451   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 10/120
	I0401 18:24:52.220162   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 11/120
	I0401 18:24:53.221598   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 12/120
	I0401 18:24:54.223092   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 13/120
	I0401 18:24:55.224471   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 14/120
	I0401 18:24:56.226504   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 15/120
	I0401 18:24:57.227945   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 16/120
	I0401 18:24:58.229309   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 17/120
	I0401 18:24:59.230747   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 18/120
	I0401 18:25:00.232196   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 19/120
	I0401 18:25:01.234232   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 20/120
	I0401 18:25:02.236409   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 21/120
	I0401 18:25:03.237696   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 22/120
	I0401 18:25:04.239727   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 23/120
	I0401 18:25:05.241611   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 24/120
	I0401 18:25:06.243253   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 25/120
	I0401 18:25:07.244811   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 26/120
	I0401 18:25:08.246227   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 27/120
	I0401 18:25:09.248018   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 28/120
	I0401 18:25:10.249363   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 29/120
	I0401 18:25:11.251409   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 30/120
	I0401 18:25:12.253464   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 31/120
	I0401 18:25:13.254553   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 32/120
	I0401 18:25:14.255818   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 33/120
	I0401 18:25:15.257002   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 34/120
	I0401 18:25:16.259081   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 35/120
	I0401 18:25:17.260313   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 36/120
	I0401 18:25:18.261654   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 37/120
	I0401 18:25:19.263203   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 38/120
	I0401 18:25:20.264427   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 39/120
	I0401 18:25:21.266244   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 40/120
	I0401 18:25:22.268084   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 41/120
	I0401 18:25:23.269280   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 42/120
	I0401 18:25:24.270646   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 43/120
	I0401 18:25:25.271932   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 44/120
	I0401 18:25:26.273899   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 45/120
	I0401 18:25:27.275431   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 46/120
	I0401 18:25:28.276783   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 47/120
	I0401 18:25:29.278180   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 48/120
	I0401 18:25:30.280140   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 49/120
	I0401 18:25:31.282477   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 50/120
	I0401 18:25:32.284081   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 51/120
	I0401 18:25:33.286470   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 52/120
	I0401 18:25:34.288434   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 53/120
	I0401 18:25:35.289683   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 54/120
	I0401 18:25:36.290980   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 55/120
	I0401 18:25:37.292906   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 56/120
	I0401 18:25:38.294225   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 57/120
	I0401 18:25:39.296113   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 58/120
	I0401 18:25:40.297425   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 59/120
	I0401 18:25:41.299361   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 60/120
	I0401 18:25:42.300608   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 61/120
	I0401 18:25:43.302034   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 62/120
	I0401 18:25:44.304250   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 63/120
	I0401 18:25:45.305554   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 64/120
	I0401 18:25:46.307445   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 65/120
	I0401 18:25:47.308794   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 66/120
	I0401 18:25:48.310732   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 67/120
	I0401 18:25:49.312568   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 68/120
	I0401 18:25:50.314021   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 69/120
	I0401 18:25:51.315865   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 70/120
	I0401 18:25:52.317348   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 71/120
	I0401 18:25:53.318727   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 72/120
	I0401 18:25:54.320947   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 73/120
	I0401 18:25:55.322176   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 74/120
	I0401 18:25:56.324039   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 75/120
	I0401 18:25:57.325461   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 76/120
	I0401 18:25:58.326742   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 77/120
	I0401 18:25:59.328083   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 78/120
	I0401 18:26:00.329802   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 79/120
	I0401 18:26:01.332362   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 80/120
	I0401 18:26:02.334755   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 81/120
	I0401 18:26:03.336832   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 82/120
	I0401 18:26:04.338797   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 83/120
	I0401 18:26:05.339997   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 84/120
	I0401 18:26:06.341793   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 85/120
	I0401 18:26:07.343944   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 86/120
	I0401 18:26:08.345075   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 87/120
	I0401 18:26:09.346370   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 88/120
	I0401 18:26:10.348003   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 89/120
	I0401 18:26:11.350203   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 90/120
	I0401 18:26:12.352228   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 91/120
	I0401 18:26:13.353602   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 92/120
	I0401 18:26:14.354936   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 93/120
	I0401 18:26:15.356550   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 94/120
	I0401 18:26:16.358339   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 95/120
	I0401 18:26:17.360120   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 96/120
	I0401 18:26:18.361420   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 97/120
	I0401 18:26:19.362721   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 98/120
	I0401 18:26:20.364683   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 99/120
	I0401 18:26:21.366656   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 100/120
	I0401 18:26:22.368224   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 101/120
	I0401 18:26:23.370535   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 102/120
	I0401 18:26:24.372055   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 103/120
	I0401 18:26:25.373723   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 104/120
	I0401 18:26:26.375114   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 105/120
	I0401 18:26:27.376499   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 106/120
	I0401 18:26:28.378159   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 107/120
	I0401 18:26:29.380317   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 108/120
	I0401 18:26:30.381674   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 109/120
	I0401 18:26:31.383867   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 110/120
	I0401 18:26:32.386344   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 111/120
	I0401 18:26:33.388193   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 112/120
	I0401 18:26:34.389603   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 113/120
	I0401 18:26:35.391166   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 114/120
	I0401 18:26:36.393312   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 115/120
	I0401 18:26:37.394951   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 116/120
	I0401 18:26:38.396315   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 117/120
	I0401 18:26:39.397530   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 118/120
	I0401 18:26:40.398884   31003 main.go:141] libmachine: (ha-293078-m02) Waiting for machine to stop 119/120
	I0401 18:26:41.399572   31003 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0401 18:26:41.399709   31003 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-293078 node stop m02 -v=7 --alsologtostderr": exit status 30
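For context, the 120 "Waiting for machine to stop" lines above are a bounded stop-and-wait poll: the driver requests a shutdown, re-checks the VM state once per second, and gives up once the attempt budget is exhausted. Below is a minimal, hypothetical Go sketch of that pattern; the machine interface, waitForStop, and stubVM names are illustrative assumptions, not minikube's actual libmachine API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// machine is a hypothetical stand-in for a driver-backed VM handle.
type machine interface {
	Stop() error            // request a shutdown
	State() (string, error) // e.g. "Running", "Stopped"
}

// waitForStop requests a stop and then polls once per second, up to
// maxAttempts times, mirroring the "Waiting for machine to stop N/120"
// lines above. If the VM never leaves "Running", it gives up with an error.
func waitForStop(m machine, maxAttempts int) error {
	if err := m.Stop(); err != nil {
		return fmt.Errorf("stop request failed: %w", err)
	}
	for i := 0; i < maxAttempts; i++ {
		state, err := m.State()
		if err != nil {
			return fmt.Errorf("checking state: %w", err)
		}
		if state != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

// stubVM never shuts down, reproducing the failure mode seen in the log.
type stubVM struct{}

func (stubVM) Stop() error            { return nil }
func (stubVM) State() (string, error) { return "Running", nil }

func main() {
	if err := waitForStop(stubVM{}, 5); err != nil {
		fmt.Println("stop err:", err)
	}
}

Read this way, the failure is a timeout rather than a crash: the guest never left the "Running" state within its 120 polls, so the CLI surfaced the error above and the test recorded exit status 30.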
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr: exit status 3 (19.287750814s)

                                                
                                                
-- stdout --
	ha-293078
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-293078-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-293078-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-293078-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 18:26:41.456712   31328 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:26:41.456969   31328 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:26:41.456979   31328 out.go:304] Setting ErrFile to fd 2...
	I0401 18:26:41.456983   31328 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:26:41.457177   31328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:26:41.457344   31328 out.go:298] Setting JSON to false
	I0401 18:26:41.457367   31328 mustload.go:65] Loading cluster: ha-293078
	I0401 18:26:41.457440   31328 notify.go:220] Checking for updates...
	I0401 18:26:41.457801   31328 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:26:41.457822   31328 status.go:255] checking status of ha-293078 ...
	I0401 18:26:41.458234   31328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:26:41.458292   31328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:26:41.475491   31328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41929
	I0401 18:26:41.475859   31328 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:26:41.476456   31328 main.go:141] libmachine: Using API Version  1
	I0401 18:26:41.476473   31328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:26:41.476881   31328 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:26:41.477153   31328 main.go:141] libmachine: (ha-293078) Calling .GetState
	I0401 18:26:41.479016   31328 status.go:330] ha-293078 host status = "Running" (err=<nil>)
	I0401 18:26:41.479033   31328 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:26:41.479476   31328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:26:41.479533   31328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:26:41.494862   31328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35581
	I0401 18:26:41.495330   31328 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:26:41.495788   31328 main.go:141] libmachine: Using API Version  1
	I0401 18:26:41.495810   31328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:26:41.496110   31328 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:26:41.496338   31328 main.go:141] libmachine: (ha-293078) Calling .GetIP
	I0401 18:26:41.499334   31328 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:26:41.499772   31328 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:26:41.499793   31328 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:26:41.499928   31328 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:26:41.500303   31328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:26:41.500366   31328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:26:41.514557   31328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35345
	I0401 18:26:41.514962   31328 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:26:41.515397   31328 main.go:141] libmachine: Using API Version  1
	I0401 18:26:41.515421   31328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:26:41.515741   31328 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:26:41.515934   31328 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:26:41.516159   31328 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:26:41.516185   31328 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:26:41.519206   31328 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:26:41.519664   31328 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:26:41.519700   31328 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:26:41.519839   31328 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:26:41.519993   31328 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:26:41.520133   31328 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:26:41.520298   31328 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:26:41.612009   31328 ssh_runner.go:195] Run: systemctl --version
	I0401 18:26:41.619946   31328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:26:41.639534   31328 kubeconfig.go:125] found "ha-293078" server: "https://192.168.39.254:8443"
	I0401 18:26:41.639574   31328 api_server.go:166] Checking apiserver status ...
	I0401 18:26:41.639619   31328 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:26:41.656899   31328 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup
	W0401 18:26:41.669482   31328 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 18:26:41.669527   31328 ssh_runner.go:195] Run: ls
	I0401 18:26:41.675070   31328 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0401 18:26:41.679645   31328 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0401 18:26:41.679670   31328 status.go:422] ha-293078 apiserver status = Running (err=<nil>)
	I0401 18:26:41.679682   31328 status.go:257] ha-293078 status: &{Name:ha-293078 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:26:41.679713   31328 status.go:255] checking status of ha-293078-m02 ...
	I0401 18:26:41.679995   31328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:26:41.680032   31328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:26:41.696640   31328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44985
	I0401 18:26:41.697107   31328 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:26:41.697596   31328 main.go:141] libmachine: Using API Version  1
	I0401 18:26:41.697615   31328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:26:41.697913   31328 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:26:41.698092   31328 main.go:141] libmachine: (ha-293078-m02) Calling .GetState
	I0401 18:26:41.699455   31328 status.go:330] ha-293078-m02 host status = "Running" (err=<nil>)
	I0401 18:26:41.699477   31328 host.go:66] Checking if "ha-293078-m02" exists ...
	I0401 18:26:41.699775   31328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:26:41.699815   31328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:26:41.714053   31328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38333
	I0401 18:26:41.714411   31328 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:26:41.714858   31328 main.go:141] libmachine: Using API Version  1
	I0401 18:26:41.714879   31328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:26:41.715162   31328 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:26:41.715376   31328 main.go:141] libmachine: (ha-293078-m02) Calling .GetIP
	I0401 18:26:41.718170   31328 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:26:41.718692   31328 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:26:41.718713   31328 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:26:41.718915   31328 host.go:66] Checking if "ha-293078-m02" exists ...
	I0401 18:26:41.719344   31328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:26:41.719386   31328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:26:41.733853   31328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38419
	I0401 18:26:41.734364   31328 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:26:41.734791   31328 main.go:141] libmachine: Using API Version  1
	I0401 18:26:41.734816   31328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:26:41.735144   31328 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:26:41.735320   31328 main.go:141] libmachine: (ha-293078-m02) Calling .DriverName
	I0401 18:26:41.735496   31328 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:26:41.735517   31328 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:26:41.738552   31328 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:26:41.738983   31328 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:26:41.739017   31328 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:26:41.739157   31328 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:26:41.739366   31328 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:26:41.739527   31328 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:26:41.739684   31328 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/id_rsa Username:docker}
	W0401 18:27:00.289797   31328 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.161:22: connect: no route to host
	W0401 18:27:00.289871   31328 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	E0401 18:27:00.289910   31328 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	I0401 18:27:00.289926   31328 status.go:257] ha-293078-m02 status: &{Name:ha-293078-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0401 18:27:00.289964   31328 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	I0401 18:27:00.289972   31328 status.go:255] checking status of ha-293078-m03 ...
	I0401 18:27:00.290280   31328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:00.290348   31328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:00.305508   31328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36133
	I0401 18:27:00.306087   31328 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:00.306636   31328 main.go:141] libmachine: Using API Version  1
	I0401 18:27:00.306660   31328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:00.307000   31328 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:00.307198   31328 main.go:141] libmachine: (ha-293078-m03) Calling .GetState
	I0401 18:27:00.308879   31328 status.go:330] ha-293078-m03 host status = "Running" (err=<nil>)
	I0401 18:27:00.308893   31328 host.go:66] Checking if "ha-293078-m03" exists ...
	I0401 18:27:00.309179   31328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:00.309211   31328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:00.324128   31328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43605
	I0401 18:27:00.324550   31328 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:00.324994   31328 main.go:141] libmachine: Using API Version  1
	I0401 18:27:00.325017   31328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:00.325325   31328 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:00.325521   31328 main.go:141] libmachine: (ha-293078-m03) Calling .GetIP
	I0401 18:27:00.328400   31328 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:00.328782   31328 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:27:00.328813   31328 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:00.328964   31328 host.go:66] Checking if "ha-293078-m03" exists ...
	I0401 18:27:00.329355   31328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:00.329401   31328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:00.343905   31328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37101
	I0401 18:27:00.344354   31328 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:00.344819   31328 main.go:141] libmachine: Using API Version  1
	I0401 18:27:00.344843   31328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:00.345170   31328 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:00.345359   31328 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:27:00.345565   31328 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:00.345586   31328 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:27:00.348484   31328 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:00.348878   31328 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:27:00.348903   31328 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:00.349056   31328 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:27:00.349219   31328 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:27:00.349395   31328 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:27:00.349568   31328 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa Username:docker}
	I0401 18:27:00.448624   31328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:27:00.469002   31328 kubeconfig.go:125] found "ha-293078" server: "https://192.168.39.254:8443"
	I0401 18:27:00.469027   31328 api_server.go:166] Checking apiserver status ...
	I0401 18:27:00.469080   31328 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:27:00.491530   31328 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup
	W0401 18:27:00.505823   31328 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 18:27:00.505883   31328 ssh_runner.go:195] Run: ls
	I0401 18:27:00.512309   31328 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0401 18:27:00.516676   31328 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0401 18:27:00.516701   31328 status.go:422] ha-293078-m03 apiserver status = Running (err=<nil>)
	I0401 18:27:00.516709   31328 status.go:257] ha-293078-m03 status: &{Name:ha-293078-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:27:00.516722   31328 status.go:255] checking status of ha-293078-m04 ...
	I0401 18:27:00.517014   31328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:00.517047   31328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:00.532071   31328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32779
	I0401 18:27:00.532493   31328 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:00.532923   31328 main.go:141] libmachine: Using API Version  1
	I0401 18:27:00.532947   31328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:00.533270   31328 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:00.533470   31328 main.go:141] libmachine: (ha-293078-m04) Calling .GetState
	I0401 18:27:00.535450   31328 status.go:330] ha-293078-m04 host status = "Running" (err=<nil>)
	I0401 18:27:00.535470   31328 host.go:66] Checking if "ha-293078-m04" exists ...
	I0401 18:27:00.535912   31328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:00.535956   31328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:00.550748   31328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40497
	I0401 18:27:00.551174   31328 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:00.551691   31328 main.go:141] libmachine: Using API Version  1
	I0401 18:27:00.551711   31328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:00.552105   31328 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:00.552357   31328 main.go:141] libmachine: (ha-293078-m04) Calling .GetIP
	I0401 18:27:00.555264   31328 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:00.555666   31328 main.go:141] libmachine: (ha-293078-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:ec:c5", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:23:56 +0000 UTC Type:0 Mac:52:54:00:b5:ec:c5 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-293078-m04 Clientid:01:52:54:00:b5:ec:c5}
	I0401 18:27:00.555693   31328 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:00.555956   31328 host.go:66] Checking if "ha-293078-m04" exists ...
	I0401 18:27:00.556456   31328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:00.556504   31328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:00.571284   31328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39401
	I0401 18:27:00.571674   31328 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:00.572144   31328 main.go:141] libmachine: Using API Version  1
	I0401 18:27:00.572163   31328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:00.572479   31328 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:00.572676   31328 main.go:141] libmachine: (ha-293078-m04) Calling .DriverName
	I0401 18:27:00.572857   31328 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:00.572875   31328 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHHostname
	I0401 18:27:00.575581   31328 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:00.576026   31328 main.go:141] libmachine: (ha-293078-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:ec:c5", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:23:56 +0000 UTC Type:0 Mac:52:54:00:b5:ec:c5 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-293078-m04 Clientid:01:52:54:00:b5:ec:c5}
	I0401 18:27:00.576045   31328 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:00.576189   31328 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHPort
	I0401 18:27:00.576342   31328 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHKeyPath
	I0401 18:27:00.576509   31328 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHUsername
	I0401 18:27:00.576644   31328 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m04/id_rsa Username:docker}
	I0401 18:27:00.665457   31328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:27:00.687019   31328 status.go:257] ha-293078-m04 status: &{Name:ha-293078-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr" : exit status 3
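The Error/Nonexistent row for ha-293078-m02 in the status output above follows directly from the SSH dial failing with "no route to host": once the node cannot be reached, no further checks are attempted and the node is reported as down, which in turn makes the whole status command exit non-zero (status 3 here). A rough Go sketch of that degrade-on-unreachable behaviour follows; NodeStatus and probeNode are hypothetical names for illustration, not minikube's status code.

package main

import (
	"fmt"
	"net"
	"time"
)

// NodeStatus is an illustrative summary of one node's reported health.
type NodeStatus struct {
	Name      string
	Host      string
	Kubelet   string
	APIServer string
}

// probeNode dials the node's SSH port; if the dial fails (VM halted or
// unreachable), the node is reported as Error/Nonexistent without running
// any further checks over SSH.
func probeNode(name, addr string) NodeStatus {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return NodeStatus{Name: name, Host: "Error", Kubelet: "Nonexistent", APIServer: "Nonexistent"}
	}
	conn.Close()
	// Reachable: a real probe would now run kubelet and apiserver checks
	// over SSH; here we simply mark everything as Running.
	return NodeStatus{Name: name, Host: "Running", Kubelet: "Running", APIServer: "Running"}
}

func main() {
	fmt.Printf("%+v\n", probeNode("ha-293078-m02", "192.168.39.161:22"))
}

The other control-plane nodes and the worker still report Running, but the single unreachable node is enough to fail the status check, which is what the post-mortem below investigates.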
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-293078 -n ha-293078
E0401 18:27:00.701531   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-293078 logs -n 25: (1.702659815s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| cp      | ha-293078 cp ha-293078-m03:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3967030531/001/cp-test_ha-293078-m03.txt |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-293078 cp ha-293078-m03:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078:/home/docker/cp-test_ha-293078-m03_ha-293078.txt                       |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n ha-293078 sudo cat                                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /home/docker/cp-test_ha-293078-m03_ha-293078.txt                                 |           |         |                |                     |                     |
	| cp      | ha-293078 cp ha-293078-m03:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m02:/home/docker/cp-test_ha-293078-m03_ha-293078-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n ha-293078-m02 sudo cat                                          | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /home/docker/cp-test_ha-293078-m03_ha-293078-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-293078 cp ha-293078-m03:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04:/home/docker/cp-test_ha-293078-m03_ha-293078-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n ha-293078-m04 sudo cat                                          | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /home/docker/cp-test_ha-293078-m03_ha-293078-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-293078 cp testdata/cp-test.txt                                                | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-293078 cp ha-293078-m04:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3967030531/001/cp-test_ha-293078-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-293078 cp ha-293078-m04:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078:/home/docker/cp-test_ha-293078-m04_ha-293078.txt                       |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n ha-293078 sudo cat                                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /home/docker/cp-test_ha-293078-m04_ha-293078.txt                                 |           |         |                |                     |                     |
	| cp      | ha-293078 cp ha-293078-m04:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m02:/home/docker/cp-test_ha-293078-m04_ha-293078-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n ha-293078-m02 sudo cat                                          | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /home/docker/cp-test_ha-293078-m04_ha-293078-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-293078 cp ha-293078-m04:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m03:/home/docker/cp-test_ha-293078-m04_ha-293078-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n ha-293078-m03 sudo cat                                          | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /home/docker/cp-test_ha-293078-m04_ha-293078-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-293078 node stop m02 -v=7                                                     | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 18:20:08
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 18:20:08.169597   27284 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:20:08.169727   27284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:20:08.169736   27284 out.go:304] Setting ErrFile to fd 2...
	I0401 18:20:08.169741   27284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:20:08.169959   27284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:20:08.170658   27284 out.go:298] Setting JSON to false
	I0401 18:20:08.171489   27284 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3760,"bootTime":1711991848,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 18:20:08.171542   27284 start.go:139] virtualization: kvm guest
	I0401 18:20:08.173669   27284 out.go:177] * [ha-293078] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 18:20:08.175086   27284 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 18:20:08.175120   27284 notify.go:220] Checking for updates...
	I0401 18:20:08.176449   27284 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 18:20:08.177913   27284 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 18:20:08.179348   27284 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:20:08.180659   27284 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 18:20:08.182041   27284 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 18:20:08.183488   27284 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 18:20:08.217194   27284 out.go:177] * Using the kvm2 driver based on user configuration
	I0401 18:20:08.218651   27284 start.go:297] selected driver: kvm2
	I0401 18:20:08.218676   27284 start.go:901] validating driver "kvm2" against <nil>
	I0401 18:20:08.218689   27284 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 18:20:08.219402   27284 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 18:20:08.219517   27284 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 18:20:08.233744   27284 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 18:20:08.233777   27284 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 18:20:08.234002   27284 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 18:20:08.234071   27284 cni.go:84] Creating CNI manager for ""
	I0401 18:20:08.234087   27284 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0401 18:20:08.234099   27284 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 18:20:08.234162   27284 start.go:340] cluster config:
	{Name:ha-293078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-293078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:20:08.234288   27284 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 18:20:08.236122   27284 out.go:177] * Starting "ha-293078" primary control-plane node in "ha-293078" cluster
	I0401 18:20:08.237614   27284 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 18:20:08.237656   27284 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0401 18:20:08.237684   27284 cache.go:56] Caching tarball of preloaded images
	I0401 18:20:08.237772   27284 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 18:20:08.237787   27284 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0401 18:20:08.238046   27284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/config.json ...
	I0401 18:20:08.238066   27284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/config.json: {Name:mke97edf58f64b766cee43b56480c9c081c5d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:20:08.238207   27284 start.go:360] acquireMachinesLock for ha-293078: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 18:20:08.238243   27284 start.go:364] duration metric: took 19.532µs to acquireMachinesLock for "ha-293078"
	I0401 18:20:08.238265   27284 start.go:93] Provisioning new machine with config: &{Name:ha-293078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-293078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 18:20:08.238318   27284 start.go:125] createHost starting for "" (driver="kvm2")
	I0401 18:20:08.239952   27284 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0401 18:20:08.240088   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:20:08.240162   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:20:08.253548   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42945
	I0401 18:20:08.253976   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:20:08.254480   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:20:08.254510   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:20:08.254853   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:20:08.255023   27284 main.go:141] libmachine: (ha-293078) Calling .GetMachineName
	I0401 18:20:08.255150   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:20:08.255269   27284 start.go:159] libmachine.API.Create for "ha-293078" (driver="kvm2")
	I0401 18:20:08.255296   27284 client.go:168] LocalClient.Create starting
	I0401 18:20:08.255326   27284 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem
	I0401 18:20:08.255355   27284 main.go:141] libmachine: Decoding PEM data...
	I0401 18:20:08.255373   27284 main.go:141] libmachine: Parsing certificate...
	I0401 18:20:08.255426   27284 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem
	I0401 18:20:08.255451   27284 main.go:141] libmachine: Decoding PEM data...
	I0401 18:20:08.255466   27284 main.go:141] libmachine: Parsing certificate...
	I0401 18:20:08.255483   27284 main.go:141] libmachine: Running pre-create checks...
	I0401 18:20:08.255492   27284 main.go:141] libmachine: (ha-293078) Calling .PreCreateCheck
	I0401 18:20:08.255778   27284 main.go:141] libmachine: (ha-293078) Calling .GetConfigRaw
	I0401 18:20:08.256108   27284 main.go:141] libmachine: Creating machine...
	I0401 18:20:08.256122   27284 main.go:141] libmachine: (ha-293078) Calling .Create
	I0401 18:20:08.256238   27284 main.go:141] libmachine: (ha-293078) Creating KVM machine...
	I0401 18:20:08.257432   27284 main.go:141] libmachine: (ha-293078) DBG | found existing default KVM network
	I0401 18:20:08.258083   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:08.257965   27307 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d980}
	I0401 18:20:08.258106   27284 main.go:141] libmachine: (ha-293078) DBG | created network xml: 
	I0401 18:20:08.258115   27284 main.go:141] libmachine: (ha-293078) DBG | <network>
	I0401 18:20:08.258122   27284 main.go:141] libmachine: (ha-293078) DBG |   <name>mk-ha-293078</name>
	I0401 18:20:08.258130   27284 main.go:141] libmachine: (ha-293078) DBG |   <dns enable='no'/>
	I0401 18:20:08.258135   27284 main.go:141] libmachine: (ha-293078) DBG |   
	I0401 18:20:08.258143   27284 main.go:141] libmachine: (ha-293078) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0401 18:20:08.258150   27284 main.go:141] libmachine: (ha-293078) DBG |     <dhcp>
	I0401 18:20:08.258164   27284 main.go:141] libmachine: (ha-293078) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0401 18:20:08.258174   27284 main.go:141] libmachine: (ha-293078) DBG |     </dhcp>
	I0401 18:20:08.258187   27284 main.go:141] libmachine: (ha-293078) DBG |   </ip>
	I0401 18:20:08.258196   27284 main.go:141] libmachine: (ha-293078) DBG |   
	I0401 18:20:08.258202   27284 main.go:141] libmachine: (ha-293078) DBG | </network>
	I0401 18:20:08.258210   27284 main.go:141] libmachine: (ha-293078) DBG | 
	I0401 18:20:08.262992   27284 main.go:141] libmachine: (ha-293078) DBG | trying to create private KVM network mk-ha-293078 192.168.39.0/24...
	I0401 18:20:08.323174   27284 main.go:141] libmachine: (ha-293078) DBG | private KVM network mk-ha-293078 192.168.39.0/24 created
	I0401 18:20:08.323209   27284 main.go:141] libmachine: (ha-293078) Setting up store path in /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078 ...
	I0401 18:20:08.323224   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:08.323141   27307 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:20:08.323244   27284 main.go:141] libmachine: (ha-293078) Building disk image from file:///home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso
	I0401 18:20:08.323281   27284 main.go:141] libmachine: (ha-293078) Downloading /home/jenkins/minikube-integration/18233-10493/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0401 18:20:08.545463   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:08.545359   27307 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa...
	I0401 18:20:08.619955   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:08.619862   27307 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/ha-293078.rawdisk...
	I0401 18:20:08.619974   27284 main.go:141] libmachine: (ha-293078) DBG | Writing magic tar header
	I0401 18:20:08.619985   27284 main.go:141] libmachine: (ha-293078) DBG | Writing SSH key tar header
	I0401 18:20:08.620173   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:08.620120   27307 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078 ...
	I0401 18:20:08.620277   27284 main.go:141] libmachine: (ha-293078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078
	I0401 18:20:08.620308   27284 main.go:141] libmachine: (ha-293078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube/machines
	I0401 18:20:08.620325   27284 main.go:141] libmachine: (ha-293078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:20:08.620338   27284 main.go:141] libmachine: (ha-293078) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078 (perms=drwx------)
	I0401 18:20:08.620352   27284 main.go:141] libmachine: (ha-293078) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube/machines (perms=drwxr-xr-x)
	I0401 18:20:08.620363   27284 main.go:141] libmachine: (ha-293078) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube (perms=drwxr-xr-x)
	I0401 18:20:08.620380   27284 main.go:141] libmachine: (ha-293078) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493 (perms=drwxrwxr-x)
	I0401 18:20:08.620392   27284 main.go:141] libmachine: (ha-293078) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0401 18:20:08.620418   27284 main.go:141] libmachine: (ha-293078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493
	I0401 18:20:08.620436   27284 main.go:141] libmachine: (ha-293078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0401 18:20:08.620448   27284 main.go:141] libmachine: (ha-293078) DBG | Checking permissions on dir: /home/jenkins
	I0401 18:20:08.620458   27284 main.go:141] libmachine: (ha-293078) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0401 18:20:08.620470   27284 main.go:141] libmachine: (ha-293078) DBG | Checking permissions on dir: /home
	I0401 18:20:08.620484   27284 main.go:141] libmachine: (ha-293078) DBG | Skipping /home - not owner
	I0401 18:20:08.620494   27284 main.go:141] libmachine: (ha-293078) Creating domain...
	I0401 18:20:08.621488   27284 main.go:141] libmachine: (ha-293078) define libvirt domain using xml: 
	I0401 18:20:08.621518   27284 main.go:141] libmachine: (ha-293078) <domain type='kvm'>
	I0401 18:20:08.621529   27284 main.go:141] libmachine: (ha-293078)   <name>ha-293078</name>
	I0401 18:20:08.621542   27284 main.go:141] libmachine: (ha-293078)   <memory unit='MiB'>2200</memory>
	I0401 18:20:08.621554   27284 main.go:141] libmachine: (ha-293078)   <vcpu>2</vcpu>
	I0401 18:20:08.621561   27284 main.go:141] libmachine: (ha-293078)   <features>
	I0401 18:20:08.621569   27284 main.go:141] libmachine: (ha-293078)     <acpi/>
	I0401 18:20:08.621578   27284 main.go:141] libmachine: (ha-293078)     <apic/>
	I0401 18:20:08.621587   27284 main.go:141] libmachine: (ha-293078)     <pae/>
	I0401 18:20:08.621596   27284 main.go:141] libmachine: (ha-293078)     
	I0401 18:20:08.621606   27284 main.go:141] libmachine: (ha-293078)   </features>
	I0401 18:20:08.621616   27284 main.go:141] libmachine: (ha-293078)   <cpu mode='host-passthrough'>
	I0401 18:20:08.621639   27284 main.go:141] libmachine: (ha-293078)   
	I0401 18:20:08.621677   27284 main.go:141] libmachine: (ha-293078)   </cpu>
	I0401 18:20:08.621687   27284 main.go:141] libmachine: (ha-293078)   <os>
	I0401 18:20:08.621695   27284 main.go:141] libmachine: (ha-293078)     <type>hvm</type>
	I0401 18:20:08.621703   27284 main.go:141] libmachine: (ha-293078)     <boot dev='cdrom'/>
	I0401 18:20:08.621710   27284 main.go:141] libmachine: (ha-293078)     <boot dev='hd'/>
	I0401 18:20:08.621718   27284 main.go:141] libmachine: (ha-293078)     <bootmenu enable='no'/>
	I0401 18:20:08.621723   27284 main.go:141] libmachine: (ha-293078)   </os>
	I0401 18:20:08.621729   27284 main.go:141] libmachine: (ha-293078)   <devices>
	I0401 18:20:08.621734   27284 main.go:141] libmachine: (ha-293078)     <disk type='file' device='cdrom'>
	I0401 18:20:08.621745   27284 main.go:141] libmachine: (ha-293078)       <source file='/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/boot2docker.iso'/>
	I0401 18:20:08.621750   27284 main.go:141] libmachine: (ha-293078)       <target dev='hdc' bus='scsi'/>
	I0401 18:20:08.621774   27284 main.go:141] libmachine: (ha-293078)       <readonly/>
	I0401 18:20:08.621800   27284 main.go:141] libmachine: (ha-293078)     </disk>
	I0401 18:20:08.621822   27284 main.go:141] libmachine: (ha-293078)     <disk type='file' device='disk'>
	I0401 18:20:08.621833   27284 main.go:141] libmachine: (ha-293078)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0401 18:20:08.621847   27284 main.go:141] libmachine: (ha-293078)       <source file='/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/ha-293078.rawdisk'/>
	I0401 18:20:08.621855   27284 main.go:141] libmachine: (ha-293078)       <target dev='hda' bus='virtio'/>
	I0401 18:20:08.621863   27284 main.go:141] libmachine: (ha-293078)     </disk>
	I0401 18:20:08.621876   27284 main.go:141] libmachine: (ha-293078)     <interface type='network'>
	I0401 18:20:08.621908   27284 main.go:141] libmachine: (ha-293078)       <source network='mk-ha-293078'/>
	I0401 18:20:08.621933   27284 main.go:141] libmachine: (ha-293078)       <model type='virtio'/>
	I0401 18:20:08.621948   27284 main.go:141] libmachine: (ha-293078)     </interface>
	I0401 18:20:08.621962   27284 main.go:141] libmachine: (ha-293078)     <interface type='network'>
	I0401 18:20:08.621977   27284 main.go:141] libmachine: (ha-293078)       <source network='default'/>
	I0401 18:20:08.621990   27284 main.go:141] libmachine: (ha-293078)       <model type='virtio'/>
	I0401 18:20:08.622005   27284 main.go:141] libmachine: (ha-293078)     </interface>
	I0401 18:20:08.622021   27284 main.go:141] libmachine: (ha-293078)     <serial type='pty'>
	I0401 18:20:08.622035   27284 main.go:141] libmachine: (ha-293078)       <target port='0'/>
	I0401 18:20:08.622049   27284 main.go:141] libmachine: (ha-293078)     </serial>
	I0401 18:20:08.622063   27284 main.go:141] libmachine: (ha-293078)     <console type='pty'>
	I0401 18:20:08.622074   27284 main.go:141] libmachine: (ha-293078)       <target type='serial' port='0'/>
	I0401 18:20:08.622094   27284 main.go:141] libmachine: (ha-293078)     </console>
	I0401 18:20:08.622115   27284 main.go:141] libmachine: (ha-293078)     <rng model='virtio'>
	I0401 18:20:08.622134   27284 main.go:141] libmachine: (ha-293078)       <backend model='random'>/dev/random</backend>
	I0401 18:20:08.622155   27284 main.go:141] libmachine: (ha-293078)     </rng>
	I0401 18:20:08.622173   27284 main.go:141] libmachine: (ha-293078)     
	I0401 18:20:08.622185   27284 main.go:141] libmachine: (ha-293078)     
	I0401 18:20:08.622191   27284 main.go:141] libmachine: (ha-293078)   </devices>
	I0401 18:20:08.622199   27284 main.go:141] libmachine: (ha-293078) </domain>
	I0401 18:20:08.622208   27284 main.go:141] libmachine: (ha-293078) 
	I0401 18:20:08.626471   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:8a:2d:39 in network default
	I0401 18:20:08.627059   27284 main.go:141] libmachine: (ha-293078) Ensuring networks are active...
	I0401 18:20:08.627100   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:08.627670   27284 main.go:141] libmachine: (ha-293078) Ensuring network default is active
	I0401 18:20:08.628041   27284 main.go:141] libmachine: (ha-293078) Ensuring network mk-ha-293078 is active
	I0401 18:20:08.628575   27284 main.go:141] libmachine: (ha-293078) Getting domain xml...
	I0401 18:20:08.629240   27284 main.go:141] libmachine: (ha-293078) Creating domain...
	I0401 18:20:09.790198   27284 main.go:141] libmachine: (ha-293078) Waiting to get IP...
	I0401 18:20:09.791005   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:09.791393   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:09.791415   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:09.791378   27307 retry.go:31] will retry after 299.095049ms: waiting for machine to come up
	I0401 18:20:10.091780   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:10.092218   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:10.092244   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:10.092172   27307 retry.go:31] will retry after 341.823452ms: waiting for machine to come up
	I0401 18:20:10.435740   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:10.436221   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:10.436263   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:10.436201   27307 retry.go:31] will retry after 412.275855ms: waiting for machine to come up
	I0401 18:20:10.849632   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:10.850052   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:10.850075   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:10.850019   27307 retry.go:31] will retry after 504.08215ms: waiting for machine to come up
	I0401 18:20:11.356728   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:11.357488   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:11.357594   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:11.357446   27307 retry.go:31] will retry after 521.12253ms: waiting for machine to come up
	I0401 18:20:11.880118   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:11.880587   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:11.880607   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:11.880563   27307 retry.go:31] will retry after 840.04722ms: waiting for machine to come up
	I0401 18:20:12.722613   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:12.722961   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:12.723019   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:12.722910   27307 retry.go:31] will retry after 1.165268416s: waiting for machine to come up
	I0401 18:20:13.889819   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:13.890267   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:13.890296   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:13.890213   27307 retry.go:31] will retry after 955.488594ms: waiting for machine to come up
	I0401 18:20:14.847839   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:14.848189   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:14.848212   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:14.848142   27307 retry.go:31] will retry after 1.835094911s: waiting for machine to come up
	I0401 18:20:16.686235   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:16.686609   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:16.686636   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:16.686563   27307 retry.go:31] will retry after 1.705606324s: waiting for machine to come up
	I0401 18:20:18.393239   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:18.393664   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:18.393692   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:18.393591   27307 retry.go:31] will retry after 2.302351777s: waiting for machine to come up
	I0401 18:20:20.697519   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:20.698043   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:20.698070   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:20.697994   27307 retry.go:31] will retry after 2.904641277s: waiting for machine to come up
	I0401 18:20:23.604466   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:23.604767   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:23.604798   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:23.604722   27307 retry.go:31] will retry after 2.947312694s: waiting for machine to come up
	I0401 18:20:26.554688   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:26.555152   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:26.555179   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:26.555119   27307 retry.go:31] will retry after 3.439592829s: waiting for machine to come up
	I0401 18:20:29.995900   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:29.996358   27284 main.go:141] libmachine: (ha-293078) Found IP for machine: 192.168.39.74
	I0401 18:20:29.996376   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has current primary IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:29.996384   27284 main.go:141] libmachine: (ha-293078) Reserving static IP address...
	I0401 18:20:29.996738   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find host DHCP lease matching {name: "ha-293078", mac: "52:54:00:62:80:20", ip: "192.168.39.74"} in network mk-ha-293078
	I0401 18:20:30.067441   27284 main.go:141] libmachine: (ha-293078) DBG | Getting to WaitForSSH function...
	I0401 18:20:30.067471   27284 main.go:141] libmachine: (ha-293078) Reserved static IP address: 192.168.39.74
	I0401 18:20:30.067483   27284 main.go:141] libmachine: (ha-293078) Waiting for SSH to be available...
	I0401 18:20:30.070189   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.070712   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:minikube Clientid:01:52:54:00:62:80:20}
	I0401 18:20:30.070750   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.070867   27284 main.go:141] libmachine: (ha-293078) DBG | Using SSH client type: external
	I0401 18:20:30.070886   27284 main.go:141] libmachine: (ha-293078) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa (-rw-------)
	I0401 18:20:30.070918   27284 main.go:141] libmachine: (ha-293078) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.74 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 18:20:30.070927   27284 main.go:141] libmachine: (ha-293078) DBG | About to run SSH command:
	I0401 18:20:30.070951   27284 main.go:141] libmachine: (ha-293078) DBG | exit 0
	I0401 18:20:30.197899   27284 main.go:141] libmachine: (ha-293078) DBG | SSH cmd err, output: <nil>: 
	I0401 18:20:30.198132   27284 main.go:141] libmachine: (ha-293078) KVM machine creation complete!
	I0401 18:20:30.198461   27284 main.go:141] libmachine: (ha-293078) Calling .GetConfigRaw
	I0401 18:20:30.199022   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:20:30.199228   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:20:30.199391   27284 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0401 18:20:30.199407   27284 main.go:141] libmachine: (ha-293078) Calling .GetState
	I0401 18:20:30.200977   27284 main.go:141] libmachine: Detecting operating system of created instance...
	I0401 18:20:30.200991   27284 main.go:141] libmachine: Waiting for SSH to be available...
	I0401 18:20:30.200996   27284 main.go:141] libmachine: Getting to WaitForSSH function...
	I0401 18:20:30.201002   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:20:30.203360   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.203700   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:30.203721   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.203841   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:20:30.204042   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:30.204204   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:30.204362   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:20:30.204568   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:20:30.204761   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0401 18:20:30.204781   27284 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0401 18:20:30.309268   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 18:20:30.309293   27284 main.go:141] libmachine: Detecting the provisioner...
	I0401 18:20:30.309300   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:20:30.312064   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.312427   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:30.312450   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.312601   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:20:30.312781   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:30.312920   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:30.313078   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:20:30.313242   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:20:30.313393   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0401 18:20:30.313402   27284 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0401 18:20:30.418580   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0401 18:20:30.418654   27284 main.go:141] libmachine: found compatible host: buildroot
	I0401 18:20:30.418663   27284 main.go:141] libmachine: Provisioning with buildroot...
	I0401 18:20:30.418680   27284 main.go:141] libmachine: (ha-293078) Calling .GetMachineName
	I0401 18:20:30.418923   27284 buildroot.go:166] provisioning hostname "ha-293078"
	I0401 18:20:30.418946   27284 main.go:141] libmachine: (ha-293078) Calling .GetMachineName
	I0401 18:20:30.419138   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:20:30.421591   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.421893   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:30.421919   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.422014   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:20:30.422199   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:30.422366   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:30.422504   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:20:30.422662   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:20:30.422818   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0401 18:20:30.422829   27284 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-293078 && echo "ha-293078" | sudo tee /etc/hostname
	I0401 18:20:30.545449   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-293078
	
	I0401 18:20:30.545475   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:20:30.549246   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.549678   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:30.549728   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.549960   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:20:30.550142   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:30.550292   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:30.550466   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:20:30.550639   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:20:30.550873   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0401 18:20:30.550896   27284 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-293078' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-293078/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-293078' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 18:20:30.672682   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 18:20:30.672709   27284 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 18:20:30.672747   27284 buildroot.go:174] setting up certificates
	I0401 18:20:30.672763   27284 provision.go:84] configureAuth start
	I0401 18:20:30.672774   27284 main.go:141] libmachine: (ha-293078) Calling .GetMachineName
	I0401 18:20:30.673066   27284 main.go:141] libmachine: (ha-293078) Calling .GetIP
	I0401 18:20:30.675645   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.675993   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:30.676016   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.676132   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:20:30.678264   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.678585   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:30.678612   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.678751   27284 provision.go:143] copyHostCerts
	I0401 18:20:30.678781   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 18:20:30.678811   27284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 18:20:30.678823   27284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 18:20:30.678897   27284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 18:20:30.679000   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 18:20:30.679024   27284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 18:20:30.679030   27284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 18:20:30.679066   27284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 18:20:30.679136   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 18:20:30.679159   27284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 18:20:30.679169   27284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 18:20:30.679205   27284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 18:20:30.679282   27284 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.ha-293078 san=[127.0.0.1 192.168.39.74 ha-293078 localhost minikube]
	I0401 18:20:30.820542   27284 provision.go:177] copyRemoteCerts
	I0401 18:20:30.820604   27284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 18:20:30.820625   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:20:30.823170   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.823407   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:30.823440   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.823579   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:20:30.823747   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:30.823897   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:20:30.824064   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:20:30.908150   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0401 18:20:30.908240   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 18:20:30.935046   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0401 18:20:30.935124   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0401 18:20:30.961458   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0401 18:20:30.961511   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 18:20:30.987687   27284 provision.go:87] duration metric: took 314.911851ms to configureAuth
	I0401 18:20:30.987710   27284 buildroot.go:189] setting minikube options for container-runtime
	I0401 18:20:30.987847   27284 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:20:30.987935   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:20:30.990671   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.990936   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:30.990965   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.991191   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:20:30.991368   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:30.991515   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:30.991639   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:20:30.991778   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:20:30.991936   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0401 18:20:30.991950   27284 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 18:20:31.264914   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 18:20:31.264936   27284 main.go:141] libmachine: Checking connection to Docker...
	I0401 18:20:31.264943   27284 main.go:141] libmachine: (ha-293078) Calling .GetURL
	I0401 18:20:31.266161   27284 main.go:141] libmachine: (ha-293078) DBG | Using libvirt version 6000000
	I0401 18:20:31.268364   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.268834   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:31.268860   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.269027   27284 main.go:141] libmachine: Docker is up and running!
	I0401 18:20:31.269042   27284 main.go:141] libmachine: Reticulating splines...
	I0401 18:20:31.269059   27284 client.go:171] duration metric: took 23.013742748s to LocalClient.Create
	I0401 18:20:31.269081   27284 start.go:167] duration metric: took 23.013815219s to libmachine.API.Create "ha-293078"
	I0401 18:20:31.269090   27284 start.go:293] postStartSetup for "ha-293078" (driver="kvm2")
	I0401 18:20:31.269099   27284 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 18:20:31.269114   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:20:31.269330   27284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 18:20:31.269351   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:20:31.271284   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.271575   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:31.271603   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.271717   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:20:31.271906   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:31.272071   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:20:31.272191   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:20:31.356469   27284 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 18:20:31.361087   27284 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 18:20:31.361105   27284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 18:20:31.361161   27284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 18:20:31.361238   27284 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 18:20:31.361247   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> /etc/ssl/certs/177512.pem
	I0401 18:20:31.361351   27284 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 18:20:31.370978   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 18:20:31.396855   27284 start.go:296] duration metric: took 127.754309ms for postStartSetup
	I0401 18:20:31.396900   27284 main.go:141] libmachine: (ha-293078) Calling .GetConfigRaw
	I0401 18:20:31.397475   27284 main.go:141] libmachine: (ha-293078) Calling .GetIP
	I0401 18:20:31.400335   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.400666   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:31.400687   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.400904   27284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/config.json ...
	I0401 18:20:31.401088   27284 start.go:128] duration metric: took 23.162760686s to createHost
	I0401 18:20:31.401111   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:20:31.403095   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.403333   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:31.403356   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.403624   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:20:31.403853   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:31.404031   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:31.404163   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:20:31.404354   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:20:31.404507   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0401 18:20:31.404525   27284 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 18:20:31.510860   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711995631.480746331
	
	I0401 18:20:31.510887   27284 fix.go:216] guest clock: 1711995631.480746331
	I0401 18:20:31.510898   27284 fix.go:229] Guest: 2024-04-01 18:20:31.480746331 +0000 UTC Remote: 2024-04-01 18:20:31.401099618 +0000 UTC m=+23.276094302 (delta=79.646713ms)
	I0401 18:20:31.510923   27284 fix.go:200] guest clock delta is within tolerance: 79.646713ms
	I0401 18:20:31.510929   27284 start.go:83] releasing machines lock for "ha-293078", held for 23.2726754s
	I0401 18:20:31.510965   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:20:31.511213   27284 main.go:141] libmachine: (ha-293078) Calling .GetIP
	I0401 18:20:31.513686   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.514016   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:31.514045   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.514193   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:20:31.514662   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:20:31.514842   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:20:31.514931   27284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 18:20:31.514968   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:20:31.515029   27284 ssh_runner.go:195] Run: cat /version.json
	I0401 18:20:31.515058   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:20:31.517418   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.517747   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.517809   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:31.517843   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.518162   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:20:31.518359   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:31.518367   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:31.518388   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.518441   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:20:31.518539   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:20:31.518563   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:31.518659   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:20:31.519038   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:20:31.519191   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:20:31.621743   27284 ssh_runner.go:195] Run: systemctl --version
	I0401 18:20:31.628066   27284 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 18:20:31.794252   27284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 18:20:31.801897   27284 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 18:20:31.801953   27284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 18:20:31.819968   27284 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 18:20:31.819996   27284 start.go:494] detecting cgroup driver to use...
	I0401 18:20:31.820050   27284 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 18:20:31.837349   27284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 18:20:31.851827   27284 docker.go:217] disabling cri-docker service (if available) ...
	I0401 18:20:31.851887   27284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 18:20:31.866122   27284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 18:20:31.880472   27284 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 18:20:32.000719   27284 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 18:20:32.173281   27284 docker.go:233] disabling docker service ...
	I0401 18:20:32.173362   27284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 18:20:32.188338   27284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 18:20:32.201976   27284 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 18:20:32.341030   27284 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 18:20:32.481681   27284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 18:20:32.497291   27284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 18:20:32.518625   27284 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 18:20:32.518683   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:20:32.529562   27284 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 18:20:32.529627   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:20:32.541685   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:20:32.553418   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:20:32.565639   27284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 18:20:32.578097   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:20:32.590452   27284 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:20:32.609793   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:20:32.622253   27284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 18:20:32.633090   27284 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 18:20:32.633147   27284 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 18:20:32.649008   27284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 18:20:32.661173   27284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:20:32.770760   27284 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 18:20:32.915987   27284 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 18:20:32.916053   27284 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 18:20:32.921542   27284 start.go:562] Will wait 60s for crictl version
	I0401 18:20:32.921601   27284 ssh_runner.go:195] Run: which crictl
	I0401 18:20:32.925615   27284 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 18:20:32.965401   27284 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 18:20:32.965472   27284 ssh_runner.go:195] Run: crio --version
	I0401 18:20:32.997166   27284 ssh_runner.go:195] Run: crio --version
	I0401 18:20:33.030321   27284 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 18:20:33.031852   27284 main.go:141] libmachine: (ha-293078) Calling .GetIP
	I0401 18:20:33.034539   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:33.034939   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:33.034968   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:33.035159   27284 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 18:20:33.039978   27284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 18:20:33.054194   27284 kubeadm.go:877] updating cluster {Name:ha-293078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 18:20:33.054315   27284 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 18:20:33.054393   27284 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 18:20:33.089715   27284 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0401 18:20:33.089786   27284 ssh_runner.go:195] Run: which lz4
	I0401 18:20:33.094230   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0401 18:20:33.094340   27284 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 18:20:33.099107   27284 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 18:20:33.099135   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0401 18:20:34.709169   27284 crio.go:462] duration metric: took 1.614860102s to copy over tarball
	I0401 18:20:34.709248   27284 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 18:20:37.121760   27284 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.412487353s)
	I0401 18:20:37.121784   27284 crio.go:469] duration metric: took 2.412587851s to extract the tarball
	I0401 18:20:37.121793   27284 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 18:20:37.160490   27284 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 18:20:37.205846   27284 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 18:20:37.205872   27284 cache_images.go:84] Images are preloaded, skipping loading
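
	The preload step above follows a check-copy-extract pattern: if crictl reports that the expected images are missing, the cached lz4 tarball is copied onto the node and unpacked into /var. A minimal shell sketch of that flow, reusing the image tag, tarball name and extract command from the log; the scp user and host are illustrative only:

	  # Check whether the runtime already has the expected control-plane image
	  if ! sudo crictl images --output json | grep -q "registry.k8s.io/kube-apiserver:v1.29.3"; then
	    # Copy the cached preload tarball onto the node (user/host are placeholders)
	    scp preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 docker@192.168.39.74:/preloaded.tar.lz4
	    # Unpack into /var, preserving file capabilities, then clean up
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    sudo rm /preloaded.tar.lz4
	  fi
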
	I0401 18:20:37.205880   27284 kubeadm.go:928] updating node { 192.168.39.74 8443 v1.29.3 crio true true} ...
	I0401 18:20:37.206012   27284 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-293078 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.74
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 18:20:37.206098   27284 ssh_runner.go:195] Run: crio config
	I0401 18:20:37.260913   27284 cni.go:84] Creating CNI manager for ""
	I0401 18:20:37.260931   27284 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0401 18:20:37.260940   27284 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 18:20:37.260958   27284 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.74 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-293078 NodeName:ha-293078 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.74"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.74 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 18:20:37.261069   27284 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.74
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-293078"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.74
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.74"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
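
	The generated configuration above is handed to kubeadm later in this log. It can also be exercised by hand without changing the node; a small sketch, assuming the file has been written to /var/tmp/minikube/kubeadm.yaml as shown further down:

	  # Validate the generated kubeadm configuration without applying anything
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
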
	
	I0401 18:20:37.261092   27284 kube-vip.go:111] generating kube-vip config ...
	I0401 18:20:37.261129   27284 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0401 18:20:37.280880   27284 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0401 18:20:37.280977   27284 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
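
	The kube-vip manifest above advertises the API-server VIP 192.168.39.254 over ARP on eth0 and binds it on whichever control-plane node wins the plndr-cp-lock leader election. A rough way to see where the VIP currently sits, assuming SSH access to a control-plane node and that this kube-vip version stores its lock as a coordination.k8s.io Lease (older releases may use a ConfigMap):

	  # The VIP appears as an extra address on eth0 on the elected leader
	  ip -4 addr show dev eth0 | grep 192.168.39.254
	  # Inspect the leader-election lock named in the manifest (vip_leasename)
	  kubectl -n kube-system get lease plndr-cp-lock -o yaml
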
	I0401 18:20:37.281039   27284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 18:20:37.291683   27284 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 18:20:37.291743   27284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0401 18:20:37.302077   27284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0401 18:20:37.320474   27284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 18:20:37.338780   27284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0401 18:20:37.356836   27284 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0401 18:20:37.375090   27284 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0401 18:20:37.379879   27284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 18:20:37.392854   27284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:20:37.533762   27284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 18:20:37.553364   27284 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078 for IP: 192.168.39.74
	I0401 18:20:37.553396   27284 certs.go:194] generating shared ca certs ...
	I0401 18:20:37.553415   27284 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:20:37.553589   27284 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 18:20:37.553667   27284 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 18:20:37.553683   27284 certs.go:256] generating profile certs ...
	I0401 18:20:37.553748   27284 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.key
	I0401 18:20:37.553766   27284 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.crt with IP's: []
	I0401 18:20:37.623123   27284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.crt ...
	I0401 18:20:37.623150   27284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.crt: {Name:mka69a75b279d67cb0f822de057a95d603fed36f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:20:37.623340   27284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.key ...
	I0401 18:20:37.623355   27284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.key: {Name:mkfed0a0336184144d67564578cf1b0894b7f875 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:20:37.623454   27284 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.23c0cf34
	I0401 18:20:37.623478   27284 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.23c0cf34 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.74 192.168.39.254]
	I0401 18:20:37.776964   27284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.23c0cf34 ...
	I0401 18:20:37.776991   27284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.23c0cf34: {Name:mkae6087a8d97af53f8a9b350489b78cf9f08d14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:20:37.777174   27284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.23c0cf34 ...
	I0401 18:20:37.777189   27284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.23c0cf34: {Name:mk9782ed8659a69b7d748f796cc45cebb965f23e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:20:37.777297   27284 certs.go:381] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.23c0cf34 -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt
	I0401 18:20:37.777387   27284 certs.go:385] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.23c0cf34 -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key
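
	The apiserver certificate generated above has to carry every address clients may use, including the HA VIP 192.168.39.254 and the in-cluster service IP 10.96.0.1 listed in the IP set two lines earlier. A quick sanity check of the SANs, run against the profile copy of the certificate written by this step:

	  # List the Subject Alternative Names baked into the apiserver certificate
	  openssl x509 -in /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"
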
	I0401 18:20:37.777446   27284 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key
	I0401 18:20:37.777462   27284 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.crt with IP's: []
	I0401 18:20:37.912136   27284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.crt ...
	I0401 18:20:37.912165   27284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.crt: {Name:mkdf191f1c09913103ca9c0cb067c7122be9de80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:20:37.912345   27284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key ...
	I0401 18:20:37.912359   27284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key: {Name:mk1a7ee05d4a02c67f6cc33ad844664c3e24362a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:20:37.912453   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0401 18:20:37.912472   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0401 18:20:37.912483   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0401 18:20:37.912496   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0401 18:20:37.912508   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0401 18:20:37.912519   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0401 18:20:37.912530   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0401 18:20:37.912542   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0401 18:20:37.912586   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 18:20:37.912617   27284 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 18:20:37.912626   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 18:20:37.912647   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 18:20:37.912668   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 18:20:37.912689   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 18:20:37.912729   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 18:20:37.912754   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> /usr/share/ca-certificates/177512.pem
	I0401 18:20:37.912768   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:20:37.912779   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem -> /usr/share/ca-certificates/17751.pem
	I0401 18:20:37.913265   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 18:20:37.952672   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 18:20:37.982092   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 18:20:38.009743   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 18:20:38.037543   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 18:20:38.064945   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 18:20:38.095389   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 18:20:38.125756   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 18:20:38.156175   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 18:20:38.184996   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 18:20:38.241033   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 18:20:38.274092   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 18:20:38.296076   27284 ssh_runner.go:195] Run: openssl version
	I0401 18:20:38.303065   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 18:20:38.315888   27284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 18:20:38.321232   27284 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 18:20:38.321281   27284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 18:20:38.327854   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 18:20:38.345364   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 18:20:38.368179   27284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:20:38.375060   27284 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:20:38.375124   27284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:20:38.384472   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 18:20:38.401237   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 18:20:38.419543   27284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 18:20:38.426032   27284 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 18:20:38.426111   27284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 18:20:38.432837   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
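
	The symlink names used above (3ec20f2e.0, b5213941.0, 51391683.0) are not arbitrary: OpenSSL looks certificates up in /etc/ssl/certs by subject-name hash, so each PEM gets a <hash>.0 link. A short sketch of the same convention, using the minikubeCA certificate from this run:

	  # Print the subject hash OpenSSL expects for this certificate
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  # Create the lookup symlink the verifier will follow
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
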
	I0401 18:20:38.445342   27284 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 18:20:38.450196   27284 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 18:20:38.450243   27284 kubeadm.go:391] StartCluster: {Name:ha-293078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:20:38.450327   27284 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 18:20:38.450397   27284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 18:20:38.497189   27284 cri.go:89] found id: ""
	I0401 18:20:38.497259   27284 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 18:20:38.508245   27284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 18:20:38.518589   27284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 18:20:38.528901   27284 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 18:20:38.528946   27284 kubeadm.go:156] found existing configuration files:
	
	I0401 18:20:38.528991   27284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 18:20:38.538806   27284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 18:20:38.538863   27284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 18:20:38.549472   27284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 18:20:38.559440   27284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 18:20:38.559498   27284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 18:20:38.569753   27284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 18:20:38.579455   27284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 18:20:38.579501   27284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 18:20:38.589684   27284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 18:20:38.599478   27284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 18:20:38.599535   27284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 18:20:38.609636   27284 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 18:20:38.852092   27284 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 18:20:49.908270   27284 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 18:20:49.908351   27284 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 18:20:49.908439   27284 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 18:20:49.908556   27284 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 18:20:49.908694   27284 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 18:20:49.908792   27284 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 18:20:49.910380   27284 out.go:204]   - Generating certificates and keys ...
	I0401 18:20:49.910459   27284 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 18:20:49.910557   27284 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 18:20:49.910675   27284 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 18:20:49.910763   27284 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0401 18:20:49.910865   27284 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0401 18:20:49.910954   27284 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0401 18:20:49.911031   27284 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0401 18:20:49.911168   27284 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-293078 localhost] and IPs [192.168.39.74 127.0.0.1 ::1]
	I0401 18:20:49.911262   27284 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0401 18:20:49.911422   27284 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-293078 localhost] and IPs [192.168.39.74 127.0.0.1 ::1]
	I0401 18:20:49.911521   27284 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 18:20:49.911606   27284 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 18:20:49.911688   27284 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0401 18:20:49.911760   27284 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 18:20:49.911834   27284 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 18:20:49.911917   27284 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 18:20:49.911992   27284 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 18:20:49.912090   27284 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 18:20:49.912168   27284 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 18:20:49.912279   27284 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 18:20:49.912370   27284 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 18:20:49.913926   27284 out.go:204]   - Booting up control plane ...
	I0401 18:20:49.914007   27284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 18:20:49.914071   27284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 18:20:49.914165   27284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 18:20:49.914315   27284 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 18:20:49.914433   27284 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 18:20:49.914490   27284 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 18:20:49.914681   27284 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 18:20:49.914801   27284 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.585874 seconds
	I0401 18:20:49.914935   27284 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 18:20:49.915098   27284 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 18:20:49.915181   27284 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 18:20:49.915332   27284 kubeadm.go:309] [mark-control-plane] Marking the node ha-293078 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 18:20:49.915394   27284 kubeadm.go:309] [bootstrap-token] Using token: 4btpo1.kjs6l4hetnoxsot3
	I0401 18:20:49.916801   27284 out.go:204]   - Configuring RBAC rules ...
	I0401 18:20:49.916907   27284 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 18:20:49.916976   27284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 18:20:49.917160   27284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 18:20:49.917329   27284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 18:20:49.917466   27284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 18:20:49.917550   27284 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 18:20:49.917663   27284 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 18:20:49.917710   27284 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 18:20:49.917763   27284 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 18:20:49.917770   27284 kubeadm.go:309] 
	I0401 18:20:49.917823   27284 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 18:20:49.917831   27284 kubeadm.go:309] 
	I0401 18:20:49.917892   27284 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 18:20:49.917898   27284 kubeadm.go:309] 
	I0401 18:20:49.917950   27284 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 18:20:49.918051   27284 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 18:20:49.918136   27284 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 18:20:49.918154   27284 kubeadm.go:309] 
	I0401 18:20:49.918243   27284 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 18:20:49.918256   27284 kubeadm.go:309] 
	I0401 18:20:49.918315   27284 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 18:20:49.918327   27284 kubeadm.go:309] 
	I0401 18:20:49.918397   27284 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 18:20:49.918486   27284 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 18:20:49.918577   27284 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 18:20:49.918592   27284 kubeadm.go:309] 
	I0401 18:20:49.918696   27284 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 18:20:49.918790   27284 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 18:20:49.918799   27284 kubeadm.go:309] 
	I0401 18:20:49.918876   27284 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4btpo1.kjs6l4hetnoxsot3 \
	I0401 18:20:49.918965   27284 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 \
	I0401 18:20:49.918990   27284 kubeadm.go:309] 	--control-plane 
	I0401 18:20:49.918996   27284 kubeadm.go:309] 
	I0401 18:20:49.919071   27284 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 18:20:49.919078   27284 kubeadm.go:309] 
	I0401 18:20:49.919165   27284 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4btpo1.kjs6l4hetnoxsot3 \
	I0401 18:20:49.919294   27284 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 
	I0401 18:20:49.919310   27284 cni.go:84] Creating CNI manager for ""
	I0401 18:20:49.919318   27284 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0401 18:20:49.920989   27284 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 18:20:49.922390   27284 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 18:20:49.931672   27284 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0401 18:20:49.931686   27284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0401 18:20:50.013816   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
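
	For a multi-node profile minikube applies kindnet as the CNI, which the manifest above installs as a DaemonSet in kube-system. A way to confirm it rolled out; the DaemonSet name is assumed from minikube's kindnet manifest and the kubectl context from the profile name in this run:

	  # kindnet should report one ready pod per node once the CNI is up
	  kubectl --context ha-293078 -n kube-system rollout status daemonset kindnet
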
	I0401 18:20:50.385478   27284 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 18:20:50.385572   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:50.385623   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-293078 minikube.k8s.io/updated_at=2024_04_01T18_20_50_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=ha-293078 minikube.k8s.io/primary=true
	I0401 18:20:50.538373   27284 ops.go:34] apiserver oom_adj: -16
	I0401 18:20:50.538723   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:51.038830   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:51.539815   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:52.038888   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:52.539727   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:53.038949   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:53.539222   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:54.039139   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:54.539731   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:55.039569   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:55.539189   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:56.039819   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:56.538840   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:57.039523   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:57.538954   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:58.038853   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:58.539195   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:59.039784   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:59.539064   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:21:00.038883   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:21:00.539202   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:21:01.038799   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:21:01.538997   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:21:02.039061   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:21:02.166524   27284 kubeadm.go:1107] duration metric: took 11.781001236s to wait for elevateKubeSystemPrivileges
	W0401 18:21:02.166562   27284 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 18:21:02.166571   27284 kubeadm.go:393] duration metric: took 23.716331763s to StartCluster
	I0401 18:21:02.166591   27284 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:21:02.166672   27284 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 18:21:02.167349   27284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:21:02.167565   27284 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 18:21:02.167587   27284 start.go:240] waiting for startup goroutines ...
	I0401 18:21:02.167588   27284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 18:21:02.167603   27284 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 18:21:02.167667   27284 addons.go:69] Setting storage-provisioner=true in profile "ha-293078"
	I0401 18:21:02.167698   27284 addons.go:234] Setting addon storage-provisioner=true in "ha-293078"
	I0401 18:21:02.167700   27284 addons.go:69] Setting default-storageclass=true in profile "ha-293078"
	I0401 18:21:02.167730   27284 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:21:02.167751   27284 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-293078"
	I0401 18:21:02.167840   27284 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:21:02.168166   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:21:02.168174   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:21:02.168212   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:21:02.168356   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:21:02.183507   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40007
	I0401 18:21:02.183506   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I0401 18:21:02.184007   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:21:02.184090   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:21:02.184543   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:21:02.184574   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:21:02.184645   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:21:02.184666   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:21:02.184968   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:21:02.184998   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:21:02.185163   27284 main.go:141] libmachine: (ha-293078) Calling .GetState
	I0401 18:21:02.185508   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:21:02.185550   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:21:02.187166   27284 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 18:21:02.187496   27284 kapi.go:59] client config for ha-293078: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.crt", KeyFile:"/home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.key", CAFile:"/home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5ca00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0401 18:21:02.187933   27284 cert_rotation.go:137] Starting client certificate rotation controller
	I0401 18:21:02.188202   27284 addons.go:234] Setting addon default-storageclass=true in "ha-293078"
	I0401 18:21:02.188243   27284 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:21:02.188599   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:21:02.188636   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:21:02.201087   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43639
	I0401 18:21:02.201616   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:21:02.202193   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:21:02.202221   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:21:02.202587   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:21:02.202778   27284 main.go:141] libmachine: (ha-293078) Calling .GetState
	I0401 18:21:02.204106   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33179
	I0401 18:21:02.204511   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:21:02.204692   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:21:02.206642   27284 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 18:21:02.205112   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:21:02.208416   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:21:02.208530   27284 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 18:21:02.208547   27284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 18:21:02.208565   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:21:02.208802   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:21:02.209666   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:21:02.209718   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:21:02.211851   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:21:02.212291   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:21:02.212324   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:21:02.212442   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:21:02.212630   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:21:02.212814   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:21:02.212939   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:21:02.224552   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40335
	I0401 18:21:02.225021   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:21:02.225566   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:21:02.225590   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:21:02.225965   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:21:02.226136   27284 main.go:141] libmachine: (ha-293078) Calling .GetState
	I0401 18:21:02.227678   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:21:02.227968   27284 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 18:21:02.227985   27284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 18:21:02.228001   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:21:02.231262   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:21:02.231707   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:21:02.231740   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:21:02.231876   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:21:02.232054   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:21:02.232202   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:21:02.232331   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:21:02.333022   27284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 18:21:02.438375   27284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 18:21:02.526416   27284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 18:21:02.900392   27284 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
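
	The sed pipeline a few lines above splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the hypervisor gateway (192.168.39.1) from inside the cluster. To confirm the record landed, assuming the kubectl context carries the profile name from this run:

	  # Show the live Corefile and look for the injected hosts stanza
	  kubectl --context ha-293078 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
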
	I0401 18:21:03.336256   27284 main.go:141] libmachine: Making call to close driver server
	I0401 18:21:03.336275   27284 main.go:141] libmachine: (ha-293078) Calling .Close
	I0401 18:21:03.336334   27284 main.go:141] libmachine: Making call to close driver server
	I0401 18:21:03.336349   27284 main.go:141] libmachine: (ha-293078) Calling .Close
	I0401 18:21:03.336595   27284 main.go:141] libmachine: (ha-293078) DBG | Closing plugin on server side
	I0401 18:21:03.336622   27284 main.go:141] libmachine: (ha-293078) DBG | Closing plugin on server side
	I0401 18:21:03.336642   27284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:21:03.336660   27284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:21:03.336670   27284 main.go:141] libmachine: Making call to close driver server
	I0401 18:21:03.336674   27284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:21:03.336677   27284 main.go:141] libmachine: (ha-293078) Calling .Close
	I0401 18:21:03.336690   27284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:21:03.336699   27284 main.go:141] libmachine: Making call to close driver server
	I0401 18:21:03.336706   27284 main.go:141] libmachine: (ha-293078) Calling .Close
	I0401 18:21:03.336925   27284 main.go:141] libmachine: (ha-293078) DBG | Closing plugin on server side
	I0401 18:21:03.336924   27284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:21:03.336945   27284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:21:03.336950   27284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:21:03.336954   27284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:21:03.336998   27284 main.go:141] libmachine: (ha-293078) DBG | Closing plugin on server side
	I0401 18:21:03.337071   27284 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0401 18:21:03.337078   27284 round_trippers.go:469] Request Headers:
	I0401 18:21:03.337088   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:21:03.337095   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:21:03.352729   27284 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0401 18:21:03.353445   27284 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0401 18:21:03.353530   27284 round_trippers.go:469] Request Headers:
	I0401 18:21:03.353551   27284 round_trippers.go:473]     Content-Type: application/json
	I0401 18:21:03.353565   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:21:03.353569   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:21:03.357118   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:21:03.357247   27284 main.go:141] libmachine: Making call to close driver server
	I0401 18:21:03.357258   27284 main.go:141] libmachine: (ha-293078) Calling .Close
	I0401 18:21:03.357509   27284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:21:03.357528   27284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:21:03.359532   27284 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 18:21:03.361453   27284 addons.go:505] duration metric: took 1.193852719s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 18:21:03.361494   27284 start.go:245] waiting for cluster config update ...
	I0401 18:21:03.361508   27284 start.go:254] writing updated cluster config ...
	I0401 18:21:03.363430   27284 out.go:177] 
	I0401 18:21:03.367209   27284 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:21:03.367283   27284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/config.json ...
	I0401 18:21:03.370001   27284 out.go:177] * Starting "ha-293078-m02" control-plane node in "ha-293078" cluster
	I0401 18:21:03.370993   27284 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 18:21:03.371025   27284 cache.go:56] Caching tarball of preloaded images
	I0401 18:21:03.371147   27284 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 18:21:03.371161   27284 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0401 18:21:03.371222   27284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/config.json ...
	I0401 18:21:03.371398   27284 start.go:360] acquireMachinesLock for ha-293078-m02: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 18:21:03.371440   27284 start.go:364] duration metric: took 24.034µs to acquireMachinesLock for "ha-293078-m02"
	I0401 18:21:03.371458   27284 start.go:93] Provisioning new machine with config: &{Name:ha-293078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 18:21:03.371524   27284 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0401 18:21:03.373092   27284 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0401 18:21:03.373175   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:21:03.373208   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:21:03.387531   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33333
	I0401 18:21:03.387968   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:21:03.388474   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:21:03.388500   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:21:03.388861   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:21:03.389050   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetMachineName
	I0401 18:21:03.389231   27284 main.go:141] libmachine: (ha-293078-m02) Calling .DriverName
	I0401 18:21:03.389396   27284 start.go:159] libmachine.API.Create for "ha-293078" (driver="kvm2")
	I0401 18:21:03.389425   27284 client.go:168] LocalClient.Create starting
	I0401 18:21:03.389454   27284 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem
	I0401 18:21:03.389479   27284 main.go:141] libmachine: Decoding PEM data...
	I0401 18:21:03.389493   27284 main.go:141] libmachine: Parsing certificate...
	I0401 18:21:03.389536   27284 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem
	I0401 18:21:03.389553   27284 main.go:141] libmachine: Decoding PEM data...
	I0401 18:21:03.389565   27284 main.go:141] libmachine: Parsing certificate...
	I0401 18:21:03.389582   27284 main.go:141] libmachine: Running pre-create checks...
	I0401 18:21:03.389594   27284 main.go:141] libmachine: (ha-293078-m02) Calling .PreCreateCheck
	I0401 18:21:03.389809   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetConfigRaw
	I0401 18:21:03.390215   27284 main.go:141] libmachine: Creating machine...
	I0401 18:21:03.390230   27284 main.go:141] libmachine: (ha-293078-m02) Calling .Create
	I0401 18:21:03.390373   27284 main.go:141] libmachine: (ha-293078-m02) Creating KVM machine...
	I0401 18:21:03.391699   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found existing default KVM network
	I0401 18:21:03.391854   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found existing private KVM network mk-ha-293078
	I0401 18:21:03.392014   27284 main.go:141] libmachine: (ha-293078-m02) Setting up store path in /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02 ...
	I0401 18:21:03.392040   27284 main.go:141] libmachine: (ha-293078-m02) Building disk image from file:///home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso
	I0401 18:21:03.392083   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:03.391999   27619 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:21:03.392209   27284 main.go:141] libmachine: (ha-293078-m02) Downloading /home/jenkins/minikube-integration/18233-10493/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0401 18:21:03.619622   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:03.619513   27619 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/id_rsa...
	I0401 18:21:03.702083   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:03.701957   27619 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/ha-293078-m02.rawdisk...
	I0401 18:21:03.702115   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Writing magic tar header
	I0401 18:21:03.702130   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Writing SSH key tar header
	I0401 18:21:03.702148   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:03.702084   27619 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02 ...
	I0401 18:21:03.702242   27284 main.go:141] libmachine: (ha-293078-m02) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02 (perms=drwx------)
	I0401 18:21:03.702271   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02
	I0401 18:21:03.702283   27284 main.go:141] libmachine: (ha-293078-m02) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube/machines (perms=drwxr-xr-x)
	I0401 18:21:03.702297   27284 main.go:141] libmachine: (ha-293078-m02) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube (perms=drwxr-xr-x)
	I0401 18:21:03.702306   27284 main.go:141] libmachine: (ha-293078-m02) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493 (perms=drwxrwxr-x)
	I0401 18:21:03.702326   27284 main.go:141] libmachine: (ha-293078-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0401 18:21:03.702341   27284 main.go:141] libmachine: (ha-293078-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0401 18:21:03.702352   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube/machines
	I0401 18:21:03.702386   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:21:03.702410   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493
	I0401 18:21:03.702417   27284 main.go:141] libmachine: (ha-293078-m02) Creating domain...
	I0401 18:21:03.702427   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0401 18:21:03.702439   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Checking permissions on dir: /home/jenkins
	I0401 18:21:03.702460   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Checking permissions on dir: /home
	I0401 18:21:03.702478   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Skipping /home - not owner
	I0401 18:21:03.703271   27284 main.go:141] libmachine: (ha-293078-m02) define libvirt domain using xml: 
	I0401 18:21:03.703294   27284 main.go:141] libmachine: (ha-293078-m02) <domain type='kvm'>
	I0401 18:21:03.703305   27284 main.go:141] libmachine: (ha-293078-m02)   <name>ha-293078-m02</name>
	I0401 18:21:03.703317   27284 main.go:141] libmachine: (ha-293078-m02)   <memory unit='MiB'>2200</memory>
	I0401 18:21:03.703329   27284 main.go:141] libmachine: (ha-293078-m02)   <vcpu>2</vcpu>
	I0401 18:21:03.703337   27284 main.go:141] libmachine: (ha-293078-m02)   <features>
	I0401 18:21:03.703342   27284 main.go:141] libmachine: (ha-293078-m02)     <acpi/>
	I0401 18:21:03.703349   27284 main.go:141] libmachine: (ha-293078-m02)     <apic/>
	I0401 18:21:03.703354   27284 main.go:141] libmachine: (ha-293078-m02)     <pae/>
	I0401 18:21:03.703360   27284 main.go:141] libmachine: (ha-293078-m02)     
	I0401 18:21:03.703365   27284 main.go:141] libmachine: (ha-293078-m02)   </features>
	I0401 18:21:03.703376   27284 main.go:141] libmachine: (ha-293078-m02)   <cpu mode='host-passthrough'>
	I0401 18:21:03.703383   27284 main.go:141] libmachine: (ha-293078-m02)   
	I0401 18:21:03.703389   27284 main.go:141] libmachine: (ha-293078-m02)   </cpu>
	I0401 18:21:03.703397   27284 main.go:141] libmachine: (ha-293078-m02)   <os>
	I0401 18:21:03.703406   27284 main.go:141] libmachine: (ha-293078-m02)     <type>hvm</type>
	I0401 18:21:03.703413   27284 main.go:141] libmachine: (ha-293078-m02)     <boot dev='cdrom'/>
	I0401 18:21:03.703418   27284 main.go:141] libmachine: (ha-293078-m02)     <boot dev='hd'/>
	I0401 18:21:03.703427   27284 main.go:141] libmachine: (ha-293078-m02)     <bootmenu enable='no'/>
	I0401 18:21:03.703434   27284 main.go:141] libmachine: (ha-293078-m02)   </os>
	I0401 18:21:03.703439   27284 main.go:141] libmachine: (ha-293078-m02)   <devices>
	I0401 18:21:03.703446   27284 main.go:141] libmachine: (ha-293078-m02)     <disk type='file' device='cdrom'>
	I0401 18:21:03.703454   27284 main.go:141] libmachine: (ha-293078-m02)       <source file='/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/boot2docker.iso'/>
	I0401 18:21:03.703462   27284 main.go:141] libmachine: (ha-293078-m02)       <target dev='hdc' bus='scsi'/>
	I0401 18:21:03.703467   27284 main.go:141] libmachine: (ha-293078-m02)       <readonly/>
	I0401 18:21:03.703474   27284 main.go:141] libmachine: (ha-293078-m02)     </disk>
	I0401 18:21:03.703480   27284 main.go:141] libmachine: (ha-293078-m02)     <disk type='file' device='disk'>
	I0401 18:21:03.703489   27284 main.go:141] libmachine: (ha-293078-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0401 18:21:03.703499   27284 main.go:141] libmachine: (ha-293078-m02)       <source file='/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/ha-293078-m02.rawdisk'/>
	I0401 18:21:03.703506   27284 main.go:141] libmachine: (ha-293078-m02)       <target dev='hda' bus='virtio'/>
	I0401 18:21:03.703511   27284 main.go:141] libmachine: (ha-293078-m02)     </disk>
	I0401 18:21:03.703519   27284 main.go:141] libmachine: (ha-293078-m02)     <interface type='network'>
	I0401 18:21:03.703525   27284 main.go:141] libmachine: (ha-293078-m02)       <source network='mk-ha-293078'/>
	I0401 18:21:03.703542   27284 main.go:141] libmachine: (ha-293078-m02)       <model type='virtio'/>
	I0401 18:21:03.703550   27284 main.go:141] libmachine: (ha-293078-m02)     </interface>
	I0401 18:21:03.703557   27284 main.go:141] libmachine: (ha-293078-m02)     <interface type='network'>
	I0401 18:21:03.703586   27284 main.go:141] libmachine: (ha-293078-m02)       <source network='default'/>
	I0401 18:21:03.703613   27284 main.go:141] libmachine: (ha-293078-m02)       <model type='virtio'/>
	I0401 18:21:03.703637   27284 main.go:141] libmachine: (ha-293078-m02)     </interface>
	I0401 18:21:03.703655   27284 main.go:141] libmachine: (ha-293078-m02)     <serial type='pty'>
	I0401 18:21:03.703668   27284 main.go:141] libmachine: (ha-293078-m02)       <target port='0'/>
	I0401 18:21:03.703675   27284 main.go:141] libmachine: (ha-293078-m02)     </serial>
	I0401 18:21:03.703687   27284 main.go:141] libmachine: (ha-293078-m02)     <console type='pty'>
	I0401 18:21:03.703699   27284 main.go:141] libmachine: (ha-293078-m02)       <target type='serial' port='0'/>
	I0401 18:21:03.703710   27284 main.go:141] libmachine: (ha-293078-m02)     </console>
	I0401 18:21:03.703721   27284 main.go:141] libmachine: (ha-293078-m02)     <rng model='virtio'>
	I0401 18:21:03.703736   27284 main.go:141] libmachine: (ha-293078-m02)       <backend model='random'>/dev/random</backend>
	I0401 18:21:03.703748   27284 main.go:141] libmachine: (ha-293078-m02)     </rng>
	I0401 18:21:03.703758   27284 main.go:141] libmachine: (ha-293078-m02)     
	I0401 18:21:03.703762   27284 main.go:141] libmachine: (ha-293078-m02)     
	I0401 18:21:03.703767   27284 main.go:141] libmachine: (ha-293078-m02)   </devices>
	I0401 18:21:03.703773   27284 main.go:141] libmachine: (ha-293078-m02) </domain>
	I0401 18:21:03.703780   27284 main.go:141] libmachine: (ha-293078-m02) 
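
The <domain> XML above is emitted by the kvm2 driver before it defines the guest through libvirt. A trimmed, self-contained sketch of rendering a similar definition with text/template; the struct and field names below are assumptions for illustration, not the driver's real types:

package main

import (
	"os"
	"text/template"
)

// domainParams is a heavily trimmed stand-in for the driver's domain settings (illustrative).
type domainParams struct {
	Name     string
	MemoryMB int
	CPUs     int
	DiskPath string
	Network  string
}

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <devices>
    <disk type='file' device='disk'>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	t := template.Must(template.New("domain").Parse(domainXML))
	// Values taken from the log above; the disk path is abbreviated.
	_ = t.Execute(os.Stdout, domainParams{
		Name:     "ha-293078-m02",
		MemoryMB: 2200,
		CPUs:     2,
		DiskPath: "/path/to/ha-293078-m02.rawdisk",
		Network:  "mk-ha-293078",
	})
}
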
	I0401 18:21:03.710624   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:a9:04:b3 in network default
	I0401 18:21:03.711193   27284 main.go:141] libmachine: (ha-293078-m02) Ensuring networks are active...
	I0401 18:21:03.711240   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:03.711921   27284 main.go:141] libmachine: (ha-293078-m02) Ensuring network default is active
	I0401 18:21:03.712272   27284 main.go:141] libmachine: (ha-293078-m02) Ensuring network mk-ha-293078 is active
	I0401 18:21:03.712652   27284 main.go:141] libmachine: (ha-293078-m02) Getting domain xml...
	I0401 18:21:03.713321   27284 main.go:141] libmachine: (ha-293078-m02) Creating domain...
	I0401 18:21:04.918039   27284 main.go:141] libmachine: (ha-293078-m02) Waiting to get IP...
	I0401 18:21:04.918782   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:04.919154   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:04.919194   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:04.919136   27619 retry.go:31] will retry after 227.797489ms: waiting for machine to come up
	I0401 18:21:05.149672   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:05.149704   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:05.149752   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:05.149267   27619 retry.go:31] will retry after 256.715132ms: waiting for machine to come up
	I0401 18:21:05.407614   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:05.407989   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:05.408017   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:05.407946   27619 retry.go:31] will retry after 318.976551ms: waiting for machine to come up
	I0401 18:21:05.728528   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:05.728967   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:05.729001   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:05.728928   27619 retry.go:31] will retry after 593.684858ms: waiting for machine to come up
	I0401 18:21:06.324677   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:06.325204   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:06.325228   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:06.325156   27619 retry.go:31] will retry after 725.038622ms: waiting for machine to come up
	I0401 18:21:07.051601   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:07.052129   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:07.052178   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:07.052024   27619 retry.go:31] will retry after 794.779612ms: waiting for machine to come up
	I0401 18:21:07.847869   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:07.848306   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:07.848336   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:07.848259   27619 retry.go:31] will retry after 905.868947ms: waiting for machine to come up
	I0401 18:21:08.755840   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:08.756291   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:08.756320   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:08.756244   27619 retry.go:31] will retry after 1.176905759s: waiting for machine to come up
	I0401 18:21:09.934471   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:09.934892   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:09.934917   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:09.934863   27619 retry.go:31] will retry after 1.546450636s: waiting for machine to come up
	I0401 18:21:11.483188   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:11.483679   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:11.483711   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:11.483608   27619 retry.go:31] will retry after 1.88382657s: waiting for machine to come up
	I0401 18:21:13.369758   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:13.370228   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:13.370280   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:13.370209   27619 retry.go:31] will retry after 2.400689416s: waiting for machine to come up
	I0401 18:21:15.774266   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:15.774725   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:15.774760   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:15.774698   27619 retry.go:31] will retry after 2.684241486s: waiting for machine to come up
	I0401 18:21:18.460365   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:18.460851   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:18.460881   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:18.460803   27619 retry.go:31] will retry after 3.608105612s: waiting for machine to come up
	I0401 18:21:22.070078   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:22.070551   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:22.070568   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:22.070518   27619 retry.go:31] will retry after 4.235958126s: waiting for machine to come up
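
The retry.go lines above show the driver polling the libvirt DHCP leases with a growing, jittered delay (from roughly 230ms up to about 4.2s) until the new domain reports an address. A minimal sketch of that wait-with-backoff loop; lookupIP is a stand-in for the real lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt DHCP leases (illustrative:
// it "succeeds" only after a few attempts to exercise the retry path).
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.39.161", nil
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 0; attempt < 15; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Grow the delay and add jitter, roughly matching the progression in the log.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	fmt.Println("timed out waiting for machine IP")
}
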
	I0401 18:21:26.307669   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:26.308161   27284 main.go:141] libmachine: (ha-293078-m02) Found IP for machine: 192.168.39.161
	I0401 18:21:26.308188   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has current primary IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:26.308198   27284 main.go:141] libmachine: (ha-293078-m02) Reserving static IP address...
	I0401 18:21:26.308719   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find host DHCP lease matching {name: "ha-293078-m02", mac: "52:54:00:25:7f:87", ip: "192.168.39.161"} in network mk-ha-293078
	I0401 18:21:26.379934   27284 main.go:141] libmachine: (ha-293078-m02) Reserved static IP address: 192.168.39.161
	I0401 18:21:26.379960   27284 main.go:141] libmachine: (ha-293078-m02) Waiting for SSH to be available...
	I0401 18:21:26.379986   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Getting to WaitForSSH function...
	I0401 18:21:26.382918   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:26.383348   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:minikube Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:26.383380   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:26.383533   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Using SSH client type: external
	I0401 18:21:26.383560   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/id_rsa (-rw-------)
	I0401 18:21:26.383591   27284 main.go:141] libmachine: (ha-293078-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.161 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 18:21:26.383605   27284 main.go:141] libmachine: (ha-293078-m02) DBG | About to run SSH command:
	I0401 18:21:26.383626   27284 main.go:141] libmachine: (ha-293078-m02) DBG | exit 0
	I0401 18:21:26.510059   27284 main.go:141] libmachine: (ha-293078-m02) DBG | SSH cmd err, output: <nil>: 
	I0401 18:21:26.510348   27284 main.go:141] libmachine: (ha-293078-m02) KVM machine creation complete!
	I0401 18:21:26.510788   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetConfigRaw
	I0401 18:21:26.511324   27284 main.go:141] libmachine: (ha-293078-m02) Calling .DriverName
	I0401 18:21:26.511511   27284 main.go:141] libmachine: (ha-293078-m02) Calling .DriverName
	I0401 18:21:26.511702   27284 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0401 18:21:26.511715   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetState
	I0401 18:21:26.512934   27284 main.go:141] libmachine: Detecting operating system of created instance...
	I0401 18:21:26.512950   27284 main.go:141] libmachine: Waiting for SSH to be available...
	I0401 18:21:26.512958   27284 main.go:141] libmachine: Getting to WaitForSSH function...
	I0401 18:21:26.512967   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:21:26.515154   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:26.515518   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:26.515554   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:26.515686   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:21:26.515860   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:26.516022   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:26.516149   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:21:26.516281   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:21:26.516455   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0401 18:21:26.516466   27284 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0401 18:21:26.625274   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 18:21:26.625297   27284 main.go:141] libmachine: Detecting the provisioner...
	I0401 18:21:26.625307   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:21:26.628826   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:26.629252   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:26.629276   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:26.629444   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:21:26.629693   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:26.629965   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:26.630129   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:21:26.630341   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:21:26.630510   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0401 18:21:26.630525   27284 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0401 18:21:26.743612   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0401 18:21:26.743747   27284 main.go:141] libmachine: found compatible host: buildroot
	I0401 18:21:26.743777   27284 main.go:141] libmachine: Provisioning with buildroot...
	I0401 18:21:26.743793   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetMachineName
	I0401 18:21:26.744087   27284 buildroot.go:166] provisioning hostname "ha-293078-m02"
	I0401 18:21:26.744121   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetMachineName
	I0401 18:21:26.744371   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:21:26.747234   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:26.747650   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:26.747674   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:26.747826   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:21:26.747980   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:26.748133   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:26.748296   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:21:26.748480   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:21:26.748678   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0401 18:21:26.748691   27284 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-293078-m02 && echo "ha-293078-m02" | sudo tee /etc/hostname
	I0401 18:21:26.873937   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-293078-m02
	
	I0401 18:21:26.873966   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:21:26.876644   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:26.877003   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:26.877032   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:26.877242   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:21:26.877438   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:26.877682   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:26.877833   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:21:26.878018   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:21:26.878233   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0401 18:21:26.878260   27284 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-293078-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-293078-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-293078-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 18:21:26.996402   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 18:21:26.996428   27284 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 18:21:26.996460   27284 buildroot.go:174] setting up certificates
	I0401 18:21:26.996472   27284 provision.go:84] configureAuth start
	I0401 18:21:26.996482   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetMachineName
	I0401 18:21:26.996761   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetIP
	I0401 18:21:26.999638   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.000033   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:27.000064   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.000191   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:21:27.003607   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.004035   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:27.004057   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.004217   27284 provision.go:143] copyHostCerts
	I0401 18:21:27.004270   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 18:21:27.004311   27284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 18:21:27.004319   27284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 18:21:27.004399   27284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 18:21:27.004497   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 18:21:27.004529   27284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 18:21:27.004536   27284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 18:21:27.004578   27284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 18:21:27.004661   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 18:21:27.004684   27284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 18:21:27.004693   27284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 18:21:27.004743   27284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 18:21:27.004807   27284 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.ha-293078-m02 san=[127.0.0.1 192.168.39.161 ha-293078-m02 localhost minikube]
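
provision.go mints a per-machine server certificate whose SANs cover the loopback address, the guest IP, and the hostnames listed above, signed by the minikube CA. A self-contained sketch with crypto/x509; it self-signs for brevity, whereas the real flow signs with the CA key, and the 26280h lifetime mirrors the CertExpiration value in the cluster config:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-293078-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as listed in the provision.go line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.161")},
		DNSNames:    []string{"ha-293078-m02", "localhost", "minikube"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
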
	I0401 18:21:27.204268   27284 provision.go:177] copyRemoteCerts
	I0401 18:21:27.204319   27284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 18:21:27.204339   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:21:27.206890   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.207315   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:27.207342   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.207549   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:21:27.207738   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:27.207934   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:21:27.208135   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/id_rsa Username:docker}
	I0401 18:21:27.294958   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0401 18:21:27.295023   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 18:21:27.321972   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0401 18:21:27.322025   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0401 18:21:27.348642   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0401 18:21:27.348716   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 18:21:27.379204   27284 provision.go:87] duration metric: took 382.719932ms to configureAuth
	I0401 18:21:27.379229   27284 buildroot.go:189] setting minikube options for container-runtime
	I0401 18:21:27.379439   27284 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:21:27.379528   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:21:27.382418   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.382761   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:27.382780   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.383016   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:21:27.383211   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:27.383423   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:27.383621   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:21:27.383790   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:21:27.383984   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0401 18:21:27.384008   27284 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 18:21:27.676307   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
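
The literal "%!s(MISSING)" in the command above is not guest-side corruption: it is how Go's fmt package renders a %s verb that reaches a Printf-style call without a matching argument, which is most likely what happened to the intended `printf %s` when the command template was formatted. A two-line illustration:

package main

import "fmt"

func main() {
	format := "sudo mkdir -p /etc/sysconfig && printf %s\n"
	// With no argument supplied for the %s verb, fmt renders it as %!s(MISSING).
	fmt.Printf(format)
}
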
	I0401 18:21:27.676358   27284 main.go:141] libmachine: Checking connection to Docker...
	I0401 18:21:27.676371   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetURL
	I0401 18:21:27.677756   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Using libvirt version 6000000
	I0401 18:21:27.679933   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.680321   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:27.680348   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.680486   27284 main.go:141] libmachine: Docker is up and running!
	I0401 18:21:27.680501   27284 main.go:141] libmachine: Reticulating splines...
	I0401 18:21:27.680509   27284 client.go:171] duration metric: took 24.291073713s to LocalClient.Create
	I0401 18:21:27.680531   27284 start.go:167] duration metric: took 24.291136909s to libmachine.API.Create "ha-293078"
	I0401 18:21:27.680541   27284 start.go:293] postStartSetup for "ha-293078-m02" (driver="kvm2")
	I0401 18:21:27.680550   27284 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 18:21:27.680560   27284 main.go:141] libmachine: (ha-293078-m02) Calling .DriverName
	I0401 18:21:27.680816   27284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 18:21:27.680838   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:21:27.682693   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.683017   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:27.683043   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.683188   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:21:27.683350   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:27.683526   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:21:27.683714   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/id_rsa Username:docker}
	I0401 18:21:27.771858   27284 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 18:21:27.776684   27284 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 18:21:27.776703   27284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 18:21:27.776776   27284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 18:21:27.776861   27284 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 18:21:27.776874   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> /etc/ssl/certs/177512.pem
	I0401 18:21:27.776970   27284 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 18:21:27.788156   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 18:21:27.815024   27284 start.go:296] duration metric: took 134.472512ms for postStartSetup
	I0401 18:21:27.815069   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetConfigRaw
	I0401 18:21:27.815610   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetIP
	I0401 18:21:27.818358   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.818716   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:27.818744   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.818962   27284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/config.json ...
	I0401 18:21:27.819126   27284 start.go:128] duration metric: took 24.447591421s to createHost
	I0401 18:21:27.819147   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:21:27.821482   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.821833   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:27.821861   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.822014   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:21:27.822205   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:27.822399   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:27.822542   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:21:27.822720   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:21:27.822910   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0401 18:21:27.822928   27284 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 18:21:27.930959   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711995687.901340486
	
	I0401 18:21:27.930981   27284 fix.go:216] guest clock: 1711995687.901340486
	I0401 18:21:27.930988   27284 fix.go:229] Guest: 2024-04-01 18:21:27.901340486 +0000 UTC Remote: 2024-04-01 18:21:27.819137286 +0000 UTC m=+79.694131970 (delta=82.2032ms)
	I0401 18:21:27.931002   27284 fix.go:200] guest clock delta is within tolerance: 82.2032ms
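
fix.go compares the guest clock (read over SSH with date +%s.%N) against the host clock and only forces a resync when the delta exceeds a tolerance; the 82.2ms measured here is within bounds. A minimal sketch of that comparison; the 2s tolerance is an assumption for illustration:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest/host clock skew is acceptable.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Timestamps taken from the fix.go lines above.
	host := time.Date(2024, time.April, 1, 18, 21, 27, 819137286, time.UTC)
	guest := time.Date(2024, time.April, 1, 18, 21, 27, 901340486, time.UTC)
	delta, ok := withinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // delta=82.2032ms within tolerance=true
}
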
	I0401 18:21:27.931007   27284 start.go:83] releasing machines lock for "ha-293078-m02", held for 24.559557046s
	I0401 18:21:27.931026   27284 main.go:141] libmachine: (ha-293078-m02) Calling .DriverName
	I0401 18:21:27.931329   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetIP
	I0401 18:21:27.933913   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.934296   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:27.934328   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.937056   27284 out.go:177] * Found network options:
	I0401 18:21:27.938623   27284 out.go:177]   - NO_PROXY=192.168.39.74
	W0401 18:21:27.940013   27284 proxy.go:119] fail to check proxy env: Error ip not in block
	I0401 18:21:27.940053   27284 main.go:141] libmachine: (ha-293078-m02) Calling .DriverName
	I0401 18:21:27.940565   27284 main.go:141] libmachine: (ha-293078-m02) Calling .DriverName
	I0401 18:21:27.940750   27284 main.go:141] libmachine: (ha-293078-m02) Calling .DriverName
	I0401 18:21:27.940881   27284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 18:21:27.940918   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	W0401 18:21:27.940943   27284 proxy.go:119] fail to check proxy env: Error ip not in block
	I0401 18:21:27.941032   27284 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 18:21:27.941053   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:21:27.943773   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.944165   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:27.944197   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.944228   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.944304   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:21:27.944466   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:27.944626   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:21:27.944724   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:27.944746   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.944755   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/id_rsa Username:docker}
	I0401 18:21:27.944925   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:21:27.945077   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:27.945233   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:21:27.945365   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/id_rsa Username:docker}
	I0401 18:21:28.200403   27284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 18:21:28.207254   27284 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 18:21:28.207305   27284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 18:21:28.225468   27284 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 18:21:28.225495   27284 start.go:494] detecting cgroup driver to use...
	I0401 18:21:28.225560   27284 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 18:21:28.243950   27284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 18:21:28.259036   27284 docker.go:217] disabling cri-docker service (if available) ...
	I0401 18:21:28.259091   27284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 18:21:28.275329   27284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 18:21:28.293101   27284 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 18:21:28.429784   27284 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 18:21:28.565922   27284 docker.go:233] disabling docker service ...
	I0401 18:21:28.565979   27284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 18:21:28.582906   27284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 18:21:28.597090   27284 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 18:21:28.735892   27284 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 18:21:28.857513   27284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 18:21:28.873206   27284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 18:21:28.893313   27284 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 18:21:28.893378   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:21:28.905459   27284 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 18:21:28.905506   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:21:28.917308   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:21:28.928964   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:21:28.940983   27284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 18:21:28.953029   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:21:28.964924   27284 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:21:28.983890   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:21:28.995693   27284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 18:21:29.006566   27284 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 18:21:29.006619   27284 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 18:21:29.021111   27284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 18:21:29.032582   27284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:21:29.155407   27284 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 18:21:29.315079   27284 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 18:21:29.315175   27284 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 18:21:29.320619   27284 start.go:562] Will wait 60s for crictl version
	I0401 18:21:29.320677   27284 ssh_runner.go:195] Run: which crictl
	I0401 18:21:29.325296   27284 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 18:21:29.366380   27284 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 18:21:29.366512   27284 ssh_runner.go:195] Run: crio --version
	I0401 18:21:29.397051   27284 ssh_runner.go:195] Run: crio --version
	I0401 18:21:29.434216   27284 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 18:21:29.435828   27284 out.go:177]   - env NO_PROXY=192.168.39.74
	I0401 18:21:29.437067   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetIP
	I0401 18:21:29.439778   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:29.440175   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:29.440199   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:29.440477   27284 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 18:21:29.445003   27284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 18:21:29.459388   27284 mustload.go:65] Loading cluster: ha-293078
	I0401 18:21:29.459600   27284 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:21:29.459883   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:21:29.459917   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:21:29.474595   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42923
	I0401 18:21:29.475076   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:21:29.475614   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:21:29.475641   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:21:29.475959   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:21:29.476112   27284 main.go:141] libmachine: (ha-293078) Calling .GetState
	I0401 18:21:29.477531   27284 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:21:29.477979   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:21:29.478024   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:21:29.492417   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34683
	I0401 18:21:29.492773   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:21:29.493228   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:21:29.493248   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:21:29.493565   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:21:29.493762   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:21:29.493915   27284 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078 for IP: 192.168.39.161
	I0401 18:21:29.493927   27284 certs.go:194] generating shared ca certs ...
	I0401 18:21:29.493945   27284 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:21:29.494073   27284 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 18:21:29.494125   27284 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 18:21:29.494138   27284 certs.go:256] generating profile certs ...
	I0401 18:21:29.494228   27284 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.key
	I0401 18:21:29.494256   27284 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.f03fd53c
	I0401 18:21:29.494273   27284 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.f03fd53c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.74 192.168.39.161 192.168.39.254]
	I0401 18:21:29.870971   27284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.f03fd53c ...
	I0401 18:21:29.871001   27284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.f03fd53c: {Name:mke372832f38ab7a4216acc7c1af71be3e4ec4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:21:29.871164   27284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.f03fd53c ...
	I0401 18:21:29.871182   27284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.f03fd53c: {Name:mk8efe869f9d01338acd73cedfd8d1cae8bb0860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:21:29.871254   27284 certs.go:381] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.f03fd53c -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt
	I0401 18:21:29.871373   27284 certs.go:385] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.f03fd53c -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key
	I0401 18:21:29.871541   27284 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key
	I0401 18:21:29.871563   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0401 18:21:29.871583   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0401 18:21:29.871602   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0401 18:21:29.871620   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0401 18:21:29.871639   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0401 18:21:29.871658   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0401 18:21:29.871674   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0401 18:21:29.871691   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0401 18:21:29.871765   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 18:21:29.871814   27284 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 18:21:29.871827   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 18:21:29.871870   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 18:21:29.871898   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 18:21:29.871925   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 18:21:29.871967   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 18:21:29.871996   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem -> /usr/share/ca-certificates/17751.pem
	I0401 18:21:29.872006   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> /usr/share/ca-certificates/177512.pem
	I0401 18:21:29.872015   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:21:29.872044   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:21:29.875227   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:21:29.875638   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:21:29.875668   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:21:29.875816   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:21:29.875993   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:21:29.876159   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:21:29.876283   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:21:29.953963   27284 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0401 18:21:29.959680   27284 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0401 18:21:29.976628   27284 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0401 18:21:29.981919   27284 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0401 18:21:29.995078   27284 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0401 18:21:29.999828   27284 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0401 18:21:30.013735   27284 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0401 18:21:30.018722   27284 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0401 18:21:30.039963   27284 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0401 18:21:30.044688   27284 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0401 18:21:30.064396   27284 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0401 18:21:30.070169   27284 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0401 18:21:30.082483   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 18:21:30.109594   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 18:21:30.135600   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 18:21:30.161192   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 18:21:30.187105   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0401 18:21:30.213458   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 18:21:30.239352   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 18:21:30.265197   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 18:21:30.291678   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 18:21:30.318717   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 18:21:30.344296   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 18:21:30.369529   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0401 18:21:30.387557   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0401 18:21:30.405503   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0401 18:21:30.424468   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0401 18:21:30.443647   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0401 18:21:30.465042   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0401 18:21:30.484989   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
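Each certificate above is handled with the same pattern: stat the path on the new node first, skip it when the file is already present, and transfer it only when the existence check fails. A hypothetical Go sketch of that check-then-copy idea, using plain ssh/scp commands rather than minikube's ssh_runner; the host string and paths are illustrative values only:

package main

import (
	"fmt"
	"os/exec"
)

// ensureRemoteFile copies src to dst on the remote host unless dst already
// exists. host is an SSH destination such as "docker@192.168.39.161".
func ensureRemoteFile(host, src, dst string) error {
	// Existence check; a non-zero exit status means the file is missing.
	if err := exec.Command("ssh", host, "stat", dst).Run(); err == nil {
		fmt.Printf("skipping %s: already present\n", dst)
		return nil
	}
	fmt.Printf("copying %s -> %s:%s\n", src, host, dst)
	return exec.Command("scp", src, host+":"+dst).Run()
}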
	I0401 18:21:30.504775   27284 ssh_runner.go:195] Run: openssl version
	I0401 18:21:30.511640   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 18:21:30.525146   27284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 18:21:30.530456   27284 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 18:21:30.530514   27284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 18:21:30.537052   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 18:21:30.550061   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 18:21:30.563523   27284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 18:21:30.568383   27284 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 18:21:30.568430   27284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 18:21:30.574495   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 18:21:30.586895   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 18:21:30.599226   27284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:21:30.604131   27284 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:21:30.604187   27284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:21:30.610966   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 18:21:30.623311   27284 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 18:21:30.627965   27284 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 18:21:30.628011   27284 kubeadm.go:928] updating node {m02 192.168.39.161 8443 v1.29.3 crio true true} ...
	I0401 18:21:30.628083   27284 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-293078-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 18:21:30.628107   27284 kube-vip.go:111] generating kube-vip config ...
	I0401 18:21:30.628139   27284 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0401 18:21:30.647325   27284 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0401 18:21:30.647833   27284 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0401 18:21:30.647928   27284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 18:21:30.660414   27284 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0401 18:21:30.660467   27284 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0401 18:21:30.671923   27284 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0401 18:21:30.671943   27284 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubelet
	I0401 18:21:30.671955   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0401 18:21:30.671961   27284 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubeadm
	I0401 18:21:30.672032   27284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0401 18:21:30.677820   27284 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0401 18:21:30.677851   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0401 18:21:31.695308   27284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:21:31.710741   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0401 18:21:31.710854   27284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0401 18:21:31.715776   27284 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0401 18:21:31.715803   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
	I0401 18:21:34.245735   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0401 18:21:34.245806   27284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0401 18:21:34.251520   27284 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0401 18:21:34.251548   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0401 18:21:34.507577   27284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0401 18:21:34.517903   27284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0401 18:21:34.536870   27284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 18:21:34.556070   27284 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0401 18:21:34.574843   27284 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0401 18:21:34.579428   27284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 18:21:34.593517   27284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:21:34.731800   27284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 18:21:34.753012   27284 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:21:34.753348   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:21:34.753395   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:21:34.772501   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44899
	I0401 18:21:34.772965   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:21:34.773483   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:21:34.773510   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:21:34.773865   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:21:34.774108   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:21:34.774255   27284 start.go:316] joinCluster: &{Name:ha-293078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:21:34.774371   27284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0401 18:21:34.774395   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:21:34.777425   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:21:34.777860   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:21:34.777888   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:21:34.778025   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:21:34.778182   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:21:34.778322   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:21:34.778488   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:21:34.965606   27284 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 18:21:34.965663   27284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ofhm0a.qkw6l4ee4v53jhnf --discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-293078-m02 --control-plane --apiserver-advertise-address=192.168.39.161 --apiserver-bind-port=8443"
	I0401 18:21:59.525215   27284 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ofhm0a.qkw6l4ee4v53jhnf --discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-293078-m02 --control-plane --apiserver-advertise-address=192.168.39.161 --apiserver-bind-port=8443": (24.559527937s)
	I0401 18:21:59.525250   27284 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0401 18:22:00.071046   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-293078-m02 minikube.k8s.io/updated_at=2024_04_01T18_22_00_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=ha-293078 minikube.k8s.io/primary=false
	I0401 18:22:00.282010   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-293078-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0401 18:22:00.453543   27284 start.go:318] duration metric: took 25.679284795s to joinCluster
	I0401 18:22:00.453612   27284 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 18:22:00.455031   27284 out.go:177] * Verifying Kubernetes components...
	I0401 18:22:00.453905   27284 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:22:00.456414   27284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:22:00.728746   27284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 18:22:00.777673   27284 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 18:22:00.777997   27284 kapi.go:59] client config for ha-293078: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.crt", KeyFile:"/home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.key", CAFile:"/home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5ca00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0401 18:22:00.778058   27284 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.74:8443
	I0401 18:22:00.778281   27284 node_ready.go:35] waiting up to 6m0s for node "ha-293078-m02" to be "Ready" ...
	I0401 18:22:00.778364   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:00.778374   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:00.778384   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:00.778393   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:00.789042   27284 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0401 18:22:01.279023   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:01.279052   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:01.279065   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:01.279073   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:01.307534   27284 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0401 18:22:01.779315   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:01.779344   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:01.779356   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:01.779363   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:01.783215   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:02.278842   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:02.278862   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:02.278873   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:02.278878   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:02.284787   27284 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 18:22:02.778873   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:02.778897   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:02.778909   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:02.778917   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:02.782128   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:02.783305   27284 node_ready.go:53] node "ha-293078-m02" has status "Ready":"False"
	I0401 18:22:03.279117   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:03.279135   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:03.279143   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:03.279147   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:03.283600   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:03.778553   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:03.778574   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:03.778583   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:03.778587   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:03.884749   27284 round_trippers.go:574] Response Status: 200 OK in 106 milliseconds
	I0401 18:22:04.278810   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:04.278829   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:04.278838   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:04.278843   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:04.282332   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:04.778503   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:04.778524   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:04.778533   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:04.778538   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:04.783222   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:04.783797   27284 node_ready.go:49] node "ha-293078-m02" has status "Ready":"True"
	I0401 18:22:04.783830   27284 node_ready.go:38] duration metric: took 4.005519623s for node "ha-293078-m02" to be "Ready" ...
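The repeated GET requests above are a simple poll: the node object is re-read roughly every half second until its Ready condition reports "True" or the 6-minute budget runs out. A hypothetical Go sketch of that loop, where checkReady stands in for the API call and is an assumption, not minikube's actual node_ready.go code:

package main

import (
	"errors"
	"time"
)

// waitForNodeReady polls checkReady until it returns true, an error occurs,
// or the timeout elapses.
func waitForNodeReady(checkReady func() (bool, error), timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ready, err := checkReady()
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing of the GETs above
	}
	return errors.New("timed out waiting for node to become Ready")
}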
	I0401 18:22:04.783842   27284 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 18:22:04.783912   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods
	I0401 18:22:04.783944   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:04.783954   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:04.783960   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:04.791208   27284 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 18:22:04.798444   27284 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-8v456" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:04.798536   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-8v456
	I0401 18:22:04.798547   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:04.798557   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:04.798566   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:04.802242   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:04.802982   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:22:04.802999   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:04.803005   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:04.803009   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:04.806438   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:04.807430   27284 pod_ready.go:92] pod "coredns-76f75df574-8v456" in "kube-system" namespace has status "Ready":"True"
	I0401 18:22:04.807444   27284 pod_ready.go:81] duration metric: took 8.97802ms for pod "coredns-76f75df574-8v456" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:04.807452   27284 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-sqxnb" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:04.807504   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-sqxnb
	I0401 18:22:04.807513   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:04.807520   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:04.807523   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:04.811176   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:04.812196   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:22:04.812212   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:04.812239   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:04.812252   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:04.816308   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:04.817575   27284 pod_ready.go:92] pod "coredns-76f75df574-sqxnb" in "kube-system" namespace has status "Ready":"True"
	I0401 18:22:04.817588   27284 pod_ready.go:81] duration metric: took 10.130855ms for pod "coredns-76f75df574-sqxnb" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:04.817596   27284 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:04.817632   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078
	I0401 18:22:04.817640   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:04.817669   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:04.817679   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:04.821221   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:04.822718   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:22:04.822740   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:04.822750   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:04.822755   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:04.825701   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:04.826315   27284 pod_ready.go:92] pod "etcd-ha-293078" in "kube-system" namespace has status "Ready":"True"
	I0401 18:22:04.826328   27284 pod_ready.go:81] duration metric: took 8.726774ms for pod "etcd-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:04.826335   27284 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:04.826387   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:04.826399   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:04.826405   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:04.826410   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:04.829258   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:04.829923   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:04.829939   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:04.829949   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:04.829956   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:04.832707   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:05.326706   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:05.326736   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:05.326756   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:05.326766   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:05.334458   27284 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 18:22:05.335602   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:05.335615   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:05.335622   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:05.335626   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:05.338260   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:05.827277   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:05.827298   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:05.827306   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:05.827310   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:05.831742   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:05.832715   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:05.832732   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:05.832744   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:05.832752   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:05.837361   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:06.327168   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:06.327192   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:06.327202   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:06.327207   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:06.336534   27284 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0401 18:22:06.337580   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:06.337593   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:06.337600   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:06.337603   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:06.340686   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:06.826711   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:06.826736   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:06.826749   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:06.826756   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:06.830585   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:06.831349   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:06.831364   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:06.831371   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:06.831374   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:06.834217   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:06.835019   27284 pod_ready.go:102] pod "etcd-ha-293078-m02" in "kube-system" namespace has status "Ready":"False"
	I0401 18:22:07.327487   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:07.327506   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:07.327514   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:07.327517   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:07.331111   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:07.331797   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:07.331813   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:07.331826   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:07.331831   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:07.334667   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:07.826898   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:07.826920   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:07.826932   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:07.826938   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:07.830645   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:07.831546   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:07.831560   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:07.831566   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:07.831570   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:07.834451   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:08.327484   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:08.327507   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:08.327519   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:08.327525   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:08.330783   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:08.331544   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:08.331561   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:08.331571   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:08.331576   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:08.334362   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:08.826689   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:08.826715   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:08.826726   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:08.826731   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:08.830433   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:08.831292   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:08.831305   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:08.831312   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:08.831316   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:08.834778   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:08.835556   27284 pod_ready.go:102] pod "etcd-ha-293078-m02" in "kube-system" namespace has status "Ready":"False"
	I0401 18:22:09.326898   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:09.326920   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:09.326926   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:09.326931   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:09.331521   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:09.332596   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:09.332613   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:09.332624   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:09.332631   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:09.335403   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:09.827031   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:09.827051   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:09.827059   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:09.827063   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:09.830688   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:09.831534   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:09.831550   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:09.831558   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:09.831564   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:09.834158   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:10.327268   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:10.327299   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:10.327313   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:10.327319   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:10.331738   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:10.332555   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:10.332568   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:10.332575   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:10.332582   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:10.335800   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:10.826731   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:10.826756   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:10.826762   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:10.826767   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:10.830932   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:10.832090   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:10.832106   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:10.832116   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:10.832121   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:10.835606   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:10.836626   27284 pod_ready.go:102] pod "etcd-ha-293078-m02" in "kube-system" namespace has status "Ready":"False"
	I0401 18:22:11.327167   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:11.327188   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:11.327196   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:11.327200   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:11.331583   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:11.332547   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:11.332565   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:11.332572   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:11.332576   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:11.336818   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:11.826902   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:11.826922   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:11.826930   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:11.826933   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:11.830821   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:11.831489   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:11.831503   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:11.831511   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:11.831518   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:11.834772   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:12.327209   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:12.327229   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:12.327237   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:12.327242   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:12.330514   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:12.331380   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:12.331401   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:12.331411   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:12.331417   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:12.333986   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:12.826955   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:12.826990   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:12.827002   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:12.827009   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:12.831259   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:12.832243   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:12.832258   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:12.832266   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:12.832270   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:12.835430   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:12.836219   27284 pod_ready.go:92] pod "etcd-ha-293078-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 18:22:12.836240   27284 pod_ready.go:81] duration metric: took 8.009896637s for pod "etcd-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:12.836253   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:12.836299   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-293078
	I0401 18:22:12.836307   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:12.836314   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:12.836318   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:12.839517   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:12.840372   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:22:12.840386   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:12.840394   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:12.840398   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:12.843655   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:12.844210   27284 pod_ready.go:92] pod "kube-apiserver-ha-293078" in "kube-system" namespace has status "Ready":"True"
	I0401 18:22:12.844226   27284 pod_ready.go:81] duration metric: took 7.966941ms for pod "kube-apiserver-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:12.844235   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:12.844277   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-293078-m02
	I0401 18:22:12.844285   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:12.844292   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:12.844296   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:12.847446   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:12.848196   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:12.848209   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:12.848214   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:12.848217   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:12.851037   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:12.851650   27284 pod_ready.go:92] pod "kube-apiserver-ha-293078-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 18:22:12.851669   27284 pod_ready.go:81] duration metric: took 7.426737ms for pod "kube-apiserver-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:12.851690   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:12.851748   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078
	I0401 18:22:12.851759   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:12.851769   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:12.851778   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:12.855044   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:12.855667   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:22:12.855682   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:12.855689   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:12.855692   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:12.858465   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:12.858957   27284 pod_ready.go:92] pod "kube-controller-manager-ha-293078" in "kube-system" namespace has status "Ready":"True"
	I0401 18:22:12.858973   27284 pod_ready.go:81] duration metric: took 7.275432ms for pod "kube-controller-manager-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:12.858982   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:12.859022   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078-m02
	I0401 18:22:12.859031   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:12.859038   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:12.859042   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:12.861085   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:12.862016   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:12.862031   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:12.862039   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:12.862043   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:12.864616   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:12.865108   27284 pod_ready.go:92] pod "kube-controller-manager-ha-293078-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 18:22:12.865126   27284 pod_ready.go:81] duration metric: took 6.138033ms for pod "kube-controller-manager-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:12.865134   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8s2xk" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:13.027544   27284 request.go:629] Waited for 162.351985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8s2xk
	I0401 18:22:13.027629   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8s2xk
	I0401 18:22:13.027636   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:13.027643   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:13.027650   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:13.031974   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:13.227010   27284 request.go:629] Waited for 194.019992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:13.227094   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:13.227100   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:13.227107   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:13.227112   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:13.231013   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:13.231701   27284 pod_ready.go:92] pod "kube-proxy-8s2xk" in "kube-system" namespace has status "Ready":"True"
	I0401 18:22:13.231718   27284 pod_ready.go:81] duration metric: took 366.578291ms for pod "kube-proxy-8s2xk" in "kube-system" namespace to be "Ready" ...
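The "Waited for ... due to client-side throttling" lines above come from client-go's own rate limiter, not from server-side API Priority and Fairness. As a rough illustration (the kubeconfig path and the QPS/Burst numbers below are placeholders, not minikube's settings), a Go client controls that throttling through the rest.Config fields:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}

	// client-go delays requests once they exceed QPS/Burst and logs the
	// "Waited for ... due to client-side throttling" messages seen above.
	// Raising the values reduces those waits.
	config.QPS = 50
	config.Burst = 100

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Println(clientset != nil)
}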
	I0401 18:22:13.231727   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l5q2p" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:13.427765   27284 request.go:629] Waited for 195.984154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l5q2p
	I0401 18:22:13.427877   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l5q2p
	I0401 18:22:13.427887   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:13.427895   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:13.427902   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:13.431624   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:13.627544   27284 request.go:629] Waited for 195.356488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:22:13.627593   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:22:13.627598   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:13.627605   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:13.627609   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:13.631300   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:13.631993   27284 pod_ready.go:92] pod "kube-proxy-l5q2p" in "kube-system" namespace has status "Ready":"True"
	I0401 18:22:13.632008   27284 pod_ready.go:81] duration metric: took 400.275723ms for pod "kube-proxy-l5q2p" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:13.632017   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:13.827038   27284 request.go:629] Waited for 194.954892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-293078
	I0401 18:22:13.827120   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-293078
	I0401 18:22:13.827132   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:13.827140   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:13.827143   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:13.830031   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:14.026947   27284 request.go:629] Waited for 196.301082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:22:14.027054   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:22:14.027066   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:14.027076   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:14.027085   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:14.030795   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:14.031457   27284 pod_ready.go:92] pod "kube-scheduler-ha-293078" in "kube-system" namespace has status "Ready":"True"
	I0401 18:22:14.031475   27284 pod_ready.go:81] duration metric: took 399.452485ms for pod "kube-scheduler-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:14.031486   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:14.227580   27284 request.go:629] Waited for 196.015009ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-293078-m02
	I0401 18:22:14.227630   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-293078-m02
	I0401 18:22:14.227635   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:14.227643   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:14.227647   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:14.231946   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:14.427080   27284 request.go:629] Waited for 194.287738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:14.427191   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:14.427203   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:14.427215   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:14.427224   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:14.431562   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:14.432377   27284 pod_ready.go:92] pod "kube-scheduler-ha-293078-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 18:22:14.432395   27284 pod_ready.go:81] duration metric: took 400.902592ms for pod "kube-scheduler-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:14.432406   27284 pod_ready.go:38] duration metric: took 9.648548345s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
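The pod_ready entries above show minikube repeatedly GETting each system pod (and its node) about twice a second until the pod's Ready condition turns True. A minimal sketch of that polling pattern with client-go follows; the function name, timeout handling, and poll interval are illustrative, not minikube's actual helper:

package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPodReady polls a pod until its Ready condition is True or the
// timeout expires, mirroring the GET-and-check loop in the log above.
func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between polls
	}
}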
	I0401 18:22:14.432418   27284 api_server.go:52] waiting for apiserver process to appear ...
	I0401 18:22:14.432473   27284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:22:14.453096   27284 api_server.go:72] duration metric: took 13.999441922s to wait for apiserver process to appear ...
	I0401 18:22:14.453126   27284 api_server.go:88] waiting for apiserver healthz status ...
	I0401 18:22:14.453147   27284 api_server.go:253] Checking apiserver healthz at https://192.168.39.74:8443/healthz ...
	I0401 18:22:14.458725   27284 api_server.go:279] https://192.168.39.74:8443/healthz returned 200:
	ok
	I0401 18:22:14.458792   27284 round_trippers.go:463] GET https://192.168.39.74:8443/version
	I0401 18:22:14.458803   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:14.458810   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:14.458814   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:14.459980   27284 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0401 18:22:14.460095   27284 api_server.go:141] control plane version: v1.29.3
	I0401 18:22:14.460117   27284 api_server.go:131] duration metric: took 6.983041ms to wait for apiserver health ...
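The healthz and version probes above are plain HTTPS GETs against the apiserver endpoint. A stripped-down equivalent is sketched below; it skips certificate verification purely for illustration, whereas the real check authenticates with the cluster's CA and client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log; adjust for your cluster.
	url := "https://192.168.39.74:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// For illustration only: minikube uses the cluster CA and
			// client certs instead of skipping verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	resp, err := client.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body) // expect 200 and "ok"
}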
	I0401 18:22:14.460124   27284 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 18:22:14.627204   27284 request.go:629] Waited for 167.017662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods
	I0401 18:22:14.627251   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods
	I0401 18:22:14.627256   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:14.627263   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:14.627266   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:14.634703   27284 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 18:22:14.641337   27284 system_pods.go:59] 17 kube-system pods found
	I0401 18:22:14.641363   27284 system_pods.go:61] "coredns-76f75df574-8v456" [28cf6a1d-90df-4802-ad3c-9c0276380a44] Running
	I0401 18:22:14.641368   27284 system_pods.go:61] "coredns-76f75df574-sqxnb" [17868bbd-b0e9-460c-b191-9707f613af0a] Running
	I0401 18:22:14.641371   27284 system_pods.go:61] "etcd-ha-293078" [0cf5a089-d409-4fa2-85de-fcc012d79ff3] Running
	I0401 18:22:14.641375   27284 system_pods.go:61] "etcd-ha-293078-m02" [8acd3424-a11f-4a40-97cf-b7e8b4a0975f] Running
	I0401 18:22:14.641378   27284 system_pods.go:61] "kindnet-f4djp" [5b26be41-434f-4908-95aa-64da9fe7ecb0] Running
	I0401 18:22:14.641381   27284 system_pods.go:61] "kindnet-rjfcj" [63f6ecc3-4bd0-406b-8096-ffd6115a2de3] Running
	I0401 18:22:14.641384   27284 system_pods.go:61] "kube-apiserver-ha-293078" [a0e08a32-b673-46b9-b965-9d321e4db6f1] Running
	I0401 18:22:14.641387   27284 system_pods.go:61] "kube-apiserver-ha-293078-m02" [533b0e64-f078-44f0-be6f-a8a3d880138a] Running
	I0401 18:22:14.641390   27284 system_pods.go:61] "kube-controller-manager-ha-293078" [3e9c2dbe-f437-4619-9b04-f30d9dab7f61] Running
	I0401 18:22:14.641392   27284 system_pods.go:61] "kube-controller-manager-ha-293078-m02" [e8879a89-4775-488b-9229-e86c2c891b5f] Running
	I0401 18:22:14.641395   27284 system_pods.go:61] "kube-proxy-8s2xk" [4fc029ea-1f23-497b-8fe3-38fc0e0a4c38] Running
	I0401 18:22:14.641398   27284 system_pods.go:61] "kube-proxy-l5q2p" [167db687-ac11-4f57-83c1-048c31a7b2cb] Running
	I0401 18:22:14.641400   27284 system_pods.go:61] "kube-scheduler-ha-293078" [87acbf1d-d53b-47d7-816a-492ba644ad0e] Running
	I0401 18:22:14.641403   27284 system_pods.go:61] "kube-scheduler-ha-293078-m02" [17a9003c-fd9f-48e2-b4b7-1ee6606ef480] Running
	I0401 18:22:14.641406   27284 system_pods.go:61] "kube-vip-ha-293078" [543de9ec-6f50-46b9-b6ec-f58964f81f12] Running
	I0401 18:22:14.641408   27284 system_pods.go:61] "kube-vip-ha-293078-m02" [6714926d-3bce-4773-92d6-e3811f532a37] Running
	I0401 18:22:14.641411   27284 system_pods.go:61] "storage-provisioner" [3d7c42eb-192e-4ae0-b5ae-0883ef5e740c] Running
	I0401 18:22:14.641416   27284 system_pods.go:74] duration metric: took 181.287454ms to wait for pod list to return data ...
	I0401 18:22:14.641425   27284 default_sa.go:34] waiting for default service account to be created ...
	I0401 18:22:14.827644   27284 request.go:629] Waited for 186.160719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/default/serviceaccounts
	I0401 18:22:14.827690   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/default/serviceaccounts
	I0401 18:22:14.827695   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:14.827703   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:14.827706   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:14.831978   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:14.832185   27284 default_sa.go:45] found service account: "default"
	I0401 18:22:14.832208   27284 default_sa.go:55] duration metric: took 190.776754ms for default service account to be created ...
	I0401 18:22:14.832216   27284 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 18:22:15.027643   27284 request.go:629] Waited for 195.364671ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods
	I0401 18:22:15.027690   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods
	I0401 18:22:15.027704   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:15.027734   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:15.027746   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:15.033998   27284 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 18:22:15.039475   27284 system_pods.go:86] 17 kube-system pods found
	I0401 18:22:15.039500   27284 system_pods.go:89] "coredns-76f75df574-8v456" [28cf6a1d-90df-4802-ad3c-9c0276380a44] Running
	I0401 18:22:15.039506   27284 system_pods.go:89] "coredns-76f75df574-sqxnb" [17868bbd-b0e9-460c-b191-9707f613af0a] Running
	I0401 18:22:15.039511   27284 system_pods.go:89] "etcd-ha-293078" [0cf5a089-d409-4fa2-85de-fcc012d79ff3] Running
	I0401 18:22:15.039515   27284 system_pods.go:89] "etcd-ha-293078-m02" [8acd3424-a11f-4a40-97cf-b7e8b4a0975f] Running
	I0401 18:22:15.039519   27284 system_pods.go:89] "kindnet-f4djp" [5b26be41-434f-4908-95aa-64da9fe7ecb0] Running
	I0401 18:22:15.039524   27284 system_pods.go:89] "kindnet-rjfcj" [63f6ecc3-4bd0-406b-8096-ffd6115a2de3] Running
	I0401 18:22:15.039528   27284 system_pods.go:89] "kube-apiserver-ha-293078" [a0e08a32-b673-46b9-b965-9d321e4db6f1] Running
	I0401 18:22:15.039532   27284 system_pods.go:89] "kube-apiserver-ha-293078-m02" [533b0e64-f078-44f0-be6f-a8a3d880138a] Running
	I0401 18:22:15.039536   27284 system_pods.go:89] "kube-controller-manager-ha-293078" [3e9c2dbe-f437-4619-9b04-f30d9dab7f61] Running
	I0401 18:22:15.039540   27284 system_pods.go:89] "kube-controller-manager-ha-293078-m02" [e8879a89-4775-488b-9229-e86c2c891b5f] Running
	I0401 18:22:15.039544   27284 system_pods.go:89] "kube-proxy-8s2xk" [4fc029ea-1f23-497b-8fe3-38fc0e0a4c38] Running
	I0401 18:22:15.039548   27284 system_pods.go:89] "kube-proxy-l5q2p" [167db687-ac11-4f57-83c1-048c31a7b2cb] Running
	I0401 18:22:15.039552   27284 system_pods.go:89] "kube-scheduler-ha-293078" [87acbf1d-d53b-47d7-816a-492ba644ad0e] Running
	I0401 18:22:15.039556   27284 system_pods.go:89] "kube-scheduler-ha-293078-m02" [17a9003c-fd9f-48e2-b4b7-1ee6606ef480] Running
	I0401 18:22:15.039560   27284 system_pods.go:89] "kube-vip-ha-293078" [543de9ec-6f50-46b9-b6ec-f58964f81f12] Running
	I0401 18:22:15.039564   27284 system_pods.go:89] "kube-vip-ha-293078-m02" [6714926d-3bce-4773-92d6-e3811f532a37] Running
	I0401 18:22:15.039567   27284 system_pods.go:89] "storage-provisioner" [3d7c42eb-192e-4ae0-b5ae-0883ef5e740c] Running
	I0401 18:22:15.039573   27284 system_pods.go:126] duration metric: took 207.352029ms to wait for k8s-apps to be running ...
	I0401 18:22:15.039583   27284 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 18:22:15.039624   27284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:22:15.056116   27284 system_svc.go:56] duration metric: took 16.524636ms WaitForService to wait for kubelet
	I0401 18:22:15.056148   27284 kubeadm.go:576] duration metric: took 14.602509719s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 18:22:15.056166   27284 node_conditions.go:102] verifying NodePressure condition ...
	I0401 18:22:15.227566   27284 request.go:629] Waited for 171.325356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes
	I0401 18:22:15.227614   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes
	I0401 18:22:15.227620   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:15.227634   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:15.227638   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:15.231685   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:15.232381   27284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 18:22:15.232404   27284 node_conditions.go:123] node cpu capacity is 2
	I0401 18:22:15.232433   27284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 18:22:15.232439   27284 node_conditions.go:123] node cpu capacity is 2
	I0401 18:22:15.232459   27284 node_conditions.go:105] duration metric: took 176.287569ms to run NodePressure ...
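The NodePressure step lists every node and reads the two capacity figures reported above (ephemeral storage and CPU) from each Node object. Roughly, with client-go (an illustrative sketch, not minikube's node_conditions helper):

package nodeinfo

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists all nodes and prints the capacity fields the
// log above reports for each one.
func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}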
	I0401 18:22:15.232477   27284 start.go:240] waiting for startup goroutines ...
	I0401 18:22:15.232520   27284 start.go:254] writing updated cluster config ...
	I0401 18:22:15.234875   27284 out.go:177] 
	I0401 18:22:15.236604   27284 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:22:15.236698   27284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/config.json ...
	I0401 18:22:15.238641   27284 out.go:177] * Starting "ha-293078-m03" control-plane node in "ha-293078" cluster
	I0401 18:22:15.240260   27284 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 18:22:15.240290   27284 cache.go:56] Caching tarball of preloaded images
	I0401 18:22:15.240404   27284 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 18:22:15.240418   27284 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0401 18:22:15.240543   27284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/config.json ...
	I0401 18:22:15.240778   27284 start.go:360] acquireMachinesLock for ha-293078-m03: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 18:22:15.240832   27284 start.go:364] duration metric: took 30.303µs to acquireMachinesLock for "ha-293078-m03"
	I0401 18:22:15.240860   27284 start.go:93] Provisioning new machine with config: &{Name:ha-293078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 18:22:15.240993   27284 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0401 18:22:15.242691   27284 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0401 18:22:15.242789   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:22:15.242834   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:22:15.257972   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43817
	I0401 18:22:15.258429   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:22:15.258845   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:22:15.258869   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:22:15.259209   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:22:15.259398   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetMachineName
	I0401 18:22:15.259542   27284 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:22:15.259715   27284 start.go:159] libmachine.API.Create for "ha-293078" (driver="kvm2")
	I0401 18:22:15.259750   27284 client.go:168] LocalClient.Create starting
	I0401 18:22:15.259798   27284 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem
	I0401 18:22:15.259839   27284 main.go:141] libmachine: Decoding PEM data...
	I0401 18:22:15.259859   27284 main.go:141] libmachine: Parsing certificate...
	I0401 18:22:15.259921   27284 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem
	I0401 18:22:15.259946   27284 main.go:141] libmachine: Decoding PEM data...
	I0401 18:22:15.259963   27284 main.go:141] libmachine: Parsing certificate...
	I0401 18:22:15.259987   27284 main.go:141] libmachine: Running pre-create checks...
	I0401 18:22:15.259999   27284 main.go:141] libmachine: (ha-293078-m03) Calling .PreCreateCheck
	I0401 18:22:15.260182   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetConfigRaw
	I0401 18:22:15.260627   27284 main.go:141] libmachine: Creating machine...
	I0401 18:22:15.260646   27284 main.go:141] libmachine: (ha-293078-m03) Calling .Create
	I0401 18:22:15.260790   27284 main.go:141] libmachine: (ha-293078-m03) Creating KVM machine...
	I0401 18:22:15.262008   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found existing default KVM network
	I0401 18:22:15.262207   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found existing private KVM network mk-ha-293078
	I0401 18:22:15.262323   27284 main.go:141] libmachine: (ha-293078-m03) Setting up store path in /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03 ...
	I0401 18:22:15.262357   27284 main.go:141] libmachine: (ha-293078-m03) Building disk image from file:///home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso
	I0401 18:22:15.262397   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:15.262312   27927 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:22:15.262503   27284 main.go:141] libmachine: (ha-293078-m03) Downloading /home/jenkins/minikube-integration/18233-10493/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0401 18:22:15.480658   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:15.480550   27927 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa...
	I0401 18:22:15.697387   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:15.697262   27927 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/ha-293078-m03.rawdisk...
	I0401 18:22:15.697420   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Writing magic tar header
	I0401 18:22:15.697434   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Writing SSH key tar header
	I0401 18:22:15.697447   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:15.697381   27927 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03 ...
	I0401 18:22:15.697533   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03
	I0401 18:22:15.697567   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube/machines
	I0401 18:22:15.697581   27284 main.go:141] libmachine: (ha-293078-m03) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03 (perms=drwx------)
	I0401 18:22:15.697597   27284 main.go:141] libmachine: (ha-293078-m03) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube/machines (perms=drwxr-xr-x)
	I0401 18:22:15.697606   27284 main.go:141] libmachine: (ha-293078-m03) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube (perms=drwxr-xr-x)
	I0401 18:22:15.697616   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:22:15.697635   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493
	I0401 18:22:15.697668   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0401 18:22:15.697682   27284 main.go:141] libmachine: (ha-293078-m03) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493 (perms=drwxrwxr-x)
	I0401 18:22:15.697694   27284 main.go:141] libmachine: (ha-293078-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0401 18:22:15.697702   27284 main.go:141] libmachine: (ha-293078-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0401 18:22:15.697713   27284 main.go:141] libmachine: (ha-293078-m03) Creating domain...
	I0401 18:22:15.697723   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Checking permissions on dir: /home/jenkins
	I0401 18:22:15.697731   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Checking permissions on dir: /home
	I0401 18:22:15.697740   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Skipping /home - not owner
	I0401 18:22:15.698636   27284 main.go:141] libmachine: (ha-293078-m03) define libvirt domain using xml: 
	I0401 18:22:15.698654   27284 main.go:141] libmachine: (ha-293078-m03) <domain type='kvm'>
	I0401 18:22:15.698664   27284 main.go:141] libmachine: (ha-293078-m03)   <name>ha-293078-m03</name>
	I0401 18:22:15.698672   27284 main.go:141] libmachine: (ha-293078-m03)   <memory unit='MiB'>2200</memory>
	I0401 18:22:15.698681   27284 main.go:141] libmachine: (ha-293078-m03)   <vcpu>2</vcpu>
	I0401 18:22:15.698695   27284 main.go:141] libmachine: (ha-293078-m03)   <features>
	I0401 18:22:15.698705   27284 main.go:141] libmachine: (ha-293078-m03)     <acpi/>
	I0401 18:22:15.698712   27284 main.go:141] libmachine: (ha-293078-m03)     <apic/>
	I0401 18:22:15.698722   27284 main.go:141] libmachine: (ha-293078-m03)     <pae/>
	I0401 18:22:15.698738   27284 main.go:141] libmachine: (ha-293078-m03)     
	I0401 18:22:15.698750   27284 main.go:141] libmachine: (ha-293078-m03)   </features>
	I0401 18:22:15.698758   27284 main.go:141] libmachine: (ha-293078-m03)   <cpu mode='host-passthrough'>
	I0401 18:22:15.698784   27284 main.go:141] libmachine: (ha-293078-m03)   
	I0401 18:22:15.698806   27284 main.go:141] libmachine: (ha-293078-m03)   </cpu>
	I0401 18:22:15.698829   27284 main.go:141] libmachine: (ha-293078-m03)   <os>
	I0401 18:22:15.698847   27284 main.go:141] libmachine: (ha-293078-m03)     <type>hvm</type>
	I0401 18:22:15.698857   27284 main.go:141] libmachine: (ha-293078-m03)     <boot dev='cdrom'/>
	I0401 18:22:15.698867   27284 main.go:141] libmachine: (ha-293078-m03)     <boot dev='hd'/>
	I0401 18:22:15.698877   27284 main.go:141] libmachine: (ha-293078-m03)     <bootmenu enable='no'/>
	I0401 18:22:15.698891   27284 main.go:141] libmachine: (ha-293078-m03)   </os>
	I0401 18:22:15.698910   27284 main.go:141] libmachine: (ha-293078-m03)   <devices>
	I0401 18:22:15.698924   27284 main.go:141] libmachine: (ha-293078-m03)     <disk type='file' device='cdrom'>
	I0401 18:22:15.698939   27284 main.go:141] libmachine: (ha-293078-m03)       <source file='/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/boot2docker.iso'/>
	I0401 18:22:15.698952   27284 main.go:141] libmachine: (ha-293078-m03)       <target dev='hdc' bus='scsi'/>
	I0401 18:22:15.698965   27284 main.go:141] libmachine: (ha-293078-m03)       <readonly/>
	I0401 18:22:15.698977   27284 main.go:141] libmachine: (ha-293078-m03)     </disk>
	I0401 18:22:15.698988   27284 main.go:141] libmachine: (ha-293078-m03)     <disk type='file' device='disk'>
	I0401 18:22:15.699001   27284 main.go:141] libmachine: (ha-293078-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0401 18:22:15.699020   27284 main.go:141] libmachine: (ha-293078-m03)       <source file='/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/ha-293078-m03.rawdisk'/>
	I0401 18:22:15.699036   27284 main.go:141] libmachine: (ha-293078-m03)       <target dev='hda' bus='virtio'/>
	I0401 18:22:15.699049   27284 main.go:141] libmachine: (ha-293078-m03)     </disk>
	I0401 18:22:15.699061   27284 main.go:141] libmachine: (ha-293078-m03)     <interface type='network'>
	I0401 18:22:15.699075   27284 main.go:141] libmachine: (ha-293078-m03)       <source network='mk-ha-293078'/>
	I0401 18:22:15.699087   27284 main.go:141] libmachine: (ha-293078-m03)       <model type='virtio'/>
	I0401 18:22:15.699114   27284 main.go:141] libmachine: (ha-293078-m03)     </interface>
	I0401 18:22:15.699128   27284 main.go:141] libmachine: (ha-293078-m03)     <interface type='network'>
	I0401 18:22:15.699136   27284 main.go:141] libmachine: (ha-293078-m03)       <source network='default'/>
	I0401 18:22:15.699143   27284 main.go:141] libmachine: (ha-293078-m03)       <model type='virtio'/>
	I0401 18:22:15.699149   27284 main.go:141] libmachine: (ha-293078-m03)     </interface>
	I0401 18:22:15.699156   27284 main.go:141] libmachine: (ha-293078-m03)     <serial type='pty'>
	I0401 18:22:15.699162   27284 main.go:141] libmachine: (ha-293078-m03)       <target port='0'/>
	I0401 18:22:15.699168   27284 main.go:141] libmachine: (ha-293078-m03)     </serial>
	I0401 18:22:15.699174   27284 main.go:141] libmachine: (ha-293078-m03)     <console type='pty'>
	I0401 18:22:15.699179   27284 main.go:141] libmachine: (ha-293078-m03)       <target type='serial' port='0'/>
	I0401 18:22:15.699184   27284 main.go:141] libmachine: (ha-293078-m03)     </console>
	I0401 18:22:15.699190   27284 main.go:141] libmachine: (ha-293078-m03)     <rng model='virtio'>
	I0401 18:22:15.699197   27284 main.go:141] libmachine: (ha-293078-m03)       <backend model='random'>/dev/random</backend>
	I0401 18:22:15.699207   27284 main.go:141] libmachine: (ha-293078-m03)     </rng>
	I0401 18:22:15.699220   27284 main.go:141] libmachine: (ha-293078-m03)     
	I0401 18:22:15.699229   27284 main.go:141] libmachine: (ha-293078-m03)     
	I0401 18:22:15.699237   27284 main.go:141] libmachine: (ha-293078-m03)   </devices>
	I0401 18:22:15.699249   27284 main.go:141] libmachine: (ha-293078-m03) </domain>
	I0401 18:22:15.699284   27284 main.go:141] libmachine: (ha-293078-m03) 
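The XML above is the libvirt domain definition the kvm2 driver generates for the new node before it defines and starts the VM. Defining and starting such a domain through the Go libvirt bindings looks roughly like the sketch below (it assumes the github.com/libvirt/libvirt-go bindings; the XML file name is a placeholder, and minikube's driver builds the XML in memory rather than reading it from disk):

package main

import (
	"fmt"
	"os"

	libvirt "github.com/libvirt/libvirt-go"
)

func main() {
	// Connect to the system libvirt daemon (the KVMQemuURI in the config above).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// domainXML would hold the <domain type='kvm'>...</domain> document from the log.
	domainXML, err := os.ReadFile("ha-293078-m03.xml")
	if err != nil {
		panic(err)
	}

	// Define the persistent domain, then start it ("Creating domain..." in the log).
	dom, err := conn.DomainDefineXML(string(domainXML))
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain defined and started")
}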
	I0401 18:22:15.706407   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:f3:b3:14 in network default
	I0401 18:22:15.706938   27284 main.go:141] libmachine: (ha-293078-m03) Ensuring networks are active...
	I0401 18:22:15.706962   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:15.707640   27284 main.go:141] libmachine: (ha-293078-m03) Ensuring network default is active
	I0401 18:22:15.707975   27284 main.go:141] libmachine: (ha-293078-m03) Ensuring network mk-ha-293078 is active
	I0401 18:22:15.708235   27284 main.go:141] libmachine: (ha-293078-m03) Getting domain xml...
	I0401 18:22:15.708926   27284 main.go:141] libmachine: (ha-293078-m03) Creating domain...
	I0401 18:22:16.934106   27284 main.go:141] libmachine: (ha-293078-m03) Waiting to get IP...
	I0401 18:22:16.934793   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:16.935189   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:16.935223   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:16.935181   27927 retry.go:31] will retry after 274.998784ms: waiting for machine to come up
	I0401 18:22:17.211745   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:17.212222   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:17.212247   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:17.212194   27927 retry.go:31] will retry after 343.27575ms: waiting for machine to come up
	I0401 18:22:17.556896   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:17.557376   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:17.557407   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:17.557329   27927 retry.go:31] will retry after 324.461798ms: waiting for machine to come up
	I0401 18:22:17.883686   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:17.884228   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:17.884252   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:17.884197   27927 retry.go:31] will retry after 570.272916ms: waiting for machine to come up
	I0401 18:22:18.455961   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:18.456493   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:18.456519   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:18.456448   27927 retry.go:31] will retry after 574.872908ms: waiting for machine to come up
	I0401 18:22:19.033116   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:19.033611   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:19.033660   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:19.033584   27927 retry.go:31] will retry after 712.864102ms: waiting for machine to come up
	I0401 18:22:19.747796   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:19.748252   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:19.748284   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:19.748204   27927 retry.go:31] will retry after 802.917773ms: waiting for machine to come up
	I0401 18:22:20.552842   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:20.553261   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:20.553304   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:20.553230   27927 retry.go:31] will retry after 1.335699542s: waiting for machine to come up
	I0401 18:22:21.889998   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:21.890536   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:21.890560   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:21.890491   27927 retry.go:31] will retry after 1.340623586s: waiting for machine to come up
	I0401 18:22:23.232366   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:23.232762   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:23.232784   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:23.232718   27927 retry.go:31] will retry after 1.518373355s: waiting for machine to come up
	I0401 18:22:24.753484   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:24.754025   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:24.754078   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:24.753975   27927 retry.go:31] will retry after 2.792717607s: waiting for machine to come up
	I0401 18:22:27.548044   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:27.548363   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:27.548389   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:27.548324   27927 retry.go:31] will retry after 3.534393293s: waiting for machine to come up
	I0401 18:22:31.084675   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:31.085143   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:31.085168   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:31.085102   27927 retry.go:31] will retry after 3.093541151s: waiting for machine to come up
	I0401 18:22:34.181384   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:34.181872   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:34.181901   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:34.181831   27927 retry.go:31] will retry after 4.953837373s: waiting for machine to come up
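Each "will retry after ..." line above is libmachine polling libvirt's DHCP leases for the new domain's MAC address, sleeping a progressively longer, jittered interval between attempts until the lease shows up a few lines further down. A minimal Go sketch of that wait-for-IP loop; lookupLease, the backoff growth and the timeout are illustrative assumptions, not minikube's actual implementation:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupLease stands in for querying libvirt's DHCP leases for a MAC
	// address; here it always fails so the retry loop below is exercised.
	func lookupLease(mac string) (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	// waitForIP retries lookupLease with a growing, jittered delay until the
	// domain has an address or the overall timeout elapses.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		base := 250 * time.Millisecond
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if ip, err := lookupLease(mac); err == nil {
				return ip, nil
			}
			// up to 50% jitter so concurrent creations do not poll in lockstep
			delay := base + time.Duration(rand.Int63n(int64(base/2)+1))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			base = base * 3 / 2
		}
		return "", fmt.Errorf("no DHCP lease for %s within %v", mac, timeout)
	}

	func main() {
		if _, err := waitForIP("52:54:00:48:33:4d", 2*time.Second); err != nil {
			fmt.Println(err)
		}
	}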
	I0401 18:22:39.138773   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:39.139329   27284 main.go:141] libmachine: (ha-293078-m03) Found IP for machine: 192.168.39.210
	I0401 18:22:39.139349   27284 main.go:141] libmachine: (ha-293078-m03) Reserving static IP address...
	I0401 18:22:39.139359   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has current primary IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:39.139669   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find host DHCP lease matching {name: "ha-293078-m03", mac: "52:54:00:48:33:4d", ip: "192.168.39.210"} in network mk-ha-293078
	I0401 18:22:39.210594   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Getting to WaitForSSH function...
	I0401 18:22:39.210623   27284 main.go:141] libmachine: (ha-293078-m03) Reserved static IP address: 192.168.39.210
	I0401 18:22:39.210641   27284 main.go:141] libmachine: (ha-293078-m03) Waiting for SSH to be available...
	I0401 18:22:39.213525   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:39.213907   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078
	I0401 18:22:39.213934   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find defined IP address of network mk-ha-293078 interface with MAC address 52:54:00:48:33:4d
	I0401 18:22:39.214010   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Using SSH client type: external
	I0401 18:22:39.214033   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa (-rw-------)
	I0401 18:22:39.214063   27284 main.go:141] libmachine: (ha-293078-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 18:22:39.214088   27284 main.go:141] libmachine: (ha-293078-m03) DBG | About to run SSH command:
	I0401 18:22:39.214125   27284 main.go:141] libmachine: (ha-293078-m03) DBG | exit 0
	I0401 18:22:39.217897   27284 main.go:141] libmachine: (ha-293078-m03) DBG | SSH cmd err, output: exit status 255: 
	I0401 18:22:39.217913   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0401 18:22:39.217923   27284 main.go:141] libmachine: (ha-293078-m03) DBG | command : exit 0
	I0401 18:22:39.217931   27284 main.go:141] libmachine: (ha-293078-m03) DBG | err     : exit status 255
	I0401 18:22:39.217942   27284 main.go:141] libmachine: (ha-293078-m03) DBG | output  : 
	I0401 18:22:42.218406   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Getting to WaitForSSH function...
	I0401 18:22:42.220893   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.221309   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:42.221341   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.221481   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Using SSH client type: external
	I0401 18:22:42.221500   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa (-rw-------)
	I0401 18:22:42.221518   27284 main.go:141] libmachine: (ha-293078-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 18:22:42.221528   27284 main.go:141] libmachine: (ha-293078-m03) DBG | About to run SSH command:
	I0401 18:22:42.221540   27284 main.go:141] libmachine: (ha-293078-m03) DBG | exit 0
	I0401 18:22:42.350186   27284 main.go:141] libmachine: (ha-293078-m03) DBG | SSH cmd err, output: <nil>: 
	I0401 18:22:42.350514   27284 main.go:141] libmachine: (ha-293078-m03) KVM machine creation complete!
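The WaitForSSH exchange above is libmachine's "external" SSH client: it shells out to /usr/bin/ssh with host-key checking disabled and runs "exit 0" on the guest until that succeeds (the first attempt at 18:22:39 still failed with status 255 because sshd was not up yet). A rough Go equivalent of one readiness probe, reusing the address and key path from the log; this is a sketch of the pattern, not libmachine's code:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady runs "exit 0" on the guest through the system ssh binary with
	// the same non-interactive options seen in the log; nil means sshd is up.
	func sshReady(user, addr, keyPath string) error {
		cmd := exec.Command("/usr/bin/ssh",
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			fmt.Sprintf("%s@%s", user, addr),
			"exit 0")
		return cmd.Run()
	}

	func main() {
		key := "/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa"
		for {
			if err := sshReady("docker", "192.168.39.210", key); err == nil {
				fmt.Println("SSH is available")
				return
			}
			fmt.Println("SSH not ready yet, retrying")
			time.Sleep(3 * time.Second) // the log retries after roughly 3s
		}
	}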
	I0401 18:22:42.350907   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetConfigRaw
	I0401 18:22:42.351491   27284 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:22:42.351695   27284 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:22:42.351877   27284 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0401 18:22:42.351893   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetState
	I0401 18:22:42.353328   27284 main.go:141] libmachine: Detecting operating system of created instance...
	I0401 18:22:42.353344   27284 main.go:141] libmachine: Waiting for SSH to be available...
	I0401 18:22:42.353353   27284 main.go:141] libmachine: Getting to WaitForSSH function...
	I0401 18:22:42.353361   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:22:42.355867   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.356217   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:42.356244   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.356387   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:22:42.356589   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:42.356761   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:42.356906   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:22:42.357068   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:22:42.357354   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0401 18:22:42.357370   27284 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0401 18:22:42.469324   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 18:22:42.469346   27284 main.go:141] libmachine: Detecting the provisioner...
	I0401 18:22:42.469356   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:22:42.472136   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.472608   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:42.472640   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.472927   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:22:42.473124   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:42.473349   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:42.473510   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:22:42.473769   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:22:42.473989   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0401 18:22:42.474007   27284 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0401 18:22:42.587119   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0401 18:22:42.587197   27284 main.go:141] libmachine: found compatible host: buildroot
	I0401 18:22:42.587208   27284 main.go:141] libmachine: Provisioning with buildroot...
	I0401 18:22:42.587215   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetMachineName
	I0401 18:22:42.587512   27284 buildroot.go:166] provisioning hostname "ha-293078-m03"
	I0401 18:22:42.587540   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetMachineName
	I0401 18:22:42.587740   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:22:42.590585   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.590866   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:42.590909   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.591022   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:22:42.591263   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:42.591423   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:42.591528   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:22:42.591685   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:22:42.591832   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0401 18:22:42.591844   27284 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-293078-m03 && echo "ha-293078-m03" | sudo tee /etc/hostname
	I0401 18:22:42.722982   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-293078-m03
	
	I0401 18:22:42.723057   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:22:42.726506   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.726906   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:42.726929   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.727143   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:22:42.727315   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:42.727475   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:42.727608   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:22:42.727796   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:22:42.728012   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0401 18:22:42.728036   27284 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-293078-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-293078-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-293078-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 18:22:42.853808   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
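The hostname step above sets the transient and persistent hostname over SSH and then makes sure /etc/hosts can resolve it, rewriting an existing 127.0.1.1 entry if one is present and appending one otherwise. A small Go sketch that only assembles that same pair of remote commands; how they are shipped over SSH (the ssh_runner seen elsewhere in the log) is omitted:

	package main

	import "fmt"

	// hostnameCommands returns the two scripts the provisioner runs on the
	// guest: one writes /etc/hostname, one patches /etc/hosts for the name.
	func hostnameCommands(name string) (setHostname, patchHosts string) {
		setHostname = fmt.Sprintf(
			"sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
		patchHosts = fmt.Sprintf(`
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
		else
			echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
		fi
	fi`, name)
		return setHostname, patchHosts
	}

	func main() {
		set, patch := hostnameCommands("ha-293078-m03")
		fmt.Println(set)
		fmt.Println(patch)
	}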
	I0401 18:22:42.853838   27284 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 18:22:42.853857   27284 buildroot.go:174] setting up certificates
	I0401 18:22:42.853869   27284 provision.go:84] configureAuth start
	I0401 18:22:42.853881   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetMachineName
	I0401 18:22:42.854202   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetIP
	I0401 18:22:42.856795   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.857151   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:42.857177   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.857343   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:22:42.859327   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.859700   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:42.859727   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.859864   27284 provision.go:143] copyHostCerts
	I0401 18:22:42.859892   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 18:22:42.859929   27284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 18:22:42.859942   27284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 18:22:42.860016   27284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 18:22:42.860104   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 18:22:42.860132   27284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 18:22:42.860142   27284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 18:22:42.860180   27284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 18:22:42.860275   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 18:22:42.860299   27284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 18:22:42.860318   27284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 18:22:42.860377   27284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 18:22:42.860445   27284 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.ha-293078-m03 san=[127.0.0.1 192.168.39.210 ha-293078-m03 localhost minikube]
	I0401 18:22:43.069193   27284 provision.go:177] copyRemoteCerts
	I0401 18:22:43.069245   27284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 18:22:43.069265   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:22:43.072120   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.072524   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:43.072558   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.072758   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:22:43.072958   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:43.073150   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:22:43.073348   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa Username:docker}
	I0401 18:22:43.160885   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0401 18:22:43.161038   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 18:22:43.189775   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0401 18:22:43.189846   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 18:22:43.217958   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0401 18:22:43.218044   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0401 18:22:43.246492   27284 provision.go:87] duration metric: took 392.611337ms to configureAuth
	I0401 18:22:43.246516   27284 buildroot.go:189] setting minikube options for container-runtime
	I0401 18:22:43.246728   27284 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:22:43.246805   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:22:43.250048   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.250413   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:43.250436   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.250629   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:22:43.250848   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:43.251032   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:43.251197   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:22:43.251358   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:22:43.251558   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0401 18:22:43.251577   27284 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 18:22:43.559740   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 18:22:43.559778   27284 main.go:141] libmachine: Checking connection to Docker...
	I0401 18:22:43.559790   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetURL
	I0401 18:22:43.561050   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Using libvirt version 6000000
	I0401 18:22:43.563234   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.563588   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:43.563618   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.563855   27284 main.go:141] libmachine: Docker is up and running!
	I0401 18:22:43.563875   27284 main.go:141] libmachine: Reticulating splines...
	I0401 18:22:43.563886   27284 client.go:171] duration metric: took 28.304121653s to LocalClient.Create
	I0401 18:22:43.563928   27284 start.go:167] duration metric: took 28.304201294s to libmachine.API.Create "ha-293078"
	I0401 18:22:43.563942   27284 start.go:293] postStartSetup for "ha-293078-m03" (driver="kvm2")
	I0401 18:22:43.563957   27284 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 18:22:43.563978   27284 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:22:43.564208   27284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 18:22:43.564231   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:22:43.566382   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.566669   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:43.566696   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.566840   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:22:43.567044   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:43.567216   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:22:43.567360   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa Username:docker}
	I0401 18:22:43.658485   27284 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 18:22:43.663610   27284 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 18:22:43.663634   27284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 18:22:43.663699   27284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 18:22:43.663813   27284 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 18:22:43.663826   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> /etc/ssl/certs/177512.pem
	I0401 18:22:43.663946   27284 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 18:22:43.674306   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 18:22:43.707812   27284 start.go:296] duration metric: took 143.85525ms for postStartSetup
	I0401 18:22:43.707865   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetConfigRaw
	I0401 18:22:43.708531   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetIP
	I0401 18:22:43.711192   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.711524   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:43.711553   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.711756   27284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/config.json ...
	I0401 18:22:43.711950   27284 start.go:128] duration metric: took 28.470946976s to createHost
	I0401 18:22:43.711978   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:22:43.714466   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.714826   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:43.714854   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.715058   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:22:43.715263   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:43.715460   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:43.715657   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:22:43.715928   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:22:43.716121   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0401 18:22:43.716137   27284 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 18:22:43.831268   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711995763.818865754
	
	I0401 18:22:43.831295   27284 fix.go:216] guest clock: 1711995763.818865754
	I0401 18:22:43.831304   27284 fix.go:229] Guest: 2024-04-01 18:22:43.818865754 +0000 UTC Remote: 2024-04-01 18:22:43.711963464 +0000 UTC m=+155.586958148 (delta=106.90229ms)
	I0401 18:22:43.831323   27284 fix.go:200] guest clock delta is within tolerance: 106.90229ms
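The clock check above runs date +%s.%N on the guest (the %!s(MISSING).%!N(MISSING) in the log is just how that literal format string gets echoed) and compares the result with the host's timestamp for the same moment; the ~107ms delta is inside tolerance, so the guest clock is left untouched. A small Go sketch of that comparison using the two timestamps from the log; the 2-second tolerance is an assumed value, not necessarily minikube's:

	package main

	import (
		"fmt"
		"time"
	)

	// clockDelta returns the absolute difference between the guest's clock
	// (seconds.nanoseconds from `date +%s.%N`) and the host's reference time.
	func clockDelta(guest, host time.Time) time.Duration {
		d := host.Sub(guest)
		if d < 0 {
			d = -d
		}
		return d
	}

	func main() {
		guest := time.Unix(1711995763, 818865754)                      // guest clock from the log
		host := time.Date(2024, 4, 1, 18, 22, 43, 711963464, time.UTC) // host-side "Remote" timestamp
		delta := clockDelta(guest, host)
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= 2*time.Second)
	}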
	I0401 18:22:43.831331   27284 start.go:83] releasing machines lock for "ha-293078-m03", held for 28.590480335s
	I0401 18:22:43.831356   27284 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:22:43.831656   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetIP
	I0401 18:22:43.834240   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.834620   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:43.834650   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.837074   27284 out.go:177] * Found network options:
	I0401 18:22:43.838633   27284 out.go:177]   - NO_PROXY=192.168.39.74,192.168.39.161
	W0401 18:22:43.840028   27284 proxy.go:119] fail to check proxy env: Error ip not in block
	W0401 18:22:43.840051   27284 proxy.go:119] fail to check proxy env: Error ip not in block
	I0401 18:22:43.840067   27284 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:22:43.840552   27284 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:22:43.840682   27284 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:22:43.840802   27284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 18:22:43.840846   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	W0401 18:22:43.840962   27284 proxy.go:119] fail to check proxy env: Error ip not in block
	W0401 18:22:43.840990   27284 proxy.go:119] fail to check proxy env: Error ip not in block
	I0401 18:22:43.841048   27284 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 18:22:43.841070   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:22:43.843535   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.843912   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.843951   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:43.843973   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.844132   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:22:43.844301   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:43.844371   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:43.844396   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.844458   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:22:43.844612   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:22:43.844630   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa Username:docker}
	I0401 18:22:43.844804   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:43.844948   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:22:43.845092   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa Username:docker}
	I0401 18:22:44.088753   27284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 18:22:44.096862   27284 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 18:22:44.096933   27284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 18:22:44.116332   27284 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 18:22:44.116354   27284 start.go:494] detecting cgroup driver to use...
	I0401 18:22:44.116426   27284 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 18:22:44.134504   27284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 18:22:44.150718   27284 docker.go:217] disabling cri-docker service (if available) ...
	I0401 18:22:44.150777   27284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 18:22:44.166834   27284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 18:22:44.182147   27284 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 18:22:44.301129   27284 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 18:22:44.463554   27284 docker.go:233] disabling docker service ...
	I0401 18:22:44.463608   27284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 18:22:44.479887   27284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 18:22:44.495528   27284 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 18:22:44.621231   27284 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 18:22:44.756683   27284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 18:22:44.773052   27284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 18:22:44.795770   27284 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 18:22:44.795842   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:22:44.808660   27284 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 18:22:44.808719   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:22:44.820537   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:22:44.832408   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:22:44.844498   27284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 18:22:44.858051   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:22:44.871522   27284 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:22:44.893438   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:22:44.906913   27284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 18:22:44.916966   27284 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 18:22:44.917022   27284 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 18:22:44.931059   27284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 18:22:44.943888   27284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:22:45.065749   27284 ssh_runner.go:195] Run: sudo systemctl restart crio
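The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place on the guest: it pins the pause image, switches cgroup_manager to cgroupfs, forces conmon_cgroup to "pod" and opens net.ipv4.ip_unprivileged_port_start, before reloading systemd and restarting crio. A Go sketch that just collects the core of those remote commands in order (the strings mirror the log; nothing here executes them):

	package main

	import "fmt"

	// crioConfigCommands lists the in-place edits applied to CRI-O's drop-in
	// config before the runtime is restarted, in the order the log shows.
	func crioConfigCommands(pauseImage, cgroupManager string) []string {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		return []string{
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
			fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
			fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
			fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
			"sudo systemctl daemon-reload",
			"sudo systemctl restart crio",
		}
	}

	func main() {
		for _, cmd := range crioConfigCommands("registry.k8s.io/pause:3.9", "cgroupfs") {
			fmt.Println(cmd)
		}
	}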
	I0401 18:22:45.216685   27284 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 18:22:45.216747   27284 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 18:22:45.222543   27284 start.go:562] Will wait 60s for crictl version
	I0401 18:22:45.222606   27284 ssh_runner.go:195] Run: which crictl
	I0401 18:22:45.226850   27284 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 18:22:45.275028   27284 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 18:22:45.275103   27284 ssh_runner.go:195] Run: crio --version
	I0401 18:22:45.306557   27284 ssh_runner.go:195] Run: crio --version
	I0401 18:22:45.345397   27284 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 18:22:45.346780   27284 out.go:177]   - env NO_PROXY=192.168.39.74
	I0401 18:22:45.348069   27284 out.go:177]   - env NO_PROXY=192.168.39.74,192.168.39.161
	I0401 18:22:45.349221   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetIP
	I0401 18:22:45.352039   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:45.352397   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:45.352420   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:45.352637   27284 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 18:22:45.357452   27284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 18:22:45.374262   27284 mustload.go:65] Loading cluster: ha-293078
	I0401 18:22:45.374525   27284 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:22:45.374841   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:22:45.374880   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:22:45.390376   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34985
	I0401 18:22:45.390855   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:22:45.391310   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:22:45.391339   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:22:45.391689   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:22:45.391880   27284 main.go:141] libmachine: (ha-293078) Calling .GetState
	I0401 18:22:45.393476   27284 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:22:45.393795   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:22:45.393835   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:22:45.409540   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42477
	I0401 18:22:45.410102   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:22:45.410576   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:22:45.410598   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:22:45.410902   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:22:45.411103   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:22:45.411351   27284 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078 for IP: 192.168.39.210
	I0401 18:22:45.411363   27284 certs.go:194] generating shared ca certs ...
	I0401 18:22:45.411378   27284 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:22:45.411516   27284 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 18:22:45.411585   27284 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 18:22:45.411601   27284 certs.go:256] generating profile certs ...
	I0401 18:22:45.411689   27284 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.key
	I0401 18:22:45.411722   27284 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.b60f2778
	I0401 18:22:45.411741   27284 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.b60f2778 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.74 192.168.39.161 192.168.39.210 192.168.39.254]
	I0401 18:22:45.477539   27284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.b60f2778 ...
	I0401 18:22:45.477567   27284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.b60f2778: {Name:mk94d9c7e7188961a9f9c22990b934c3aa1a24dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:22:45.477762   27284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.b60f2778 ...
	I0401 18:22:45.477777   27284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.b60f2778: {Name:mk111f6467b10a108cf38d970880495e36f6720e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:22:45.477856   27284 certs.go:381] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.b60f2778 -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt
	I0401 18:22:45.477978   27284 certs.go:385] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.b60f2778 -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key
	I0401 18:22:45.478092   27284 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key
	I0401 18:22:45.478108   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0401 18:22:45.478126   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0401 18:22:45.478139   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0401 18:22:45.478152   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0401 18:22:45.478165   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0401 18:22:45.478180   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0401 18:22:45.478202   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0401 18:22:45.478216   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
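The profile-cert step above mints a fresh apiserver certificate whose IP SANs cover the service IP 10.96.0.1, localhost, all three control-plane node addresses and the virtual endpoint 192.168.39.254, then swaps it into place under the profile directory. A compact Go sketch of issuing a server certificate with that IP SAN list from a CA using crypto/x509; the throwaway CA, key size, validity and subject are simplified assumptions for illustration:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// newServerCert issues a server certificate signed by ca whose subject
	// alternative names are exactly the IPs listed in the log above.
	func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,
		}
		return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	}

	func main() {
		// Throwaway self-signed CA so the example is self-contained.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		ca, _ := x509.ParseCertificate(caDER)

		ips := []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.74"), net.ParseIP("192.168.39.161"),
			net.ParseIP("192.168.39.210"), net.ParseIP("192.168.39.254"),
		}
		der, err := newServerCert(ca, caKey, ips)
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Printf("issued apiserver cert: %d bytes DER, %d IP SANs\n", len(der), len(ips))
	}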
	I0401 18:22:45.478272   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 18:22:45.478311   27284 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 18:22:45.478328   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 18:22:45.478361   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 18:22:45.478392   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 18:22:45.478426   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 18:22:45.478477   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 18:22:45.478514   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem -> /usr/share/ca-certificates/17751.pem
	I0401 18:22:45.478534   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> /usr/share/ca-certificates/177512.pem
	I0401 18:22:45.478553   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:22:45.478592   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:22:45.481548   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:22:45.482060   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:22:45.482091   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:22:45.482329   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:22:45.482502   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:22:45.482724   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:22:45.482872   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:22:45.566043   27284 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0401 18:22:45.572928   27284 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0401 18:22:45.587934   27284 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0401 18:22:45.593056   27284 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0401 18:22:45.606420   27284 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0401 18:22:45.611489   27284 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0401 18:22:45.623901   27284 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0401 18:22:45.629467   27284 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0401 18:22:45.644835   27284 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0401 18:22:45.650094   27284 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0401 18:22:45.664304   27284 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0401 18:22:45.669609   27284 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0401 18:22:45.686761   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 18:22:45.721908   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 18:22:45.751085   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 18:22:45.780434   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 18:22:45.809788   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0401 18:22:45.841183   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 18:22:45.870562   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 18:22:45.898873   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 18:22:45.925581   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 18:22:45.954742   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 18:22:45.982628   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 18:22:46.009204   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0401 18:22:46.028338   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0401 18:22:46.047940   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0401 18:22:46.067139   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0401 18:22:46.087039   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0401 18:22:46.107123   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0401 18:22:46.127244   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0401 18:22:46.147755   27284 ssh_runner.go:195] Run: openssl version
	I0401 18:22:46.154170   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 18:22:46.167851   27284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 18:22:46.172914   27284 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 18:22:46.172960   27284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 18:22:46.179198   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 18:22:46.192958   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 18:22:46.205944   27284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:22:46.210934   27284 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:22:46.210976   27284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:22:46.217079   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 18:22:46.230500   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 18:22:46.245390   27284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 18:22:46.250183   27284 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 18:22:46.250231   27284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 18:22:46.257033   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
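The lines above copy the profile and CA material into /var/lib/minikube/certs and /usr/share/ca-certificates, then hash each CA with openssl and link it into /etc/ssl/certs under its subject-hash name (3ec20f2e.0, b5213941.0, 51391683.0). As a quick sanity check, separate from anything minikube itself runs, a small Go program using only the standard library can print the subject and expiry of any of the transferred PEM files; the path below is illustrative.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path is illustrative; point it at any of the certs copied above.
	data, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data) // first PEM block in the file
	if block == nil {
		log.Fatal("no PEM data found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
}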
	I0401 18:22:46.270296   27284 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 18:22:46.275209   27284 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 18:22:46.275263   27284 kubeadm.go:928] updating node {m03 192.168.39.210 8443 v1.29.3 crio true true} ...
	I0401 18:22:46.275333   27284 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-293078-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
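The [Service] drop-in above overrides ExecStart with per-node flags; --hostname-override and --node-ip are the values that differ for each member of the HA cluster (here m03 at 192.168.39.210). A minimal sketch of how such a drop-in could be rendered with Go's text/template follows; the type and field names are made up for illustration and are not minikube's own.

package main

import (
	"os"
	"text/template"
)

// Node holds the per-node values substituted into the drop-in; illustrative only.
type Node struct {
	Name, IP, KubeletPath string
}

const dropIn = `[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Name}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values match the m03 node seen in the log above.
	n := Node{
		Name:        "ha-293078-m03",
		IP:          "192.168.39.210",
		KubeletPath: "/var/lib/minikube/binaries/v1.29.3/kubelet",
	}
	if err := t.Execute(os.Stdout, n); err != nil {
		panic(err)
	}
}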
	I0401 18:22:46.275355   27284 kube-vip.go:111] generating kube-vip config ...
	I0401 18:22:46.275392   27284 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0401 18:22:46.295926   27284 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0401 18:22:46.295991   27284 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
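The YAML above is later copied to /etc/kubernetes/manifests/kube-vip.yaml (see the scp a few lines below), so kubelet runs kube-vip as a static pod on each control-plane node and the 192.168.39.254 VIP is advertised from whichever node holds the plndr-cp-lock lease. The manifest is an ordinary Pod spec, so it can be decoded with the standard Kubernetes types; the sketch below assumes the k8s.io/api and sigs.k8s.io/yaml modules and is only a hedged illustration of reading the generated file back.

package main

import (
	"fmt"
	"log"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Path is where the log copies the manifest to on the node.
	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		log.Fatal(err)
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		log.Fatal(err)
	}
	if len(pod.Spec.Containers) == 0 {
		log.Fatal("manifest has no containers")
	}
	// Print the kube-vip env settings (address, lb_enable, port, ...).
	for _, e := range pod.Spec.Containers[0].Env {
		fmt.Printf("%s=%s\n", e.Name, e.Value)
	}
}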
	I0401 18:22:46.296045   27284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 18:22:46.316543   27284 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0401 18:22:46.316630   27284 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0401 18:22:46.329508   27284 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0401 18:22:46.329541   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0401 18:22:46.329607   27284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0401 18:22:46.329508   27284 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0401 18:22:46.329513   27284 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0401 18:22:46.329657   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0401 18:22:46.329699   27284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:22:46.329773   27284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0401 18:22:46.348542   27284 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0401 18:22:46.348583   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0401 18:22:46.348597   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0401 18:22:46.348633   27284 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0401 18:22:46.348658   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0401 18:22:46.348703   27284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0401 18:22:46.387206   27284 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0401 18:22:46.387253   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
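The binaries.go lines above fall back to downloading kubeadm, kubectl, and kubelet from dl.k8s.io, with a ?checksum=file:...sha256 query that tells the downloader to verify each file against the published SHA-256. A hedged sketch of that verification using only the Go standard library (the URL and version are taken from the log):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

// fetch downloads url and returns the response body.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet"
	bin, err := fetch(base)
	if err != nil {
		log.Fatal(err)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		log.Fatal(err)
	}
	want := strings.Fields(string(sumFile))[0] // the .sha256 file holds the hex digest
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		log.Fatal("checksum mismatch")
	}
	fmt.Println("kubelet checksum OK,", len(bin), "bytes")
}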
	I0401 18:22:47.461486   27284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0401 18:22:47.472555   27284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0401 18:22:47.493624   27284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 18:22:47.514093   27284 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0401 18:22:47.533490   27284 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0401 18:22:47.538325   27284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 18:22:47.553275   27284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:22:47.678379   27284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 18:22:47.700985   27284 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:22:47.701374   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:22:47.701417   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:22:47.717953   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39563
	I0401 18:22:47.718395   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:22:47.718861   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:22:47.718887   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:22:47.719230   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:22:47.719427   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:22:47.719576   27284 start.go:316] joinCluster: &{Name:ha-293078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:22:47.719684   27284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0401 18:22:47.719705   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:22:47.722784   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:22:47.723256   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:22:47.723283   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:22:47.723430   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:22:47.723592   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:22:47.723772   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:22:47.723904   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:22:47.897889   27284 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 18:22:47.897928   27284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zlz8yw.nr6jjfmjltmu3ae7 --discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-293078-m03 --control-plane --apiserver-advertise-address=192.168.39.210 --apiserver-bind-port=8443"
	I0401 18:23:14.465535   27284 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zlz8yw.nr6jjfmjltmu3ae7 --discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-293078-m03 --control-plane --apiserver-advertise-address=192.168.39.210 --apiserver-bind-port=8443": (26.567580255s)
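The join command above authenticates the new control-plane node with a bootstrap token plus --discovery-token-ca-cert-hash, which kubeadm defines as the SHA-256 of the cluster CA certificate's Subject Public Key Info. A hedged sketch of recomputing that hash from ca.crt with the Go standard library (the path is illustrative, taken from where the cert is copied earlier in this log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM data in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}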
	I0401 18:23:14.465575   27284 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0401 18:23:15.164461   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-293078-m03 minikube.k8s.io/updated_at=2024_04_01T18_23_15_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=ha-293078 minikube.k8s.io/primary=false
	I0401 18:23:15.331409   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-293078-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0401 18:23:15.534561   27284 start.go:318] duration metric: took 27.814978339s to joinCluster
	I0401 18:23:15.534645   27284 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 18:23:15.536312   27284 out.go:177] * Verifying Kubernetes components...
	I0401 18:23:15.535095   27284 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:23:15.537738   27284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:23:15.846349   27284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 18:23:15.888554   27284 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 18:23:15.888927   27284 kapi.go:59] client config for ha-293078: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.crt", KeyFile:"/home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.key", CAFile:"/home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5ca00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0401 18:23:15.889004   27284 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.74:8443
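The kapi.go dump above builds a rest.Config from the profile's client cert/key and CA, with QPS and Burst left at 0, so client-go falls back to its built-in default limit (historically 5 QPS with a burst of 10); that default is why the "Waited for ... due to client-side throttling" messages appear further down once several GETs are issued back to back. A hedged client-go sketch of the same kind of config with explicit rate limits (the values here are illustrative, not what minikube sets):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		// Direct node endpoint, as in the stale-host override above.
		Host: "https://192.168.39.74:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.key",
			CAFile:   "/home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt",
		},
		QPS:   50,  // illustrative; raising these avoids the client-side throttling waits
		Burst: 100,
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-293078-m03", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(node.Name, node.Status.NodeInfo.KubeletVersion)
}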
	I0401 18:23:15.889291   27284 node_ready.go:35] waiting up to 6m0s for node "ha-293078-m03" to be "Ready" ...
	I0401 18:23:15.889384   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:15.889396   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:15.889406   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:15.889412   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:15.893245   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:16.389548   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:16.389570   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:16.389580   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:16.389585   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:16.393292   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:16.889673   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:16.889709   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:16.889722   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:16.889729   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:16.893549   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:17.389820   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:17.389845   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:17.389857   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:17.389865   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:17.394046   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:17.890405   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:17.890431   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:17.890442   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:17.890448   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:17.893911   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:17.894753   27284 node_ready.go:53] node "ha-293078-m03" has status "Ready":"False"
	I0401 18:23:18.389954   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:18.389979   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:18.389987   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:18.389992   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:18.392851   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:23:18.890430   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:18.890452   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:18.890463   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:18.890473   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:18.901311   27284 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0401 18:23:18.902247   27284 node_ready.go:49] node "ha-293078-m03" has status "Ready":"True"
	I0401 18:23:18.902271   27284 node_ready.go:38] duration metric: took 3.012956296s for node "ha-293078-m03" to be "Ready" ...
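The node_ready loop above simply re-issues GET /api/v1/nodes/ha-293078-m03 roughly every 500ms until the node's Ready condition flips to True (about 3s here). A hedged client-go sketch of the same wait, loading the kubeconfig written earlier and hand-rolling the polling loop (names and paths taken from the log; the loop itself is illustrative, not minikube's implementation):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18233-10493/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same budget as the log
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-293078-m03", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for node to become Ready")
}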
	I0401 18:23:18.902282   27284 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 18:23:18.902357   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods
	I0401 18:23:18.902371   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:18.902380   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:18.902388   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:18.908821   27284 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 18:23:18.917997   27284 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-8v456" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:18.918068   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-8v456
	I0401 18:23:18.918078   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:18.918086   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:18.918090   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:18.921933   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:18.922667   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:23:18.922680   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:18.922688   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:18.922692   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:18.926297   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:18.927021   27284 pod_ready.go:92] pod "coredns-76f75df574-8v456" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:18.927042   27284 pod_ready.go:81] duration metric: took 9.022032ms for pod "coredns-76f75df574-8v456" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:18.927050   27284 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-sqxnb" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:18.927098   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-sqxnb
	I0401 18:23:18.927126   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:18.927133   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:18.927137   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:18.930084   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:23:18.930729   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:23:18.930746   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:18.930755   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:18.930761   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:18.933368   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:23:18.933987   27284 pod_ready.go:92] pod "coredns-76f75df574-sqxnb" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:18.934003   27284 pod_ready.go:81] duration metric: took 6.947943ms for pod "coredns-76f75df574-sqxnb" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:18.934011   27284 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:18.934050   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078
	I0401 18:23:18.934057   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:18.934063   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:18.934071   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:18.936855   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:23:18.937424   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:23:18.937438   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:18.937445   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:18.937448   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:18.939930   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:23:18.940414   27284 pod_ready.go:92] pod "etcd-ha-293078" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:18.940432   27284 pod_ready.go:81] duration metric: took 6.414484ms for pod "etcd-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:18.940445   27284 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:18.940500   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:23:18.940510   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:18.940520   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:18.940529   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:18.943265   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:23:18.943837   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:23:18.943856   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:18.943865   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:18.943869   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:18.946965   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:18.948091   27284 pod_ready.go:92] pod "etcd-ha-293078-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:18.948105   27284 pod_ready.go:81] duration metric: took 7.654042ms for pod "etcd-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:18.948112   27284 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-293078-m03" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:19.090402   27284 request.go:629] Waited for 142.236463ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:19.090485   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:19.090497   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:19.090504   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:19.090509   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:19.094389   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:19.290798   27284 request.go:629] Waited for 195.392287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:19.290849   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:19.290854   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:19.290868   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:19.290879   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:19.294707   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:19.490696   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:19.490718   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:19.490730   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:19.490736   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:19.494934   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:19.691066   27284 request.go:629] Waited for 195.382723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:19.691114   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:19.691119   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:19.691127   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:19.691133   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:19.694536   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:19.948893   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:19.948914   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:19.948921   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:19.948926   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:19.953796   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:20.090832   27284 request.go:629] Waited for 136.206148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:20.090909   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:20.090917   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:20.090926   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:20.090932   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:20.094356   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:20.448982   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:20.449001   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:20.449009   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:20.449013   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:20.454151   27284 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 18:23:20.491189   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:20.491213   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:20.491225   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:20.491232   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:20.495220   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:20.948349   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:20.948371   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:20.948379   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:20.948383   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:20.951977   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:20.952768   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:20.952793   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:20.952805   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:20.952810   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:20.955572   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:23:20.956401   27284 pod_ready.go:102] pod "etcd-ha-293078-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 18:23:21.448709   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:21.448730   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:21.448741   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:21.448746   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:21.453047   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:21.453827   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:21.453846   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:21.453857   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:21.453862   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:21.459506   27284 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 18:23:21.948488   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:21.948510   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:21.948518   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:21.948522   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:21.952961   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:21.954159   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:21.954177   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:21.954188   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:21.954192   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:21.957281   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:22.448514   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:22.448531   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:22.448539   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:22.448543   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:22.454087   27284 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 18:23:22.455015   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:22.455034   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:22.455044   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:22.455052   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:22.458813   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:22.949284   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:22.949307   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:22.949320   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:22.949327   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:22.952746   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:22.953718   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:22.953736   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:22.953747   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:22.953757   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:22.957590   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:22.958751   27284 pod_ready.go:102] pod "etcd-ha-293078-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 18:23:23.449230   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:23.449255   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:23.449264   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:23.449271   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:23.455030   27284 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 18:23:23.455927   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:23.455949   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:23.455960   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:23.455967   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:23.462037   27284 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 18:23:23.948885   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:23.948907   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:23.948917   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:23.948921   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:23.952749   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:23.953885   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:23.953905   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:23.953915   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:23.953919   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:23.957406   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:24.448874   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:24.448904   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:24.448916   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:24.448922   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:24.456101   27284 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 18:23:24.456945   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:24.456967   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:24.456977   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:24.456982   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:24.460705   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:24.949033   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:24.949052   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:24.949060   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:24.949064   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:24.952838   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:24.954000   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:24.954014   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:24.954022   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:24.954027   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:24.957518   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:24.958453   27284 pod_ready.go:92] pod "etcd-ha-293078-m03" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:24.958478   27284 pod_ready.go:81] duration metric: took 6.010358402s for pod "etcd-ha-293078-m03" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:24.958500   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:24.958573   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-293078
	I0401 18:23:24.958585   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:24.958595   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:24.958605   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:24.961697   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:24.962630   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:23:24.962648   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:24.962658   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:24.962662   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:24.965776   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:24.966674   27284 pod_ready.go:92] pod "kube-apiserver-ha-293078" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:24.966696   27284 pod_ready.go:81] duration metric: took 8.18025ms for pod "kube-apiserver-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:24.966708   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:24.966772   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-293078-m02
	I0401 18:23:24.966783   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:24.966793   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:24.966804   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:24.969662   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:23:24.970502   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:23:24.970517   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:24.970525   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:24.970531   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:24.973202   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:23:24.973799   27284 pod_ready.go:92] pod "kube-apiserver-ha-293078-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:24.973813   27284 pod_ready.go:81] duration metric: took 7.09873ms for pod "kube-apiserver-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:24.973827   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-293078-m03" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:25.091128   27284 request.go:629] Waited for 117.24775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-293078-m03
	I0401 18:23:25.091215   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-293078-m03
	I0401 18:23:25.091226   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:25.091238   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:25.091248   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:25.097772   27284 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 18:23:25.291420   27284 request.go:629] Waited for 192.202464ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:25.291485   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:25.291490   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:25.291501   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:25.291505   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:25.295096   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:25.490850   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-293078-m03
	I0401 18:23:25.490872   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:25.490880   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:25.490885   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:25.494669   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:25.690833   27284 request.go:629] Waited for 195.192282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:25.690924   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:25.690935   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:25.690943   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:25.690951   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:25.694528   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:25.974711   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-293078-m03
	I0401 18:23:25.974741   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:25.974753   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:25.974777   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:25.978565   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:26.091037   27284 request.go:629] Waited for 111.305411ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:26.091102   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:26.091109   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:26.091121   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:26.091133   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:26.095369   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:26.095925   27284 pod_ready.go:92] pod "kube-apiserver-ha-293078-m03" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:26.095941   27284 pod_ready.go:81] duration metric: took 1.12210596s for pod "kube-apiserver-ha-293078-m03" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:26.095951   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:26.291382   27284 request.go:629] Waited for 195.357086ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078
	I0401 18:23:26.291451   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078
	I0401 18:23:26.291459   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:26.291469   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:26.291483   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:26.294977   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:26.490960   27284 request.go:629] Waited for 195.101001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:23:26.491035   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:23:26.491044   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:26.491052   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:26.491061   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:26.497846   27284 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 18:23:26.498954   27284 pod_ready.go:92] pod "kube-controller-manager-ha-293078" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:26.498971   27284 pod_ready.go:81] duration metric: took 403.014452ms for pod "kube-controller-manager-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:26.498981   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:26.691040   27284 request.go:629] Waited for 192.000125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078-m02
	I0401 18:23:26.691105   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078-m02
	I0401 18:23:26.691113   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:26.691121   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:26.691128   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:26.695475   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:26.890804   27284 request.go:629] Waited for 194.161305ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:23:26.890856   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:23:26.890862   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:26.890869   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:26.890874   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:26.894943   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:26.895672   27284 pod_ready.go:92] pod "kube-controller-manager-ha-293078-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:26.895688   27284 pod_ready.go:81] duration metric: took 396.701752ms for pod "kube-controller-manager-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:26.895700   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-293078-m03" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:27.090831   27284 request.go:629] Waited for 195.062948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078-m03
	I0401 18:23:27.090887   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078-m03
	I0401 18:23:27.090907   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:27.090938   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:27.090946   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:27.094642   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:27.290846   27284 request.go:629] Waited for 195.388812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:27.290925   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:27.290935   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:27.290950   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:27.290958   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:27.294816   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:27.490793   27284 request.go:629] Waited for 94.269817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078-m03
	I0401 18:23:27.490843   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078-m03
	I0401 18:23:27.490849   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:27.490857   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:27.490863   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:27.494415   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:27.690578   27284 request.go:629] Waited for 195.27528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:27.690627   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:27.690632   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:27.690639   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:27.690645   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:27.694778   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:27.896224   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078-m03
	I0401 18:23:27.896247   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:27.896255   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:27.896259   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:27.899642   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:28.091172   27284 request.go:629] Waited for 190.354464ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:28.091255   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:28.091331   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:28.091368   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:28.091382   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:28.095777   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:28.396434   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078-m03
	I0401 18:23:28.396459   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:28.396471   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:28.396476   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:28.401744   27284 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 18:23:28.490810   27284 request.go:629] Waited for 88.074658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:28.490869   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:28.490880   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:28.490891   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:28.490901   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:28.494508   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:28.896766   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078-m03
	I0401 18:23:28.896791   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:28.896803   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:28.896808   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:28.901043   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:28.902086   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:28.902104   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:28.902114   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:28.902119   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:28.904997   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:23:28.905761   27284 pod_ready.go:92] pod "kube-controller-manager-ha-293078-m03" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:28.905778   27284 pod_ready.go:81] duration metric: took 2.010067506s for pod "kube-controller-manager-ha-293078-m03" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:28.905787   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8s2xk" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:29.091199   27284 request.go:629] Waited for 185.33684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8s2xk
	I0401 18:23:29.091265   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8s2xk
	I0401 18:23:29.091272   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:29.091288   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:29.091294   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:29.095118   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:29.291460   27284 request.go:629] Waited for 195.339258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:23:29.291536   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:23:29.291550   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:29.291558   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:29.291565   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:29.297770   27284 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 18:23:29.298430   27284 pod_ready.go:92] pod "kube-proxy-8s2xk" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:29.298446   27284 pod_ready.go:81] duration metric: took 392.653814ms for pod "kube-proxy-8s2xk" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:29.298457   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l5q2p" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:29.491448   27284 request.go:629] Waited for 192.915985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l5q2p
	I0401 18:23:29.491496   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l5q2p
	I0401 18:23:29.491502   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:29.491519   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:29.491531   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:29.495189   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:29.690845   27284 request.go:629] Waited for 194.460949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:23:29.690928   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:23:29.690939   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:29.690950   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:29.690960   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:29.695740   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:29.696844   27284 pod_ready.go:92] pod "kube-proxy-l5q2p" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:29.696861   27284 pod_ready.go:81] duration metric: took 398.393999ms for pod "kube-proxy-l5q2p" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:29.696871   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xjx5z" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:29.891428   27284 request.go:629] Waited for 194.489302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xjx5z
	I0401 18:23:29.891499   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xjx5z
	I0401 18:23:29.891511   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:29.891528   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:29.891541   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:29.895446   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:30.090662   27284 request.go:629] Waited for 194.28593ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:30.090737   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:30.090745   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:30.090754   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:30.090767   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:30.094756   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:30.095328   27284 pod_ready.go:92] pod "kube-proxy-xjx5z" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:30.095346   27284 pod_ready.go:81] duration metric: took 398.469601ms for pod "kube-proxy-xjx5z" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:30.095355   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:30.291055   27284 request.go:629] Waited for 195.637359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-293078
	I0401 18:23:30.291123   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-293078
	I0401 18:23:30.291135   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:30.291144   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:30.291188   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:30.295411   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:30.490842   27284 request.go:629] Waited for 194.702568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:23:30.490893   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:23:30.490899   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:30.490907   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:30.490917   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:30.494562   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:30.495041   27284 pod_ready.go:92] pod "kube-scheduler-ha-293078" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:30.495059   27284 pod_ready.go:81] duration metric: took 399.697969ms for pod "kube-scheduler-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:30.495069   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:30.691215   27284 request.go:629] Waited for 196.08517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-293078-m02
	I0401 18:23:30.691301   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-293078-m02
	I0401 18:23:30.691309   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:30.691320   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:30.691330   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:30.695045   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:30.891331   27284 request.go:629] Waited for 195.360086ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:23:30.891408   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:23:30.891413   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:30.891424   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:30.891435   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:30.895318   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:30.895896   27284 pod_ready.go:92] pod "kube-scheduler-ha-293078-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:30.895918   27284 pod_ready.go:81] duration metric: took 400.84198ms for pod "kube-scheduler-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:30.895934   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-293078-m03" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:31.090980   27284 request.go:629] Waited for 194.941422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-293078-m03
	I0401 18:23:31.091102   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-293078-m03
	I0401 18:23:31.091117   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:31.091129   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:31.091140   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:31.095751   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:31.290824   27284 request.go:629] Waited for 194.375107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:31.290876   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:31.290881   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:31.290893   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:31.290911   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:31.294665   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:31.295287   27284 pod_ready.go:92] pod "kube-scheduler-ha-293078-m03" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:31.295308   27284 pod_ready.go:81] duration metric: took 399.359654ms for pod "kube-scheduler-ha-293078-m03" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:31.295326   27284 pod_ready.go:38] duration metric: took 12.393032861s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 18:23:31.295348   27284 api_server.go:52] waiting for apiserver process to appear ...
	I0401 18:23:31.295409   27284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:23:31.311779   27284 api_server.go:72] duration metric: took 15.777106233s to wait for apiserver process to appear ...
	I0401 18:23:31.311796   27284 api_server.go:88] waiting for apiserver healthz status ...
	I0401 18:23:31.311811   27284 api_server.go:253] Checking apiserver healthz at https://192.168.39.74:8443/healthz ...
	I0401 18:23:31.317726   27284 api_server.go:279] https://192.168.39.74:8443/healthz returned 200:
	ok
	I0401 18:23:31.317790   27284 round_trippers.go:463] GET https://192.168.39.74:8443/version
	I0401 18:23:31.317803   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:31.317814   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:31.317820   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:31.318781   27284 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0401 18:23:31.318832   27284 api_server.go:141] control plane version: v1.29.3
	I0401 18:23:31.318850   27284 api_server.go:131] duration metric: took 7.047195ms to wait for apiserver health ...
	I0401 18:23:31.318858   27284 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 18:23:31.491267   27284 request.go:629] Waited for 172.34838ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods
	I0401 18:23:31.491326   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods
	I0401 18:23:31.491333   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:31.491340   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:31.491345   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:31.499252   27284 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 18:23:31.506086   27284 system_pods.go:59] 24 kube-system pods found
	I0401 18:23:31.506121   27284 system_pods.go:61] "coredns-76f75df574-8v456" [28cf6a1d-90df-4802-ad3c-9c0276380a44] Running
	I0401 18:23:31.506129   27284 system_pods.go:61] "coredns-76f75df574-sqxnb" [17868bbd-b0e9-460c-b191-9707f613af0a] Running
	I0401 18:23:31.506136   27284 system_pods.go:61] "etcd-ha-293078" [0cf5a089-d409-4fa2-85de-fcc012d79ff3] Running
	I0401 18:23:31.506143   27284 system_pods.go:61] "etcd-ha-293078-m02" [8acd3424-a11f-4a40-97cf-b7e8b4a0975f] Running
	I0401 18:23:31.506151   27284 system_pods.go:61] "etcd-ha-293078-m03" [473cf563-e7fb-4aee-8faa-eda7611bdff1] Running
	I0401 18:23:31.506157   27284 system_pods.go:61] "kindnet-ccxmv" [d3c6474c-bc4a-43fe-85cf-1f250eaaf7a9] Running
	I0401 18:23:31.506165   27284 system_pods.go:61] "kindnet-f4djp" [5b26be41-434f-4908-95aa-64da9fe7ecb0] Running
	I0401 18:23:31.506170   27284 system_pods.go:61] "kindnet-rjfcj" [63f6ecc3-4bd0-406b-8096-ffd6115a2de3] Running
	I0401 18:23:31.506176   27284 system_pods.go:61] "kube-apiserver-ha-293078" [a0e08a32-b673-46b9-b965-9d321e4db6f1] Running
	I0401 18:23:31.506183   27284 system_pods.go:61] "kube-apiserver-ha-293078-m02" [533b0e64-f078-44f0-be6f-a8a3d880138a] Running
	I0401 18:23:31.506189   27284 system_pods.go:61] "kube-apiserver-ha-293078-m03" [ba831509-c5d3-459b-a79e-fbaead3e632d] Running
	I0401 18:23:31.506196   27284 system_pods.go:61] "kube-controller-manager-ha-293078" [3e9c2dbe-f437-4619-9b04-f30d9dab7f61] Running
	I0401 18:23:31.506203   27284 system_pods.go:61] "kube-controller-manager-ha-293078-m02" [e8879a89-4775-488b-9229-e86c2c891b5f] Running
	I0401 18:23:31.506209   27284 system_pods.go:61] "kube-controller-manager-ha-293078-m03" [d38e0572-a059-44bb-a05a-ddf69667c6f6] Running
	I0401 18:23:31.506217   27284 system_pods.go:61] "kube-proxy-8s2xk" [4fc029ea-1f23-497b-8fe3-38fc0e0a4c38] Running
	I0401 18:23:31.506229   27284 system_pods.go:61] "kube-proxy-l5q2p" [167db687-ac11-4f57-83c1-048c31a7b2cb] Running
	I0401 18:23:31.506237   27284 system_pods.go:61] "kube-proxy-xjx5z" [7278ced7-d2eb-4c92-b78a-3d76ba7ad4c8] Running
	I0401 18:23:31.506241   27284 system_pods.go:61] "kube-scheduler-ha-293078" [87acbf1d-d53b-47d7-816a-492ba644ad0e] Running
	I0401 18:23:31.506244   27284 system_pods.go:61] "kube-scheduler-ha-293078-m02" [17a9003c-fd9f-48e2-b4b7-1ee6606ef480] Running
	I0401 18:23:31.506247   27284 system_pods.go:61] "kube-scheduler-ha-293078-m03" [2a7eb692-9006-42af-9cbf-e8c0101b08ce] Running
	I0401 18:23:31.506250   27284 system_pods.go:61] "kube-vip-ha-293078" [543de9ec-6f50-46b9-b6ec-f58964f81f12] Running
	I0401 18:23:31.506253   27284 system_pods.go:61] "kube-vip-ha-293078-m02" [6714926d-3bce-4773-92d6-e3811f532a37] Running
	I0401 18:23:31.506257   27284 system_pods.go:61] "kube-vip-ha-293078-m03" [36491063-d52a-4b27-bded-7d615c52cb80] Running
	I0401 18:23:31.506260   27284 system_pods.go:61] "storage-provisioner" [3d7c42eb-192e-4ae0-b5ae-0883ef5e740c] Running
	I0401 18:23:31.506266   27284 system_pods.go:74] duration metric: took 187.399526ms to wait for pod list to return data ...
	I0401 18:23:31.506282   27284 default_sa.go:34] waiting for default service account to be created ...
	I0401 18:23:31.690506   27284 request.go:629] Waited for 184.152285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/default/serviceaccounts
	I0401 18:23:31.690562   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/default/serviceaccounts
	I0401 18:23:31.690568   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:31.690576   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:31.690580   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:31.695667   27284 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 18:23:31.695775   27284 default_sa.go:45] found service account: "default"
	I0401 18:23:31.695792   27284 default_sa.go:55] duration metric: took 189.503133ms for default service account to be created ...
	I0401 18:23:31.695802   27284 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 18:23:31.891161   27284 request.go:629] Waited for 195.268872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods
	I0401 18:23:31.891217   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods
	I0401 18:23:31.891224   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:31.891235   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:31.891245   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:31.903457   27284 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0401 18:23:31.910479   27284 system_pods.go:86] 24 kube-system pods found
	I0401 18:23:31.910503   27284 system_pods.go:89] "coredns-76f75df574-8v456" [28cf6a1d-90df-4802-ad3c-9c0276380a44] Running
	I0401 18:23:31.910508   27284 system_pods.go:89] "coredns-76f75df574-sqxnb" [17868bbd-b0e9-460c-b191-9707f613af0a] Running
	I0401 18:23:31.910512   27284 system_pods.go:89] "etcd-ha-293078" [0cf5a089-d409-4fa2-85de-fcc012d79ff3] Running
	I0401 18:23:31.910516   27284 system_pods.go:89] "etcd-ha-293078-m02" [8acd3424-a11f-4a40-97cf-b7e8b4a0975f] Running
	I0401 18:23:31.910520   27284 system_pods.go:89] "etcd-ha-293078-m03" [473cf563-e7fb-4aee-8faa-eda7611bdff1] Running
	I0401 18:23:31.910523   27284 system_pods.go:89] "kindnet-ccxmv" [d3c6474c-bc4a-43fe-85cf-1f250eaaf7a9] Running
	I0401 18:23:31.910527   27284 system_pods.go:89] "kindnet-f4djp" [5b26be41-434f-4908-95aa-64da9fe7ecb0] Running
	I0401 18:23:31.910531   27284 system_pods.go:89] "kindnet-rjfcj" [63f6ecc3-4bd0-406b-8096-ffd6115a2de3] Running
	I0401 18:23:31.910535   27284 system_pods.go:89] "kube-apiserver-ha-293078" [a0e08a32-b673-46b9-b965-9d321e4db6f1] Running
	I0401 18:23:31.910539   27284 system_pods.go:89] "kube-apiserver-ha-293078-m02" [533b0e64-f078-44f0-be6f-a8a3d880138a] Running
	I0401 18:23:31.910543   27284 system_pods.go:89] "kube-apiserver-ha-293078-m03" [ba831509-c5d3-459b-a79e-fbaead3e632d] Running
	I0401 18:23:31.910546   27284 system_pods.go:89] "kube-controller-manager-ha-293078" [3e9c2dbe-f437-4619-9b04-f30d9dab7f61] Running
	I0401 18:23:31.910550   27284 system_pods.go:89] "kube-controller-manager-ha-293078-m02" [e8879a89-4775-488b-9229-e86c2c891b5f] Running
	I0401 18:23:31.910554   27284 system_pods.go:89] "kube-controller-manager-ha-293078-m03" [d38e0572-a059-44bb-a05a-ddf69667c6f6] Running
	I0401 18:23:31.910558   27284 system_pods.go:89] "kube-proxy-8s2xk" [4fc029ea-1f23-497b-8fe3-38fc0e0a4c38] Running
	I0401 18:23:31.910561   27284 system_pods.go:89] "kube-proxy-l5q2p" [167db687-ac11-4f57-83c1-048c31a7b2cb] Running
	I0401 18:23:31.910565   27284 system_pods.go:89] "kube-proxy-xjx5z" [7278ced7-d2eb-4c92-b78a-3d76ba7ad4c8] Running
	I0401 18:23:31.910569   27284 system_pods.go:89] "kube-scheduler-ha-293078" [87acbf1d-d53b-47d7-816a-492ba644ad0e] Running
	I0401 18:23:31.910574   27284 system_pods.go:89] "kube-scheduler-ha-293078-m02" [17a9003c-fd9f-48e2-b4b7-1ee6606ef480] Running
	I0401 18:23:31.910582   27284 system_pods.go:89] "kube-scheduler-ha-293078-m03" [2a7eb692-9006-42af-9cbf-e8c0101b08ce] Running
	I0401 18:23:31.910585   27284 system_pods.go:89] "kube-vip-ha-293078" [543de9ec-6f50-46b9-b6ec-f58964f81f12] Running
	I0401 18:23:31.910588   27284 system_pods.go:89] "kube-vip-ha-293078-m02" [6714926d-3bce-4773-92d6-e3811f532a37] Running
	I0401 18:23:31.910591   27284 system_pods.go:89] "kube-vip-ha-293078-m03" [36491063-d52a-4b27-bded-7d615c52cb80] Running
	I0401 18:23:31.910595   27284 system_pods.go:89] "storage-provisioner" [3d7c42eb-192e-4ae0-b5ae-0883ef5e740c] Running
	I0401 18:23:31.910601   27284 system_pods.go:126] duration metric: took 214.793197ms to wait for k8s-apps to be running ...
	I0401 18:23:31.910610   27284 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 18:23:31.910660   27284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:23:31.928486   27284 system_svc.go:56] duration metric: took 17.86774ms WaitForService to wait for kubelet
	I0401 18:23:31.928520   27284 kubeadm.go:576] duration metric: took 16.39384603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 18:23:31.928545   27284 node_conditions.go:102] verifying NodePressure condition ...
	I0401 18:23:32.090928   27284 request.go:629] Waited for 162.316288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes
	I0401 18:23:32.090980   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes
	I0401 18:23:32.090985   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:32.090992   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:32.090996   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:32.094874   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:32.096205   27284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 18:23:32.096229   27284 node_conditions.go:123] node cpu capacity is 2
	I0401 18:23:32.096242   27284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 18:23:32.096247   27284 node_conditions.go:123] node cpu capacity is 2
	I0401 18:23:32.096253   27284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 18:23:32.096258   27284 node_conditions.go:123] node cpu capacity is 2
	I0401 18:23:32.096267   27284 node_conditions.go:105] duration metric: took 167.715883ms to run NodePressure ...
	I0401 18:23:32.096281   27284 start.go:240] waiting for startup goroutines ...
	I0401 18:23:32.096309   27284 start.go:254] writing updated cluster config ...
	I0401 18:23:32.096594   27284 ssh_runner.go:195] Run: rm -f paused
	I0401 18:23:32.148580   27284 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0401 18:23:32.150915   27284 out.go:177] * Done! kubectl is now configured to use "ha-293078" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.553169284Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=85557876-4228-499f-a0bb-4f6d55d3e808 name=/runtime.v1.RuntimeService/Version
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.554815317Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2da2cb86-537c-4ae9-acad-ff99a6c8e781 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.555266487Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711996021555242675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2da2cb86-537c-4ae9-acad-ff99a6c8e781 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.555939196Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=759a9663-d8ae-4338-bec3-ca04280c8d84 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.555993244Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=759a9663-d8ae-4338-bec3-ca04280c8d84 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.556234289Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61d746cfabdcf1e527c0a0136c923d19be52285d3c766da6faaba4eb3b3c013d,PodSandboxId:d2ac86b05a9f4d146abfc431861426b75aa121e86155e33f6885c2287d35c2d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711995814759224430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,},Annotations:map[string]string{io.kubernetes.container.hash: 94944394,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4afd34fc1a474daf3c2e777ef35aa4ae136ec34f86760a743d050e2e52749213,PodSandboxId:55c5a220e09f3ccc632cd8580e6c21d3fd866632a80c3f27ffa1c7eba62a598b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711995665052098243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{io.kubernetes.container.hash: 245032af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce906a6132be484cf993679eea95d6637b9e3b3e9884820e95723b2b2c33e7e6,PodSandboxId:184b6f8a0b09d310e6167558bc2e043f793ec8069ada3f99f07f8c4bf5bbe2a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711995665008742384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,},Annotations:map[string]string{io.kubernetes.container.hash: 286c3144,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be43b3abd52fcb26f579806533a081948a895cdd479befbbc9bd5446fdc060e9,PodSandboxId:f885d7f062d4925a0c12a93de7fab4a08ad786e7dc47a543daf4c046acd992d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711995665020678327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b
0e9-460c-b191-9707f613af0a,},Annotations:map[string]string{io.kubernetes.container.hash: 48f6bb3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39a744dcbdcbe85e94e3ddfb1c32297919a24a7d666cb56091bb090ab4f1b169,PodSandboxId:478784c20d5b4ddab5f45c2a97205bec4962f4b790bbc0e5366d0feba71d6a56,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711995
663098635767,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c24bf0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7ab06dacb1f801ea9714513d3f23a0bad938d609fb9f291d0ec0c4903d8d6a,PodSandboxId:849ffff6ee9e4b1fed8bc9e2950a7f2d227adf1318502c7d46a0e03e73165ca2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711995662809497703,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,},Annotations:map[string]string{io.kubernetes.container.hash: a09407a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1af36287bacaf83243c8481c963e2cf6f3ec89e4ffb87b80a135b18652a2c9d,PodSandboxId:ac02e9b682f1fb8db19ffd11802dd48a07afe084c904748e3e5127b031338d62,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711995644476713228,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cee692bcccd6b0feab0f0ba7206df66e,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd1ccbceec8c5056f450169f49c17acf202e064825e6c51a55ca89e591e25b5,PodSandboxId:91aa9ea508a082ce745f620d0c3c5161f596f6efef8dca30ddfad2fdc5376338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711995642771196752,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9284db03ef8c515d8a7475c032ebbaa4d501954b6e1f5c383cdcdb3ebf6afb,PodSandboxId:141c3ab4ae279ab738ee7ad84077cefbc2db4a8489f0ea7b3526708562786979,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711995642820315476,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,},Annotations:map[string]string{io.kubernetes.container.hash: 886f76f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8471f59f3de235b71fe57e79412f27884ceb62d668027d7fe3730009d2fbb1fa,PodSandboxId:34af251b6243e69ca34eeeb959254863f3933b8142c33d2027be0d4f7647ea8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711995642748010111,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-293078,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 5bcf3746,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e36af39fdf13dd3cf98d2d4a8e7666aea913228d31de663d19c302848663d798,PodSandboxId:4706bec6244a3acd46c920d54796080f4432348e280610cc7f24ee816e251423,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711995642730928046,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=759a9663-d8ae-4338-bec3-ca04280c8d84 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.604181595Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f618b63a-55aa-42cd-b0fe-ce8b1d3480d4 name=/runtime.v1.RuntimeService/Version
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.604255008Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f618b63a-55aa-42cd-b0fe-ce8b1d3480d4 name=/runtime.v1.RuntimeService/Version
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.606168645Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f138c2f-4882-4f24-bf47-d63b571fe672 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.606703388Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711996021606677546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f138c2f-4882-4f24-bf47-d63b571fe672 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.607788259Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50a6773b-d91a-4b3a-ae5c-a7409639d040 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.607886873Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50a6773b-d91a-4b3a-ae5c-a7409639d040 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.608156573Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61d746cfabdcf1e527c0a0136c923d19be52285d3c766da6faaba4eb3b3c013d,PodSandboxId:d2ac86b05a9f4d146abfc431861426b75aa121e86155e33f6885c2287d35c2d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711995814759224430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,},Annotations:map[string]string{io.kubernetes.container.hash: 94944394,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4afd34fc1a474daf3c2e777ef35aa4ae136ec34f86760a743d050e2e52749213,PodSandboxId:55c5a220e09f3ccc632cd8580e6c21d3fd866632a80c3f27ffa1c7eba62a598b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711995665052098243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{io.kubernetes.container.hash: 245032af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce906a6132be484cf993679eea95d6637b9e3b3e9884820e95723b2b2c33e7e6,PodSandboxId:184b6f8a0b09d310e6167558bc2e043f793ec8069ada3f99f07f8c4bf5bbe2a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711995665008742384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,},Annotations:map[string]string{io.kubernetes.container.hash: 286c3144,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be43b3abd52fcb26f579806533a081948a895cdd479befbbc9bd5446fdc060e9,PodSandboxId:f885d7f062d4925a0c12a93de7fab4a08ad786e7dc47a543daf4c046acd992d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711995665020678327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b
0e9-460c-b191-9707f613af0a,},Annotations:map[string]string{io.kubernetes.container.hash: 48f6bb3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39a744dcbdcbe85e94e3ddfb1c32297919a24a7d666cb56091bb090ab4f1b169,PodSandboxId:478784c20d5b4ddab5f45c2a97205bec4962f4b790bbc0e5366d0feba71d6a56,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711995
663098635767,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c24bf0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7ab06dacb1f801ea9714513d3f23a0bad938d609fb9f291d0ec0c4903d8d6a,PodSandboxId:849ffff6ee9e4b1fed8bc9e2950a7f2d227adf1318502c7d46a0e03e73165ca2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711995662809497703,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,},Annotations:map[string]string{io.kubernetes.container.hash: a09407a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1af36287bacaf83243c8481c963e2cf6f3ec89e4ffb87b80a135b18652a2c9d,PodSandboxId:ac02e9b682f1fb8db19ffd11802dd48a07afe084c904748e3e5127b031338d62,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711995644476713228,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cee692bcccd6b0feab0f0ba7206df66e,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd1ccbceec8c5056f450169f49c17acf202e064825e6c51a55ca89e591e25b5,PodSandboxId:91aa9ea508a082ce745f620d0c3c5161f596f6efef8dca30ddfad2fdc5376338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711995642771196752,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9284db03ef8c515d8a7475c032ebbaa4d501954b6e1f5c383cdcdb3ebf6afb,PodSandboxId:141c3ab4ae279ab738ee7ad84077cefbc2db4a8489f0ea7b3526708562786979,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711995642820315476,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,},Annotations:map[string]string{io.kubernetes.container.hash: 886f76f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8471f59f3de235b71fe57e79412f27884ceb62d668027d7fe3730009d2fbb1fa,PodSandboxId:34af251b6243e69ca34eeeb959254863f3933b8142c33d2027be0d4f7647ea8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711995642748010111,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-293078,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 5bcf3746,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e36af39fdf13dd3cf98d2d4a8e7666aea913228d31de663d19c302848663d798,PodSandboxId:4706bec6244a3acd46c920d54796080f4432348e280610cc7f24ee816e251423,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711995642730928046,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50a6773b-d91a-4b3a-ae5c-a7409639d040 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.615373399Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=a7b5d10d-2bc9-44bb-95f1-63f356e3e522 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.615885043Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d2ac86b05a9f4d146abfc431861426b75aa121e86155e33f6885c2287d35c2d9,Metadata:&PodSandboxMetadata{Name:busybox-7fdf7869d9-7tn8z,Uid:0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711995813445010514,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,pod-template-hash: 7fdf7869d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-01T18:23:33.111482875Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:55c5a220e09f3ccc632cd8580e6c21d3fd866632a80c3f27ffa1c7eba62a598b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,Namespace:kube-system,Attempt:0,},State:SAN
DBOX_READY,CreatedAt:1711995664781922086,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\
"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-01T18:21:04.457438180Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:184b6f8a0b09d310e6167558bc2e043f793ec8069ada3f99f07f8c4bf5bbe2a3,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-8v456,Uid:28cf6a1d-90df-4802-ad3c-9c0276380a44,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711995664780792673,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-01T18:21:04.458943684Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f885d7f062d4925a0c12a93de7fab4a08ad786e7dc47a543daf4c046acd992d8,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-sqxnb,Uid:17868bbd-b0e9-460c-b191-9707f613af0a,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1711995664752813870,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b0e9-460c-b191-9707f613af0a,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-01T18:21:04.445579090Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:478784c20d5b4ddab5f45c2a97205bec4962f4b790bbc0e5366d0feba71d6a56,Metadata:&PodSandboxMetadata{Name:kindnet-rjfcj,Uid:63f6ecc3-4bd0-406b-8096-ffd6115a2de3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711995662478550212,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annota
tions:map[string]string{kubernetes.io/config.seen: 2024-04-01T18:21:02.164551892Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:849ffff6ee9e4b1fed8bc9e2950a7f2d227adf1318502c7d46a0e03e73165ca2,Metadata:&PodSandboxMetadata{Name:kube-proxy-l5q2p,Uid:167db687-ac11-4f57-83c1-048c31a7b2cb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711995662464835312,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-01T18:21:02.139775812Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:34af251b6243e69ca34eeeb959254863f3933b8142c33d2027be0d4f7647ea8b,Metadata:&PodSandboxMetadata{Name:etcd-ha-293078,Uid:ed3d89e46aa7fdf04d31b28a37841ad5,Namespace:kube-system,Attempt:0,},State
:SANDBOX_READY,CreatedAt:1711995642479178891,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.74:2379,kubernetes.io/config.hash: ed3d89e46aa7fdf04d31b28a37841ad5,kubernetes.io/config.seen: 2024-04-01T18:20:41.977515320Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:141c3ab4ae279ab738ee7ad84077cefbc2db4a8489f0ea7b3526708562786979,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-293078,Uid:111b7388841713ed3598aaf599c56758,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711995642474868499,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111b73888
41713ed3598aaf599c56758,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.74:8443,kubernetes.io/config.hash: 111b7388841713ed3598aaf599c56758,kubernetes.io/config.seen: 2024-04-01T18:20:41.977516588Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4706bec6244a3acd46c920d54796080f4432348e280610cc7f24ee816e251423,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-293078,Uid:431f977c37ad2da28fe70e24f8f4cfb5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711995642464688892,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 431f977c37ad2da28fe70e24f8f4cfb5,kubernetes.io/config.seen: 2024-04-01T18:20:41.977517684Z,kube
rnetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:91aa9ea508a082ce745f620d0c3c5161f596f6efef8dca30ddfad2fdc5376338,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-293078,Uid:14a552ff6182f687744d2f77e0ce85cc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711995642463307147,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 14a552ff6182f687744d2f77e0ce85cc,kubernetes.io/config.seen: 2024-04-01T18:20:41.977518552Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ac02e9b682f1fb8db19ffd11802dd48a07afe084c904748e3e5127b031338d62,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-293078,Uid:cee692bcccd6b0feab0f0ba7206df66e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1711995642446686959,Label
s:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cee692bcccd6b0feab0f0ba7206df66e,},Annotations:map[string]string{kubernetes.io/config.hash: cee692bcccd6b0feab0f0ba7206df66e,kubernetes.io/config.seen: 2024-04-01T18:20:41.977511707Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=a7b5d10d-2bc9-44bb-95f1-63f356e3e522 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.616958218Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a9ea136-12b8-452c-9dec-a7b826ed75b3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.617203517Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a9ea136-12b8-452c-9dec-a7b826ed75b3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.618059024Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61d746cfabdcf1e527c0a0136c923d19be52285d3c766da6faaba4eb3b3c013d,PodSandboxId:d2ac86b05a9f4d146abfc431861426b75aa121e86155e33f6885c2287d35c2d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711995814759224430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,},Annotations:map[string]string{io.kubernetes.container.hash: 94944394,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4afd34fc1a474daf3c2e777ef35aa4ae136ec34f86760a743d050e2e52749213,PodSandboxId:55c5a220e09f3ccc632cd8580e6c21d3fd866632a80c3f27ffa1c7eba62a598b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711995665052098243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{io.kubernetes.container.hash: 245032af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce906a6132be484cf993679eea95d6637b9e3b3e9884820e95723b2b2c33e7e6,PodSandboxId:184b6f8a0b09d310e6167558bc2e043f793ec8069ada3f99f07f8c4bf5bbe2a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711995665008742384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,},Annotations:map[string]string{io.kubernetes.container.hash: 286c3144,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be43b3abd52fcb26f579806533a081948a895cdd479befbbc9bd5446fdc060e9,PodSandboxId:f885d7f062d4925a0c12a93de7fab4a08ad786e7dc47a543daf4c046acd992d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711995665020678327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b
0e9-460c-b191-9707f613af0a,},Annotations:map[string]string{io.kubernetes.container.hash: 48f6bb3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39a744dcbdcbe85e94e3ddfb1c32297919a24a7d666cb56091bb090ab4f1b169,PodSandboxId:478784c20d5b4ddab5f45c2a97205bec4962f4b790bbc0e5366d0feba71d6a56,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711995
663098635767,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c24bf0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7ab06dacb1f801ea9714513d3f23a0bad938d609fb9f291d0ec0c4903d8d6a,PodSandboxId:849ffff6ee9e4b1fed8bc9e2950a7f2d227adf1318502c7d46a0e03e73165ca2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711995662809497703,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,},Annotations:map[string]string{io.kubernetes.container.hash: a09407a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1af36287bacaf83243c8481c963e2cf6f3ec89e4ffb87b80a135b18652a2c9d,PodSandboxId:ac02e9b682f1fb8db19ffd11802dd48a07afe084c904748e3e5127b031338d62,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711995644476713228,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cee692bcccd6b0feab0f0ba7206df66e,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd1ccbceec8c5056f450169f49c17acf202e064825e6c51a55ca89e591e25b5,PodSandboxId:91aa9ea508a082ce745f620d0c3c5161f596f6efef8dca30ddfad2fdc5376338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711995642771196752,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9284db03ef8c515d8a7475c032ebbaa4d501954b6e1f5c383cdcdb3ebf6afb,PodSandboxId:141c3ab4ae279ab738ee7ad84077cefbc2db4a8489f0ea7b3526708562786979,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711995642820315476,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,},Annotations:map[string]string{io.kubernetes.container.hash: 886f76f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8471f59f3de235b71fe57e79412f27884ceb62d668027d7fe3730009d2fbb1fa,PodSandboxId:34af251b6243e69ca34eeeb959254863f3933b8142c33d2027be0d4f7647ea8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711995642748010111,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-293078,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 5bcf3746,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e36af39fdf13dd3cf98d2d4a8e7666aea913228d31de663d19c302848663d798,PodSandboxId:4706bec6244a3acd46c920d54796080f4432348e280610cc7f24ee816e251423,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711995642730928046,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a9ea136-12b8-452c-9dec-a7b826ed75b3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.659115160Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6b3060be-31a4-4368-8d49-733031409fa1 name=/runtime.v1.RuntimeService/Version
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.659440082Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6b3060be-31a4-4368-8d49-733031409fa1 name=/runtime.v1.RuntimeService/Version
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.660878556Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=de6ec796-be78-41a3-b482-4963821bf89e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.662514318Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711996021662485613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de6ec796-be78-41a3-b482-4963821bf89e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.663283564Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5237cc4-e256-4f85-855a-ea92e3783215 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.663337065Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5237cc4-e256-4f85-855a-ea92e3783215 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:27:01 ha-293078 crio[679]: time="2024-04-01 18:27:01.663723852Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61d746cfabdcf1e527c0a0136c923d19be52285d3c766da6faaba4eb3b3c013d,PodSandboxId:d2ac86b05a9f4d146abfc431861426b75aa121e86155e33f6885c2287d35c2d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711995814759224430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,},Annotations:map[string]string{io.kubernetes.container.hash: 94944394,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4afd34fc1a474daf3c2e777ef35aa4ae136ec34f86760a743d050e2e52749213,PodSandboxId:55c5a220e09f3ccc632cd8580e6c21d3fd866632a80c3f27ffa1c7eba62a598b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711995665052098243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{io.kubernetes.container.hash: 245032af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce906a6132be484cf993679eea95d6637b9e3b3e9884820e95723b2b2c33e7e6,PodSandboxId:184b6f8a0b09d310e6167558bc2e043f793ec8069ada3f99f07f8c4bf5bbe2a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711995665008742384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,},Annotations:map[string]string{io.kubernetes.container.hash: 286c3144,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be43b3abd52fcb26f579806533a081948a895cdd479befbbc9bd5446fdc060e9,PodSandboxId:f885d7f062d4925a0c12a93de7fab4a08ad786e7dc47a543daf4c046acd992d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711995665020678327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b
0e9-460c-b191-9707f613af0a,},Annotations:map[string]string{io.kubernetes.container.hash: 48f6bb3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39a744dcbdcbe85e94e3ddfb1c32297919a24a7d666cb56091bb090ab4f1b169,PodSandboxId:478784c20d5b4ddab5f45c2a97205bec4962f4b790bbc0e5366d0feba71d6a56,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711995
663098635767,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c24bf0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7ab06dacb1f801ea9714513d3f23a0bad938d609fb9f291d0ec0c4903d8d6a,PodSandboxId:849ffff6ee9e4b1fed8bc9e2950a7f2d227adf1318502c7d46a0e03e73165ca2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711995662809497703,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,},Annotations:map[string]string{io.kubernetes.container.hash: a09407a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1af36287bacaf83243c8481c963e2cf6f3ec89e4ffb87b80a135b18652a2c9d,PodSandboxId:ac02e9b682f1fb8db19ffd11802dd48a07afe084c904748e3e5127b031338d62,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711995644476713228,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cee692bcccd6b0feab0f0ba7206df66e,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd1ccbceec8c5056f450169f49c17acf202e064825e6c51a55ca89e591e25b5,PodSandboxId:91aa9ea508a082ce745f620d0c3c5161f596f6efef8dca30ddfad2fdc5376338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711995642771196752,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9284db03ef8c515d8a7475c032ebbaa4d501954b6e1f5c383cdcdb3ebf6afb,PodSandboxId:141c3ab4ae279ab738ee7ad84077cefbc2db4a8489f0ea7b3526708562786979,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711995642820315476,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,},Annotations:map[string]string{io.kubernetes.container.hash: 886f76f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8471f59f3de235b71fe57e79412f27884ceb62d668027d7fe3730009d2fbb1fa,PodSandboxId:34af251b6243e69ca34eeeb959254863f3933b8142c33d2027be0d4f7647ea8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711995642748010111,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-293078,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 5bcf3746,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e36af39fdf13dd3cf98d2d4a8e7666aea913228d31de663d19c302848663d798,PodSandboxId:4706bec6244a3acd46c920d54796080f4432348e280610cc7f24ee816e251423,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711995642730928046,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5237cc4-e256-4f85-855a-ea92e3783215 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	61d746cfabdcf       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   d2ac86b05a9f4       busybox-7fdf7869d9-7tn8z
	4afd34fc1a474       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   55c5a220e09f3       storage-provisioner
	be43b3abd52fc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   f885d7f062d49       coredns-76f75df574-sqxnb
	ce906a6132be4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   184b6f8a0b09d       coredns-76f75df574-8v456
	39a744dcbdcbe       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Running             kindnet-cni               0                   478784c20d5b4       kindnet-rjfcj
	8d7ab06dacb1f       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      5 minutes ago       Running             kube-proxy                0                   849ffff6ee9e4       kube-proxy-l5q2p
	c1af36287baca       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     6 minutes ago       Running             kube-vip                  0                   ac02e9b682f1f       kube-vip-ha-293078
	9d9284db03ef8       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      6 minutes ago       Running             kube-apiserver            0                   141c3ab4ae279       kube-apiserver-ha-293078
	6bd1ccbceec8c       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      6 minutes ago       Running             kube-scheduler            0                   91aa9ea508a08       kube-scheduler-ha-293078
	8471f59f3de23       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   34af251b6243e       etcd-ha-293078
	e36af39fdf13d       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      6 minutes ago       Running             kube-controller-manager   0                   4706bec6244a3       kube-controller-manager-ha-293078
	
	
	==> coredns [be43b3abd52fcb26f579806533a081948a895cdd479befbbc9bd5446fdc060e9] <==
	[INFO] 127.0.0.1:60623 - 22139 "HINFO IN 659470979403797556.9141881756457822511. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009678045s
	[INFO] 10.244.0.4:33543 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.003949293s
	[INFO] 10.244.1.2:36542 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000304415s
	[INFO] 10.244.1.2:60003 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000141661s
	[INFO] 10.244.1.2:49897 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002016415s
	[INFO] 10.244.0.4:48954 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004445287s
	[INFO] 10.244.0.4:41430 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00325614s
	[INFO] 10.244.0.4:43938 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000214694s
	[INFO] 10.244.0.4:55272 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150031s
	[INFO] 10.244.1.2:53484 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00036286s
	[INFO] 10.244.1.2:40882 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000191317s
	[INFO] 10.244.1.2:44362 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000231809s
	[INFO] 10.244.2.2:38878 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130983s
	[INFO] 10.244.2.2:55123 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000140829s
	[INFO] 10.244.2.2:60293 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000207687s
	[INFO] 10.244.2.2:42748 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000162463s
	[INFO] 10.244.0.4:51962 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171832s
	[INFO] 10.244.1.2:34522 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169219s
	[INFO] 10.244.1.2:45853 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000149138s
	[INFO] 10.244.0.4:34814 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000154553s
	[INFO] 10.244.1.2:51449 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125618s
	[INFO] 10.244.1.2:53188 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000205396s
	[INFO] 10.244.2.2:55517 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011978s
	[INFO] 10.244.2.2:58847 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014087s
	[INFO] 10.244.2.2:55721 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148617s
	
	
	==> coredns [ce906a6132be484cf993679eea95d6637b9e3b3e9884820e95723b2b2c33e7e6] <==
	[INFO] 10.244.0.4:39293 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000139049s
	[INFO] 10.244.1.2:34347 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153693s
	[INFO] 10.244.1.2:53017 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002482407s
	[INFO] 10.244.1.2:42256 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00177498s
	[INFO] 10.244.1.2:45121 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00042512s
	[INFO] 10.244.1.2:46630 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135925s
	[INFO] 10.244.2.2:37886 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147427s
	[INFO] 10.244.2.2:47974 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002026718s
	[INFO] 10.244.2.2:36742 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132507s
	[INFO] 10.244.2.2:60458 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001236853s
	[INFO] 10.244.0.4:36514 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079136s
	[INFO] 10.244.0.4:54146 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061884s
	[INFO] 10.244.0.4:48422 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049796s
	[INFO] 10.244.1.2:53602 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174827s
	[INFO] 10.244.1.2:52752 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123202s
	[INFO] 10.244.2.2:42824 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122778s
	[INFO] 10.244.2.2:39412 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138599s
	[INFO] 10.244.2.2:46213 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134624s
	[INFO] 10.244.2.2:41423 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104186s
	[INFO] 10.244.0.4:56317 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189039s
	[INFO] 10.244.0.4:49692 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000121271s
	[INFO] 10.244.0.4:55372 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000369332s
	[INFO] 10.244.1.2:44134 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000161425s
	[INFO] 10.244.1.2:45595 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000086429s
	[INFO] 10.244.2.2:52399 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233085s
	
	
	==> describe nodes <==
	Name:               ha-293078
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-293078
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=ha-293078
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_01T18_20_50_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 18:20:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-293078
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 18:26:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 18:23:53 +0000   Mon, 01 Apr 2024 18:20:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 18:23:53 +0000   Mon, 01 Apr 2024 18:20:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 18:23:53 +0000   Mon, 01 Apr 2024 18:20:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 18:23:53 +0000   Mon, 01 Apr 2024 18:21:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.74
	  Hostname:    ha-293078
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3e3b54c701944ac9af1db6484a71e599
	  System UUID:                3e3b54c7-0194-4ac9-af1d-b6484a71e599
	  Boot ID:                    7f2e19c7-2c6d-417a-9d2d-1c4d117eee25
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-7tn8z             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 coredns-76f75df574-8v456             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m
	  kube-system                 coredns-76f75df574-sqxnb             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m
	  kube-system                 etcd-ha-293078                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m12s
	  kube-system                 kindnet-rjfcj                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m
	  kube-system                 kube-apiserver-ha-293078             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-controller-manager-ha-293078    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-proxy-l5q2p                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-scheduler-ha-293078             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-vip-ha-293078                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m58s  kube-proxy       
	  Normal  Starting                 6m13s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m13s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m13s  kubelet          Node ha-293078 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m13s  kubelet          Node ha-293078 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m13s  kubelet          Node ha-293078 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m1s   node-controller  Node ha-293078 event: Registered Node ha-293078 in Controller
	  Normal  NodeReady                5m58s  kubelet          Node ha-293078 status is now: NodeReady
	  Normal  RegisteredNode           4m47s  node-controller  Node ha-293078 event: Registered Node ha-293078 in Controller
	  Normal  RegisteredNode           3m34s  node-controller  Node ha-293078 event: Registered Node ha-293078 in Controller
	
	
	Name:               ha-293078-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-293078-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=ha-293078
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_01T18_22_00_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 18:21:55 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-293078-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 18:24:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 01 Apr 2024 18:23:58 +0000   Mon, 01 Apr 2024 18:25:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 01 Apr 2024 18:23:58 +0000   Mon, 01 Apr 2024 18:25:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 01 Apr 2024 18:23:58 +0000   Mon, 01 Apr 2024 18:25:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 01 Apr 2024 18:23:58 +0000   Mon, 01 Apr 2024 18:25:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    ha-293078-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca6adfb154a0459d8158168bf9a31bb6
	  System UUID:                ca6adfb1-54a0-459d-8158-168bf9a31bb6
	  Boot ID:                    f909c6ea-f445-457c-a1c2-304f35f07b9d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-ntbk4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 etcd-ha-293078-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m4s
	  kube-system                 kindnet-f4djp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m7s
	  kube-system                 kube-apiserver-ha-293078-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-controller-manager-ha-293078-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-proxy-8s2xk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-scheduler-ha-293078-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-vip-ha-293078-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  5m7s (x8 over 5m7s)  kubelet          Node ha-293078-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m7s (x8 over 5m7s)  kubelet          Node ha-293078-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m7s (x7 over 5m7s)  kubelet          Node ha-293078-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m6s                 node-controller  Node ha-293078-m02 event: Registered Node ha-293078-m02 in Controller
	  Normal  RegisteredNode           4m47s                node-controller  Node ha-293078-m02 event: Registered Node ha-293078-m02 in Controller
	  Normal  RegisteredNode           3m34s                node-controller  Node ha-293078-m02 event: Registered Node ha-293078-m02 in Controller
	  Normal  NodeNotReady             101s                 node-controller  Node ha-293078-m02 status is now: NodeNotReady
	
	
	Name:               ha-293078-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-293078-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=ha-293078
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_01T18_23_15_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 18:23:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-293078-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 18:26:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 18:23:40 +0000   Mon, 01 Apr 2024 18:23:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 18:23:40 +0000   Mon, 01 Apr 2024 18:23:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 18:23:40 +0000   Mon, 01 Apr 2024 18:23:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 18:23:40 +0000   Mon, 01 Apr 2024 18:23:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    ha-293078-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c0e3d05a853946ce973ab987568f85f7
	  System UUID:                c0e3d05a-8539-46ce-973a-b987568f85f7
	  Boot ID:                    4961ebe8-8ffa-4300-aa70-cb90bb457245
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-z89qx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 etcd-ha-293078-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m50s
	  kube-system                 kindnet-ccxmv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m53s
	  kube-system                 kube-apiserver-ha-293078-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-controller-manager-ha-293078-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-proxy-xjx5z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-scheduler-ha-293078-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 kube-vip-ha-293078-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m53s (x8 over 3m53s)  kubelet          Node ha-293078-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m53s (x8 over 3m53s)  kubelet          Node ha-293078-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m53s (x7 over 3m53s)  kubelet          Node ha-293078-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-293078-m03 event: Registered Node ha-293078-m03 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-293078-m03 event: Registered Node ha-293078-m03 in Controller
	  Normal  RegisteredNode           3m34s                  node-controller  Node ha-293078-m03 event: Registered Node ha-293078-m03 in Controller
	
	
	Name:               ha-293078-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-293078-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=ha-293078
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_01T18_24_11_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 18:24:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-293078-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 18:26:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 18:24:41 +0000   Mon, 01 Apr 2024 18:24:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 18:24:41 +0000   Mon, 01 Apr 2024 18:24:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 18:24:41 +0000   Mon, 01 Apr 2024 18:24:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 18:24:41 +0000   Mon, 01 Apr 2024 18:24:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    ha-293078-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 071d9c818e6d4564a98e9da52a34ff25
	  System UUID:                071d9c81-8e6d-4564-a98e-9da52a34ff25
	  Boot ID:                    5d2c1342-0a3a-4951-b2be-ba9d3591daef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-qhwr4       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m52s
	  kube-system                 kube-proxy-49cqh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m45s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m52s (x2 over 2m52s)  kubelet          Node ha-293078-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m52s (x2 over 2m52s)  kubelet          Node ha-293078-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m52s (x2 over 2m52s)  kubelet          Node ha-293078-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m51s                  node-controller  Node ha-293078-m04 event: Registered Node ha-293078-m04 in Controller
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-293078-m04 event: Registered Node ha-293078-m04 in Controller
	  Normal  RegisteredNode           2m47s                  node-controller  Node ha-293078-m04 event: Registered Node ha-293078-m04 in Controller
	  Normal  NodeReady                2m43s                  kubelet          Node ha-293078-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr 1 18:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051964] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042596] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.580851] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.458007] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.704034] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.937253] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.062108] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066440] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.214972] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.138486] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.294622] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.757712] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +0.062342] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.163879] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.840426] kauditd_printk_skb: 57 callbacks suppressed
	[  +7.059574] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.076658] kauditd_printk_skb: 40 callbacks suppressed
	[Apr 1 18:21] kauditd_printk_skb: 21 callbacks suppressed
	[Apr 1 18:22] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [8471f59f3de235b71fe57e79412f27884ceb62d668027d7fe3730009d2fbb1fa] <==
	{"level":"warn","ts":"2024-04-01T18:27:01.960934Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:27:01.967819Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:27:01.982988Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:27:01.989082Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:27:02.009209Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:27:02.019454Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:27:02.029684Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:27:02.036975Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:27:02.040972Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:27:02.066339Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:27:02.068742Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:27:02.094595Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:27:02.121726Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:27:02.141509Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:27:02.147328Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:27:02.158758Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:27:02.160794Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:27:02.166031Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:27:02.173209Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:27:02.178457Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:27:02.18169Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:27:02.187362Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:27:02.199668Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:27:02.206895Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:27:02.261055Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:27:02 up 6 min,  0 users,  load average: 0.63, 0.33, 0.15
	Linux ha-293078 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [39a744dcbdcbe85e94e3ddfb1c32297919a24a7d666cb56091bb090ab4f1b169] <==
	I0401 18:26:24.793239       1 main.go:250] Node ha-293078-m04 has CIDR [10.244.3.0/24] 
	I0401 18:26:34.801211       1 main.go:223] Handling node with IPs: map[192.168.39.74:{}]
	I0401 18:26:34.801272       1 main.go:227] handling current node
	I0401 18:26:34.801288       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:26:34.801294       1 main.go:250] Node ha-293078-m02 has CIDR [10.244.1.0/24] 
	I0401 18:26:34.801528       1 main.go:223] Handling node with IPs: map[192.168.39.210:{}]
	I0401 18:26:34.801563       1 main.go:250] Node ha-293078-m03 has CIDR [10.244.2.0/24] 
	I0401 18:26:34.801626       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0401 18:26:34.801632       1 main.go:250] Node ha-293078-m04 has CIDR [10.244.3.0/24] 
	I0401 18:26:44.810626       1 main.go:223] Handling node with IPs: map[192.168.39.74:{}]
	I0401 18:26:44.810671       1 main.go:227] handling current node
	I0401 18:26:44.810682       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:26:44.810687       1 main.go:250] Node ha-293078-m02 has CIDR [10.244.1.0/24] 
	I0401 18:26:44.810788       1 main.go:223] Handling node with IPs: map[192.168.39.210:{}]
	I0401 18:26:44.810820       1 main.go:250] Node ha-293078-m03 has CIDR [10.244.2.0/24] 
	I0401 18:26:44.810883       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0401 18:26:44.810888       1 main.go:250] Node ha-293078-m04 has CIDR [10.244.3.0/24] 
	I0401 18:26:54.826673       1 main.go:223] Handling node with IPs: map[192.168.39.74:{}]
	I0401 18:26:54.826718       1 main.go:227] handling current node
	I0401 18:26:54.826734       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:26:54.826739       1 main.go:250] Node ha-293078-m02 has CIDR [10.244.1.0/24] 
	I0401 18:26:54.826847       1 main.go:223] Handling node with IPs: map[192.168.39.210:{}]
	I0401 18:26:54.826877       1 main.go:250] Node ha-293078-m03 has CIDR [10.244.2.0/24] 
	I0401 18:26:54.826943       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0401 18:26:54.826948       1 main.go:250] Node ha-293078-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [9d9284db03ef8c515d8a7475c032ebbaa4d501954b6e1f5c383cdcdb3ebf6afb] <==
	I0401 18:20:46.159100       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0401 18:20:46.159212       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0401 18:20:46.159704       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0401 18:20:46.171632       1 shared_informer.go:318] Caches are synced for configmaps
	I0401 18:20:46.172467       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0401 18:20:46.172860       1 aggregator.go:165] initial CRD sync complete...
	I0401 18:20:46.172902       1 autoregister_controller.go:141] Starting autoregister controller
	I0401 18:20:46.172908       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0401 18:20:46.172913       1 cache.go:39] Caches are synced for autoregister controller
	I0401 18:20:46.176701       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 18:20:47.054317       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0401 18:20:47.064448       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0401 18:20:47.064486       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 18:20:47.779371       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 18:20:47.827954       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 18:20:47.967905       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0401 18:20:47.978196       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.74]
	I0401 18:20:47.979345       1 controller.go:624] quota admission added evaluator for: endpoints
	I0401 18:20:47.984024       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 18:20:48.073905       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0401 18:20:49.730759       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0401 18:20:49.749055       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0401 18:20:49.769372       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0401 18:21:02.078805       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0401 18:21:02.136162       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e36af39fdf13dd3cf98d2d4a8e7666aea913228d31de663d19c302848663d798] <==
	I0401 18:24:10.655230       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-293078-m04\" does not exist"
	I0401 18:24:10.682506       1 range_allocator.go:380] "Set node PodCIDR" node="ha-293078-m04" podCIDRs=["10.244.3.0/24"]
	I0401 18:24:10.706735       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ddr9q"
	I0401 18:24:10.717842       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rccf9"
	E0401 18:24:10.879138       1 daemon_controller.go:326] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"dca24ad1-79a6-4941-bc47-fa9b316afdf5", ResourceVersion:"903", Generation:1, CreationTimestamp:time.Date(2024, time.April, 1, 18, 20, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000865000), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1
, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVol
umeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0017e49c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00173c090), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVo
lumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:
v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00173c0a8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPers
istentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"registry.k8s.io/kube-proxy:v1.29.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000865100)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"ku
be-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001732ae0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000cb1ff8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", No
deSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000420540), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil
), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001aed680)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001b3e050)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0401 18:24:10.924598       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-v2shv"
	I0401 18:24:10.938742       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-ddr9q"
	I0401 18:24:10.973599       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-9bvjh"
	I0401 18:24:11.022969       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-rccf9"
	I0401 18:24:11.218240       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-293078-m04"
	I0401 18:24:11.218299       1 event.go:376] "Event occurred" object="ha-293078-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-293078-m04 event: Registered Node ha-293078-m04 in Controller"
	I0401 18:24:19.794324       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-293078-m04"
	I0401 18:25:21.244443       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-293078-m04"
	I0401 18:25:21.244995       1 event.go:376] "Event occurred" object="ha-293078-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-293078-m02 status is now: NodeNotReady"
	I0401 18:25:21.267091       1 event.go:376] "Event occurred" object="kube-system/kube-controller-manager-ha-293078-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:25:21.286658       1 event.go:376] "Event occurred" object="kube-system/etcd-ha-293078-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:25:21.302649       1 event.go:376] "Event occurred" object="kube-system/kube-vip-ha-293078-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:25:21.323243       1 event.go:376] "Event occurred" object="kube-system/kube-scheduler-ha-293078-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:25:21.337646       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-ntbk4" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:25:21.397897       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-8s2xk" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:25:21.434309       1 event.go:376] "Event occurred" object="kube-system/kindnet-f4djp" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:25:21.450785       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="112.834631ms"
	I0401 18:25:21.451005       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="132.438µs"
	I0401 18:25:21.460778       1 event.go:376] "Event occurred" object="kube-system/kube-apiserver-ha-293078-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-proxy [8d7ab06dacb1f801ea9714513d3f23a0bad938d609fb9f291d0ec0c4903d8d6a] <==
	I0401 18:21:03.148505       1 server_others.go:72] "Using iptables proxy"
	I0401 18:21:03.171602       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.74"]
	I0401 18:21:03.256037       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0401 18:21:03.256088       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 18:21:03.256101       1 server_others.go:168] "Using iptables Proxier"
	I0401 18:21:03.259948       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 18:21:03.261131       1 server.go:865] "Version info" version="v1.29.3"
	I0401 18:21:03.261181       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 18:21:03.269053       1 config.go:188] "Starting service config controller"
	I0401 18:21:03.269330       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0401 18:21:03.269457       1 config.go:97] "Starting endpoint slice config controller"
	I0401 18:21:03.269465       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0401 18:21:03.272511       1 config.go:315] "Starting node config controller"
	I0401 18:21:03.272548       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0401 18:21:03.369489       1 shared_informer.go:318] Caches are synced for service config
	I0401 18:21:03.369568       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0401 18:21:03.372784       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6bd1ccbceec8c5056f450169f49c17acf202e064825e6c51a55ca89e591e25b5] <==
	E0401 18:20:47.193265       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 18:20:47.212493       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0401 18:20:47.212543       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0401 18:20:47.287659       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 18:20:47.287712       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0401 18:20:47.311190       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 18:20:47.311337       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0401 18:20:47.440870       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 18:20:47.440930       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0401 18:20:47.476851       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 18:20:47.476927       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0401 18:20:47.525753       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 18:20:47.525793       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0401 18:20:49.041641       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0401 18:24:10.774118       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rccf9\": pod kindnet-rccf9 is already assigned to node \"ha-293078-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-rccf9" node="ha-293078-m04"
	E0401 18:24:10.774856       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rccf9\": pod kindnet-rccf9 is already assigned to node \"ha-293078-m04\"" pod="kube-system/kindnet-rccf9"
	I0401 18:24:10.778681       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rccf9" node="ha-293078-m04"
	E0401 18:24:10.807167       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-v2shv\": pod kindnet-v2shv is already assigned to node \"ha-293078-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-v2shv" node="ha-293078-m04"
	E0401 18:24:10.807305       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 7740b1db-8105-47c4-a822-717e4c12c0cd(kube-system/kindnet-v2shv) wasn't assumed so cannot be forgotten"
	E0401 18:24:10.807446       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-v2shv\": pod kindnet-v2shv is already assigned to node \"ha-293078-m04\"" pod="kube-system/kindnet-v2shv"
	I0401 18:24:10.807501       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-v2shv" node="ha-293078-m04"
	E0401 18:24:10.819254       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-9bvjh\": pod kube-proxy-9bvjh is already assigned to node \"ha-293078-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-9bvjh" node="ha-293078-m04"
	E0401 18:24:10.819794       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod a7691fe3-ae08-4a93-abc1-86bab696bf9f(kube-system/kube-proxy-9bvjh) wasn't assumed so cannot be forgotten"
	E0401 18:24:10.819870       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-9bvjh\": pod kube-proxy-9bvjh is already assigned to node \"ha-293078-m04\"" pod="kube-system/kube-proxy-9bvjh"
	I0401 18:24:10.819909       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-9bvjh" node="ha-293078-m04"
	
	
	==> kubelet <==
	Apr 01 18:22:49 ha-293078 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 18:22:49 ha-293078 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 18:22:49 ha-293078 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 18:23:33 ha-293078 kubelet[1369]: I0401 18:23:33.112758    1369 topology_manager.go:215] "Topology Admit Handler" podUID="0cf87f47-0b2d-42b9-9aa6-e4e3736ca728" podNamespace="default" podName="busybox-7fdf7869d9-7tn8z"
	Apr 01 18:23:33 ha-293078 kubelet[1369]: I0401 18:23:33.182066    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s4rf\" (UniqueName: \"kubernetes.io/projected/0cf87f47-0b2d-42b9-9aa6-e4e3736ca728-kube-api-access-8s4rf\") pod \"busybox-7fdf7869d9-7tn8z\" (UID: \"0cf87f47-0b2d-42b9-9aa6-e4e3736ca728\") " pod="default/busybox-7fdf7869d9-7tn8z"
	Apr 01 18:23:49 ha-293078 kubelet[1369]: E0401 18:23:49.983120    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 18:23:49 ha-293078 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 18:23:49 ha-293078 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 18:23:49 ha-293078 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 18:23:49 ha-293078 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 18:24:49 ha-293078 kubelet[1369]: E0401 18:24:49.981916    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 18:24:49 ha-293078 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 18:24:49 ha-293078 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 18:24:49 ha-293078 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 18:24:49 ha-293078 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 18:25:49 ha-293078 kubelet[1369]: E0401 18:25:49.983465    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 18:25:49 ha-293078 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 18:25:49 ha-293078 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 18:25:49 ha-293078 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 18:25:49 ha-293078 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 18:26:49 ha-293078 kubelet[1369]: E0401 18:26:49.981767    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 18:26:49 ha-293078 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 18:26:49 ha-293078 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 18:26:49 ha-293078 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 18:26:49 ha-293078 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-293078 -n ha-293078
helpers_test.go:261: (dbg) Run:  kubectl --context ha-293078 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.39s)
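Note: the post-mortem helper above (helpers_test.go:261) asks kubectl for every pod whose phase is not Running. For reference, a minimal client-go sketch of the same query — this is illustrative only, not part of the test harness, and the default ~/.kube/config path is an assumption:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumption: kubeconfig lives at the default ~/.kube/config path.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same field selector the post-mortem helper passes to kubectl:
	// only pods whose phase is not Running.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}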

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (58.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr: exit status 3 (3.206511318s)

                                                
                                                
-- stdout --
	ha-293078
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-293078-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-293078-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-293078-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 18:27:06.912094   31639 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:27:06.912372   31639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:27:06.912383   31639 out.go:304] Setting ErrFile to fd 2...
	I0401 18:27:06.912387   31639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:27:06.912614   31639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:27:06.912823   31639 out.go:298] Setting JSON to false
	I0401 18:27:06.912849   31639 mustload.go:65] Loading cluster: ha-293078
	I0401 18:27:06.912970   31639 notify.go:220] Checking for updates...
	I0401 18:27:06.913314   31639 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:27:06.913332   31639 status.go:255] checking status of ha-293078 ...
	I0401 18:27:06.913733   31639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:06.913786   31639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:06.929590   31639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43043
	I0401 18:27:06.930008   31639 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:06.930586   31639 main.go:141] libmachine: Using API Version  1
	I0401 18:27:06.930613   31639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:06.931067   31639 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:06.931309   31639 main.go:141] libmachine: (ha-293078) Calling .GetState
	I0401 18:27:06.933339   31639 status.go:330] ha-293078 host status = "Running" (err=<nil>)
	I0401 18:27:06.933359   31639 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:27:06.933727   31639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:06.933779   31639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:06.949304   31639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38649
	I0401 18:27:06.949814   31639 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:06.950278   31639 main.go:141] libmachine: Using API Version  1
	I0401 18:27:06.950297   31639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:06.950644   31639 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:06.950804   31639 main.go:141] libmachine: (ha-293078) Calling .GetIP
	I0401 18:27:06.953411   31639 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:06.953793   31639 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:27:06.953812   31639 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:06.953965   31639 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:27:06.954225   31639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:06.954279   31639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:06.969137   31639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37755
	I0401 18:27:06.969554   31639 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:06.970072   31639 main.go:141] libmachine: Using API Version  1
	I0401 18:27:06.970100   31639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:06.970436   31639 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:06.970633   31639 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:27:06.970871   31639 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:06.970900   31639 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:27:06.973633   31639 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:06.974054   31639 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:27:06.974080   31639 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:06.974219   31639 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:27:06.974376   31639 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:27:06.974504   31639 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:27:06.974677   31639 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:27:07.062693   31639 ssh_runner.go:195] Run: systemctl --version
	I0401 18:27:07.069721   31639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:27:07.089570   31639 kubeconfig.go:125] found "ha-293078" server: "https://192.168.39.254:8443"
	I0401 18:27:07.089608   31639 api_server.go:166] Checking apiserver status ...
	I0401 18:27:07.089675   31639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:27:07.107173   31639 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup
	W0401 18:27:07.120472   31639 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 18:27:07.120526   31639 ssh_runner.go:195] Run: ls
	I0401 18:27:07.126275   31639 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0401 18:27:07.132575   31639 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0401 18:27:07.132605   31639 status.go:422] ha-293078 apiserver status = Running (err=<nil>)
	I0401 18:27:07.132614   31639 status.go:257] ha-293078 status: &{Name:ha-293078 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:27:07.132631   31639 status.go:255] checking status of ha-293078-m02 ...
	I0401 18:27:07.132923   31639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:07.132958   31639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:07.147954   31639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40581
	I0401 18:27:07.148443   31639 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:07.148877   31639 main.go:141] libmachine: Using API Version  1
	I0401 18:27:07.148901   31639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:07.149152   31639 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:07.149363   31639 main.go:141] libmachine: (ha-293078-m02) Calling .GetState
	I0401 18:27:07.151026   31639 status.go:330] ha-293078-m02 host status = "Running" (err=<nil>)
	I0401 18:27:07.151043   31639 host.go:66] Checking if "ha-293078-m02" exists ...
	I0401 18:27:07.151456   31639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:07.151522   31639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:07.166314   31639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45499
	I0401 18:27:07.166744   31639 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:07.167198   31639 main.go:141] libmachine: Using API Version  1
	I0401 18:27:07.167225   31639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:07.167546   31639 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:07.167757   31639 main.go:141] libmachine: (ha-293078-m02) Calling .GetIP
	I0401 18:27:07.170820   31639 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:27:07.171233   31639 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:27:07.171259   31639 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:27:07.171452   31639 host.go:66] Checking if "ha-293078-m02" exists ...
	I0401 18:27:07.171740   31639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:07.171777   31639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:07.187754   31639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35093
	I0401 18:27:07.188401   31639 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:07.188873   31639 main.go:141] libmachine: Using API Version  1
	I0401 18:27:07.188893   31639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:07.189242   31639 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:07.189437   31639 main.go:141] libmachine: (ha-293078-m02) Calling .DriverName
	I0401 18:27:07.189697   31639 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:07.189716   31639 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:27:07.192454   31639 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:27:07.192812   31639 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:27:07.192845   31639 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:27:07.192976   31639 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:27:07.193165   31639 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:27:07.193308   31639 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:27:07.193456   31639 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/id_rsa Username:docker}
	W0401 18:27:09.693908   31639 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.161:22: connect: no route to host
	W0401 18:27:09.694022   31639 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	E0401 18:27:09.694038   31639 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	I0401 18:27:09.694045   31639 status.go:257] ha-293078-m02 status: &{Name:ha-293078-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0401 18:27:09.694061   31639 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	I0401 18:27:09.694068   31639 status.go:255] checking status of ha-293078-m03 ...
	I0401 18:27:09.694366   31639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:09.694411   31639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:09.709505   31639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33993
	I0401 18:27:09.709947   31639 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:09.710470   31639 main.go:141] libmachine: Using API Version  1
	I0401 18:27:09.710493   31639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:09.710865   31639 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:09.711049   31639 main.go:141] libmachine: (ha-293078-m03) Calling .GetState
	I0401 18:27:09.712713   31639 status.go:330] ha-293078-m03 host status = "Running" (err=<nil>)
	I0401 18:27:09.712736   31639 host.go:66] Checking if "ha-293078-m03" exists ...
	I0401 18:27:09.713139   31639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:09.713189   31639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:09.727974   31639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34653
	I0401 18:27:09.728492   31639 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:09.728991   31639 main.go:141] libmachine: Using API Version  1
	I0401 18:27:09.729014   31639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:09.729328   31639 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:09.729519   31639 main.go:141] libmachine: (ha-293078-m03) Calling .GetIP
	I0401 18:27:09.732014   31639 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:09.732402   31639 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:27:09.732429   31639 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:09.732558   31639 host.go:66] Checking if "ha-293078-m03" exists ...
	I0401 18:27:09.732844   31639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:09.732877   31639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:09.747849   31639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37667
	I0401 18:27:09.748308   31639 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:09.748763   31639 main.go:141] libmachine: Using API Version  1
	I0401 18:27:09.748783   31639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:09.749083   31639 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:09.749245   31639 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:27:09.749466   31639 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:09.749489   31639 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:27:09.752031   31639 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:09.752485   31639 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:27:09.752511   31639 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:09.752623   31639 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:27:09.752750   31639 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:27:09.752866   31639 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:27:09.752993   31639 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa Username:docker}
	I0401 18:27:09.843001   31639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:27:09.859000   31639 kubeconfig.go:125] found "ha-293078" server: "https://192.168.39.254:8443"
	I0401 18:27:09.859032   31639 api_server.go:166] Checking apiserver status ...
	I0401 18:27:09.859071   31639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:27:09.873263   31639 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup
	W0401 18:27:09.884879   31639 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 18:27:09.884934   31639 ssh_runner.go:195] Run: ls
	I0401 18:27:09.891188   31639 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0401 18:27:09.898160   31639 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0401 18:27:09.898183   31639 status.go:422] ha-293078-m03 apiserver status = Running (err=<nil>)
	I0401 18:27:09.898193   31639 status.go:257] ha-293078-m03 status: &{Name:ha-293078-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:27:09.898213   31639 status.go:255] checking status of ha-293078-m04 ...
	I0401 18:27:09.898633   31639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:09.898676   31639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:09.913598   31639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40379
	I0401 18:27:09.913997   31639 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:09.914488   31639 main.go:141] libmachine: Using API Version  1
	I0401 18:27:09.914509   31639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:09.914798   31639 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:09.915025   31639 main.go:141] libmachine: (ha-293078-m04) Calling .GetState
	I0401 18:27:09.916731   31639 status.go:330] ha-293078-m04 host status = "Running" (err=<nil>)
	I0401 18:27:09.916755   31639 host.go:66] Checking if "ha-293078-m04" exists ...
	I0401 18:27:09.917065   31639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:09.917103   31639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:09.931594   31639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41961
	I0401 18:27:09.932004   31639 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:09.932450   31639 main.go:141] libmachine: Using API Version  1
	I0401 18:27:09.932468   31639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:09.932783   31639 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:09.932969   31639 main.go:141] libmachine: (ha-293078-m04) Calling .GetIP
	I0401 18:27:09.936087   31639 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:09.936510   31639 main.go:141] libmachine: (ha-293078-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:ec:c5", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:23:56 +0000 UTC Type:0 Mac:52:54:00:b5:ec:c5 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-293078-m04 Clientid:01:52:54:00:b5:ec:c5}
	I0401 18:27:09.936541   31639 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:09.936666   31639 host.go:66] Checking if "ha-293078-m04" exists ...
	I0401 18:27:09.937014   31639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:09.937053   31639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:09.952528   31639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33625
	I0401 18:27:09.953041   31639 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:09.953491   31639 main.go:141] libmachine: Using API Version  1
	I0401 18:27:09.953518   31639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:09.953874   31639 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:09.954041   31639 main.go:141] libmachine: (ha-293078-m04) Calling .DriverName
	I0401 18:27:09.954253   31639 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:09.954271   31639 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHHostname
	I0401 18:27:09.956877   31639 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:09.957343   31639 main.go:141] libmachine: (ha-293078-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:ec:c5", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:23:56 +0000 UTC Type:0 Mac:52:54:00:b5:ec:c5 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-293078-m04 Clientid:01:52:54:00:b5:ec:c5}
	I0401 18:27:09.957499   31639 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHPort
	I0401 18:27:09.957537   31639 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:09.957741   31639 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHKeyPath
	I0401 18:27:09.957896   31639 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHUsername
	I0401 18:27:09.958073   31639 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m04/id_rsa Username:docker}
	I0401 18:27:10.046439   31639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:27:10.063572   31639 status.go:257] ha-293078-m04 status: &{Name:ha-293078-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
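Note: the repeated "dial tcp 192.168.39.161:22: connect: no route to host" errors in the stderr above are why ha-293078-m02 is reported as host: Error / kubelet: Nonexistent and why the status command exits 3: the checker cannot open an SSH session to the node while it is still coming back up. A minimal sketch of that kind of reachability probe — simplified, and not minikube's actual sshutil code:

package main

import (
	"fmt"
	"net"
	"time"
)

// sshReachable reports whether a TCP connection to the node's SSH port can be
// opened within the timeout; while m02 is still restarting this returns the
// same "connect: no route to host" error seen in the stderr above.
func sshReachable(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	// Assumption: m02's address is taken from the DHCP lease shown in the logs.
	if err := sshReachable("192.168.39.161:22", 3*time.Second); err != nil {
		fmt.Println("node not reachable yet:", err)
		return
	}
	fmt.Println("SSH port is reachable")
}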
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr: exit status 3 (5.536872038s)

                                                
                                                
-- stdout --
	ha-293078
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-293078-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-293078-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-293078-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 18:27:10.732847   31736 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:27:10.732978   31736 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:27:10.732989   31736 out.go:304] Setting ErrFile to fd 2...
	I0401 18:27:10.732995   31736 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:27:10.733197   31736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:27:10.733388   31736 out.go:298] Setting JSON to false
	I0401 18:27:10.733417   31736 mustload.go:65] Loading cluster: ha-293078
	I0401 18:27:10.733524   31736 notify.go:220] Checking for updates...
	I0401 18:27:10.733911   31736 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:27:10.733928   31736 status.go:255] checking status of ha-293078 ...
	I0401 18:27:10.734315   31736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:10.734386   31736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:10.755358   31736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41837
	I0401 18:27:10.755821   31736 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:10.756393   31736 main.go:141] libmachine: Using API Version  1
	I0401 18:27:10.756420   31736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:10.756809   31736 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:10.757001   31736 main.go:141] libmachine: (ha-293078) Calling .GetState
	I0401 18:27:10.758777   31736 status.go:330] ha-293078 host status = "Running" (err=<nil>)
	I0401 18:27:10.758794   31736 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:27:10.759160   31736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:10.759203   31736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:10.775578   31736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38299
	I0401 18:27:10.776026   31736 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:10.776531   31736 main.go:141] libmachine: Using API Version  1
	I0401 18:27:10.776545   31736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:10.776845   31736 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:10.777046   31736 main.go:141] libmachine: (ha-293078) Calling .GetIP
	I0401 18:27:10.779740   31736 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:10.780231   31736 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:27:10.780272   31736 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:10.780379   31736 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:27:10.780667   31736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:10.780701   31736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:10.795876   31736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38687
	I0401 18:27:10.796451   31736 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:10.796936   31736 main.go:141] libmachine: Using API Version  1
	I0401 18:27:10.796950   31736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:10.797274   31736 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:10.797498   31736 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:27:10.797705   31736 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:10.797747   31736 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:27:10.800496   31736 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:10.800941   31736 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:27:10.800977   31736 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:10.801157   31736 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:27:10.801337   31736 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:27:10.801504   31736 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:27:10.801662   31736 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:27:10.890880   31736 ssh_runner.go:195] Run: systemctl --version
	I0401 18:27:10.897374   31736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:27:10.914925   31736 kubeconfig.go:125] found "ha-293078" server: "https://192.168.39.254:8443"
	I0401 18:27:10.914965   31736 api_server.go:166] Checking apiserver status ...
	I0401 18:27:10.915005   31736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:27:10.930434   31736 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup
	W0401 18:27:10.940695   31736 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 18:27:10.940744   31736 ssh_runner.go:195] Run: ls
	I0401 18:27:10.945719   31736 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0401 18:27:10.952587   31736 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0401 18:27:10.952614   31736 status.go:422] ha-293078 apiserver status = Running (err=<nil>)
	I0401 18:27:10.952625   31736 status.go:257] ha-293078 status: &{Name:ha-293078 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:27:10.952654   31736 status.go:255] checking status of ha-293078-m02 ...
	I0401 18:27:10.952998   31736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:10.953038   31736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:10.968912   31736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42957
	I0401 18:27:10.969380   31736 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:10.969903   31736 main.go:141] libmachine: Using API Version  1
	I0401 18:27:10.969931   31736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:10.970340   31736 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:10.970550   31736 main.go:141] libmachine: (ha-293078-m02) Calling .GetState
	I0401 18:27:10.972157   31736 status.go:330] ha-293078-m02 host status = "Running" (err=<nil>)
	I0401 18:27:10.972175   31736 host.go:66] Checking if "ha-293078-m02" exists ...
	I0401 18:27:10.972555   31736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:10.972599   31736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:10.986624   31736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34833
	I0401 18:27:10.987065   31736 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:10.987524   31736 main.go:141] libmachine: Using API Version  1
	I0401 18:27:10.987550   31736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:10.987825   31736 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:10.987989   31736 main.go:141] libmachine: (ha-293078-m02) Calling .GetIP
	I0401 18:27:10.990576   31736 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:27:10.991049   31736 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:27:10.991077   31736 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:27:10.991233   31736 host.go:66] Checking if "ha-293078-m02" exists ...
	I0401 18:27:10.991515   31736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:10.991545   31736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:11.005505   31736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43721
	I0401 18:27:11.005898   31736 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:11.006360   31736 main.go:141] libmachine: Using API Version  1
	I0401 18:27:11.006379   31736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:11.006637   31736 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:11.006795   31736 main.go:141] libmachine: (ha-293078-m02) Calling .DriverName
	I0401 18:27:11.006968   31736 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:11.006988   31736 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:27:11.009620   31736 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:27:11.010075   31736 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:27:11.010100   31736 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:27:11.010218   31736 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:27:11.010368   31736 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:27:11.010545   31736 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:27:11.010682   31736 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/id_rsa Username:docker}
	W0401 18:27:12.769911   31736 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.161:22: connect: no route to host
	I0401 18:27:12.770011   31736 retry.go:31] will retry after 151.888283ms: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	I0401 18:27:12.922358   31736 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:27:12.925277   31736 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:27:12.925680   31736 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:27:12.925720   31736 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:27:12.925838   31736 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:27:12.926018   31736 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:27:12.926195   31736 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:27:12.926332   31736 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/id_rsa Username:docker}
	W0401 18:27:15.841942   31736 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.161:22: connect: no route to host
	W0401 18:27:15.842015   31736 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	E0401 18:27:15.842046   31736 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	I0401 18:27:15.842054   31736 status.go:257] ha-293078-m02 status: &{Name:ha-293078-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0401 18:27:15.842071   31736 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	I0401 18:27:15.842078   31736 status.go:255] checking status of ha-293078-m03 ...
	I0401 18:27:15.842456   31736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:15.842504   31736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:15.856910   31736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39427
	I0401 18:27:15.857401   31736 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:15.857857   31736 main.go:141] libmachine: Using API Version  1
	I0401 18:27:15.857880   31736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:15.858211   31736 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:15.858374   31736 main.go:141] libmachine: (ha-293078-m03) Calling .GetState
	I0401 18:27:15.859861   31736 status.go:330] ha-293078-m03 host status = "Running" (err=<nil>)
	I0401 18:27:15.859873   31736 host.go:66] Checking if "ha-293078-m03" exists ...
	I0401 18:27:15.860142   31736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:15.860173   31736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:15.876183   31736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40841
	I0401 18:27:15.876568   31736 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:15.877001   31736 main.go:141] libmachine: Using API Version  1
	I0401 18:27:15.877020   31736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:15.877329   31736 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:15.877552   31736 main.go:141] libmachine: (ha-293078-m03) Calling .GetIP
	I0401 18:27:15.880257   31736 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:15.880663   31736 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:27:15.880690   31736 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:15.880848   31736 host.go:66] Checking if "ha-293078-m03" exists ...
	I0401 18:27:15.881122   31736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:15.881154   31736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:15.896211   31736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43639
	I0401 18:27:15.896600   31736 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:15.897000   31736 main.go:141] libmachine: Using API Version  1
	I0401 18:27:15.897022   31736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:15.897304   31736 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:15.897442   31736 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:27:15.897612   31736 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:15.897634   31736 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:27:15.900093   31736 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:15.900493   31736 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:27:15.900517   31736 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:15.900681   31736 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:27:15.900861   31736 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:27:15.901008   31736 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:27:15.901134   31736 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa Username:docker}
	I0401 18:27:15.996337   31736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:27:16.018174   31736 kubeconfig.go:125] found "ha-293078" server: "https://192.168.39.254:8443"
	I0401 18:27:16.018217   31736 api_server.go:166] Checking apiserver status ...
	I0401 18:27:16.018250   31736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:27:16.034495   31736 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup
	W0401 18:27:16.047421   31736 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 18:27:16.047470   31736 ssh_runner.go:195] Run: ls
	I0401 18:27:16.053230   31736 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0401 18:27:16.058135   31736 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0401 18:27:16.058162   31736 status.go:422] ha-293078-m03 apiserver status = Running (err=<nil>)
	I0401 18:27:16.058173   31736 status.go:257] ha-293078-m03 status: &{Name:ha-293078-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:27:16.058192   31736 status.go:255] checking status of ha-293078-m04 ...
	I0401 18:27:16.058481   31736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:16.058517   31736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:16.073486   31736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37429
	I0401 18:27:16.073938   31736 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:16.074443   31736 main.go:141] libmachine: Using API Version  1
	I0401 18:27:16.074463   31736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:16.074789   31736 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:16.074932   31736 main.go:141] libmachine: (ha-293078-m04) Calling .GetState
	I0401 18:27:16.076457   31736 status.go:330] ha-293078-m04 host status = "Running" (err=<nil>)
	I0401 18:27:16.076471   31736 host.go:66] Checking if "ha-293078-m04" exists ...
	I0401 18:27:16.076743   31736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:16.076772   31736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:16.090997   31736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33719
	I0401 18:27:16.091438   31736 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:16.091951   31736 main.go:141] libmachine: Using API Version  1
	I0401 18:27:16.091971   31736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:16.092260   31736 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:16.092453   31736 main.go:141] libmachine: (ha-293078-m04) Calling .GetIP
	I0401 18:27:16.095036   31736 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:16.095491   31736 main.go:141] libmachine: (ha-293078-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:ec:c5", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:23:56 +0000 UTC Type:0 Mac:52:54:00:b5:ec:c5 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-293078-m04 Clientid:01:52:54:00:b5:ec:c5}
	I0401 18:27:16.095518   31736 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:16.095619   31736 host.go:66] Checking if "ha-293078-m04" exists ...
	I0401 18:27:16.095976   31736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:16.096038   31736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:16.110226   31736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I0401 18:27:16.110600   31736 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:16.111032   31736 main.go:141] libmachine: Using API Version  1
	I0401 18:27:16.111053   31736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:16.111537   31736 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:16.111728   31736 main.go:141] libmachine: (ha-293078-m04) Calling .DriverName
	I0401 18:27:16.111908   31736 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:16.111929   31736 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHHostname
	I0401 18:27:16.114414   31736 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:16.114769   31736 main.go:141] libmachine: (ha-293078-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:ec:c5", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:23:56 +0000 UTC Type:0 Mac:52:54:00:b5:ec:c5 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-293078-m04 Clientid:01:52:54:00:b5:ec:c5}
	I0401 18:27:16.114792   31736 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:16.114941   31736 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHPort
	I0401 18:27:16.115112   31736 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHKeyPath
	I0401 18:27:16.115236   31736 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHUsername
	I0401 18:27:16.115354   31736 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m04/id_rsa Username:docker}
	I0401 18:27:16.193979   31736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:27:16.210308   31736 status.go:257] ha-293078-m04 status: &{Name:ha-293078-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
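Note: on the nodes it can reach, the status check above finds no freezer cgroup entry (the egrep exits 1) and then probes the load-balanced apiserver endpoint, reporting apiserver: Running once https://192.168.39.254:8443/healthz returns 200. A minimal sketch of such a healthz probe — illustrative only; skipping TLS verification is a shortcut for the sketch, since it does not load the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch-only shortcut: no cluster CA is loaded here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The logs above show this returning "200: ok" on the reachable nodes.
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}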
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr: exit status 3 (4.808236914s)

                                                
                                                
-- stdout --
	ha-293078
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-293078-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-293078-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-293078-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 18:27:17.721741   31831 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:27:17.721859   31831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:27:17.721872   31831 out.go:304] Setting ErrFile to fd 2...
	I0401 18:27:17.721878   31831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:27:17.722199   31831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:27:17.722410   31831 out.go:298] Setting JSON to false
	I0401 18:27:17.722439   31831 mustload.go:65] Loading cluster: ha-293078
	I0401 18:27:17.722492   31831 notify.go:220] Checking for updates...
	I0401 18:27:17.722891   31831 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:27:17.722911   31831 status.go:255] checking status of ha-293078 ...
	I0401 18:27:17.723429   31831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:17.723487   31831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:17.742007   31831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33863
	I0401 18:27:17.742539   31831 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:17.743039   31831 main.go:141] libmachine: Using API Version  1
	I0401 18:27:17.743061   31831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:17.743318   31831 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:17.743534   31831 main.go:141] libmachine: (ha-293078) Calling .GetState
	I0401 18:27:17.745244   31831 status.go:330] ha-293078 host status = "Running" (err=<nil>)
	I0401 18:27:17.745258   31831 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:27:17.745513   31831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:17.745547   31831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:17.760909   31831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33251
	I0401 18:27:17.761342   31831 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:17.761812   31831 main.go:141] libmachine: Using API Version  1
	I0401 18:27:17.761840   31831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:17.762179   31831 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:17.762391   31831 main.go:141] libmachine: (ha-293078) Calling .GetIP
	I0401 18:27:17.765442   31831 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:17.765970   31831 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:27:17.765995   31831 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:17.766134   31831 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:27:17.766443   31831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:17.766490   31831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:17.782024   31831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44425
	I0401 18:27:17.782401   31831 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:17.782896   31831 main.go:141] libmachine: Using API Version  1
	I0401 18:27:17.782920   31831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:17.783224   31831 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:17.783430   31831 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:27:17.783601   31831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:17.783623   31831 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:27:17.786221   31831 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:17.786673   31831 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:27:17.786712   31831 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:17.786987   31831 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:27:17.787175   31831 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:27:17.787370   31831 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:27:17.787526   31831 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:27:17.870222   31831 ssh_runner.go:195] Run: systemctl --version
	I0401 18:27:17.877085   31831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:27:17.893921   31831 kubeconfig.go:125] found "ha-293078" server: "https://192.168.39.254:8443"
	I0401 18:27:17.893951   31831 api_server.go:166] Checking apiserver status ...
	I0401 18:27:17.893983   31831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:27:17.909303   31831 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup
	W0401 18:27:17.921931   31831 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 18:27:17.921987   31831 ssh_runner.go:195] Run: ls
	I0401 18:27:17.927408   31831 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0401 18:27:17.932069   31831 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0401 18:27:17.932090   31831 status.go:422] ha-293078 apiserver status = Running (err=<nil>)
	I0401 18:27:17.932100   31831 status.go:257] ha-293078 status: &{Name:ha-293078 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:27:17.932128   31831 status.go:255] checking status of ha-293078-m02 ...
	I0401 18:27:17.932421   31831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:17.932458   31831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:17.947428   31831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43001
	I0401 18:27:17.947852   31831 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:17.948312   31831 main.go:141] libmachine: Using API Version  1
	I0401 18:27:17.948335   31831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:17.948701   31831 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:17.948911   31831 main.go:141] libmachine: (ha-293078-m02) Calling .GetState
	I0401 18:27:17.950717   31831 status.go:330] ha-293078-m02 host status = "Running" (err=<nil>)
	I0401 18:27:17.950735   31831 host.go:66] Checking if "ha-293078-m02" exists ...
	I0401 18:27:17.950999   31831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:17.951042   31831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:17.965523   31831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40321
	I0401 18:27:17.965932   31831 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:17.966371   31831 main.go:141] libmachine: Using API Version  1
	I0401 18:27:17.966403   31831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:17.966695   31831 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:17.966886   31831 main.go:141] libmachine: (ha-293078-m02) Calling .GetIP
	I0401 18:27:17.969603   31831 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:27:17.970036   31831 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:27:17.970052   31831 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:27:17.970217   31831 host.go:66] Checking if "ha-293078-m02" exists ...
	I0401 18:27:17.970594   31831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:17.970628   31831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:17.984881   31831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42623
	I0401 18:27:17.985282   31831 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:17.985725   31831 main.go:141] libmachine: Using API Version  1
	I0401 18:27:17.985751   31831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:17.986093   31831 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:17.986269   31831 main.go:141] libmachine: (ha-293078-m02) Calling .DriverName
	I0401 18:27:17.986452   31831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:17.986469   31831 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:27:17.989229   31831 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:27:17.989605   31831 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:27:17.989633   31831 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:27:17.989775   31831 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:27:17.989927   31831 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:27:17.990129   31831 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:27:17.990264   31831 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/id_rsa Username:docker}
	W0401 18:27:18.909928   31831 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.161:22: connect: no route to host
	I0401 18:27:18.909981   31831 retry.go:31] will retry after 144.765175ms: dial tcp 192.168.39.161:22: connect: no route to host
	W0401 18:27:22.109943   31831 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.161:22: connect: no route to host
	W0401 18:27:22.110034   31831 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	E0401 18:27:22.110057   31831 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	I0401 18:27:22.110071   31831 status.go:257] ha-293078-m02 status: &{Name:ha-293078-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0401 18:27:22.110101   31831 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	I0401 18:27:22.110116   31831 status.go:255] checking status of ha-293078-m03 ...
	I0401 18:27:22.110514   31831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:22.110569   31831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:22.125942   31831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38467
	I0401 18:27:22.126343   31831 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:22.126862   31831 main.go:141] libmachine: Using API Version  1
	I0401 18:27:22.126893   31831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:22.127264   31831 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:22.127447   31831 main.go:141] libmachine: (ha-293078-m03) Calling .GetState
	I0401 18:27:22.129135   31831 status.go:330] ha-293078-m03 host status = "Running" (err=<nil>)
	I0401 18:27:22.129151   31831 host.go:66] Checking if "ha-293078-m03" exists ...
	I0401 18:27:22.129500   31831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:22.129537   31831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:22.143935   31831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37991
	I0401 18:27:22.144326   31831 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:22.145004   31831 main.go:141] libmachine: Using API Version  1
	I0401 18:27:22.145029   31831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:22.145408   31831 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:22.145626   31831 main.go:141] libmachine: (ha-293078-m03) Calling .GetIP
	I0401 18:27:22.148407   31831 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:22.148809   31831 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:27:22.148834   31831 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:22.148992   31831 host.go:66] Checking if "ha-293078-m03" exists ...
	I0401 18:27:22.149298   31831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:22.149338   31831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:22.164126   31831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32871
	I0401 18:27:22.164498   31831 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:22.164906   31831 main.go:141] libmachine: Using API Version  1
	I0401 18:27:22.164930   31831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:22.165238   31831 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:22.165440   31831 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:27:22.165613   31831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:22.165636   31831 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:27:22.168847   31831 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:22.169247   31831 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:27:22.169279   31831 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:22.169384   31831 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:27:22.169541   31831 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:27:22.169726   31831 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:27:22.169869   31831 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa Username:docker}
	I0401 18:27:22.258080   31831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:27:22.277208   31831 kubeconfig.go:125] found "ha-293078" server: "https://192.168.39.254:8443"
	I0401 18:27:22.277238   31831 api_server.go:166] Checking apiserver status ...
	I0401 18:27:22.277276   31831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:27:22.293992   31831 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup
	W0401 18:27:22.305117   31831 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 18:27:22.305177   31831 ssh_runner.go:195] Run: ls
	I0401 18:27:22.310110   31831 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0401 18:27:22.316563   31831 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0401 18:27:22.316601   31831 status.go:422] ha-293078-m03 apiserver status = Running (err=<nil>)
	I0401 18:27:22.316612   31831 status.go:257] ha-293078-m03 status: &{Name:ha-293078-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:27:22.316636   31831 status.go:255] checking status of ha-293078-m04 ...
	I0401 18:27:22.316917   31831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:22.316959   31831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:22.331679   31831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36879
	I0401 18:27:22.332197   31831 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:22.332717   31831 main.go:141] libmachine: Using API Version  1
	I0401 18:27:22.332742   31831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:22.333049   31831 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:22.333244   31831 main.go:141] libmachine: (ha-293078-m04) Calling .GetState
	I0401 18:27:22.334625   31831 status.go:330] ha-293078-m04 host status = "Running" (err=<nil>)
	I0401 18:27:22.334639   31831 host.go:66] Checking if "ha-293078-m04" exists ...
	I0401 18:27:22.334919   31831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:22.334961   31831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:22.349427   31831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35951
	I0401 18:27:22.349853   31831 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:22.350309   31831 main.go:141] libmachine: Using API Version  1
	I0401 18:27:22.350332   31831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:22.350640   31831 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:22.350832   31831 main.go:141] libmachine: (ha-293078-m04) Calling .GetIP
	I0401 18:27:22.353473   31831 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:22.353938   31831 main.go:141] libmachine: (ha-293078-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:ec:c5", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:23:56 +0000 UTC Type:0 Mac:52:54:00:b5:ec:c5 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-293078-m04 Clientid:01:52:54:00:b5:ec:c5}
	I0401 18:27:22.353978   31831 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:22.354127   31831 host.go:66] Checking if "ha-293078-m04" exists ...
	I0401 18:27:22.354527   31831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:22.354570   31831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:22.368798   31831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I0401 18:27:22.369127   31831 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:22.369564   31831 main.go:141] libmachine: Using API Version  1
	I0401 18:27:22.369584   31831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:22.369967   31831 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:22.370139   31831 main.go:141] libmachine: (ha-293078-m04) Calling .DriverName
	I0401 18:27:22.370338   31831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:22.370364   31831 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHHostname
	I0401 18:27:22.373116   31831 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:22.373503   31831 main.go:141] libmachine: (ha-293078-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:ec:c5", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:23:56 +0000 UTC Type:0 Mac:52:54:00:b5:ec:c5 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-293078-m04 Clientid:01:52:54:00:b5:ec:c5}
	I0401 18:27:22.373532   31831 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:22.373670   31831 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHPort
	I0401 18:27:22.373830   31831 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHKeyPath
	I0401 18:27:22.373968   31831 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHUsername
	I0401 18:27:22.374105   31831 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m04/id_rsa Username:docker}
	I0401 18:27:22.456134   31831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:27:22.472039   31831 status.go:257] ha-293078-m04 status: &{Name:ha-293078-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
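The repeated failures in the block above reduce to a single condition: TCP port 22 on ha-293078-m02 (192.168.39.161) is unreachable ("connect: no route to host"), so every SSH-backed probe fails and the node is reported as Host:Error / Kubelet:Nonexistent. What follows is a minimal, hypothetical Go sketch of that kind of dial-and-retry reachability probe. It is not minikube's implementation; the address, attempt count, timeout, and backoff are illustrative assumptions chosen to mirror what the log shows.

// probe_ssh.go — hypothetical sketch, not minikube code.
package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry tries a plain TCP connection to addr, retrying with a short
// backoff, and returns the first successful connection or the last error.
func dialWithRetry(addr string, attempts int, timeout, backoff time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		// Mirrors the "dial failure (will retry)" lines in the log above.
		fmt.Printf("dial failure (will retry after %s): %v\n", backoff, err)
		time.Sleep(backoff)
	}
	return nil, fmt.Errorf("host unreachable after %d attempts: %w", attempts, lastErr)
}

func main() {
	// 192.168.39.161:22 is the ha-293078-m02 SSH endpoint from the log above.
	conn, err := dialWithRetry("192.168.39.161:22", 3, 2*time.Second, 150*time.Millisecond)
	if err != nil {
		// This is the situation that makes the status command report Host:Error.
		fmt.Println("status: Error:", err)
		return
	}
	defer conn.Close()
	fmt.Println("status: Running (SSH port reachable)")
}

With the node powered off (as in this test), the sketch exhausts its retries and reports the same error class seen in the stderr block; once the node is reachable again, the first dial succeeds.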
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr: exit status 3 (3.759796026s)

                                                
                                                
-- stdout --
	ha-293078
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-293078-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-293078-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-293078-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 18:27:25.046356   31926 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:27:25.046481   31926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:27:25.046492   31926 out.go:304] Setting ErrFile to fd 2...
	I0401 18:27:25.046496   31926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:27:25.047079   31926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:27:25.047454   31926 out.go:298] Setting JSON to false
	I0401 18:27:25.047486   31926 mustload.go:65] Loading cluster: ha-293078
	I0401 18:27:25.047931   31926 notify.go:220] Checking for updates...
	I0401 18:27:25.048553   31926 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:27:25.048602   31926 status.go:255] checking status of ha-293078 ...
	I0401 18:27:25.049162   31926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:25.049225   31926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:25.064327   31926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37769
	I0401 18:27:25.064749   31926 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:25.065441   31926 main.go:141] libmachine: Using API Version  1
	I0401 18:27:25.065483   31926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:25.065877   31926 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:25.066077   31926 main.go:141] libmachine: (ha-293078) Calling .GetState
	I0401 18:27:25.067702   31926 status.go:330] ha-293078 host status = "Running" (err=<nil>)
	I0401 18:27:25.067717   31926 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:27:25.068100   31926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:25.068142   31926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:25.082768   31926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39685
	I0401 18:27:25.083192   31926 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:25.083608   31926 main.go:141] libmachine: Using API Version  1
	I0401 18:27:25.083626   31926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:25.084019   31926 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:25.084205   31926 main.go:141] libmachine: (ha-293078) Calling .GetIP
	I0401 18:27:25.087138   31926 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:25.087688   31926 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:27:25.087737   31926 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:25.087922   31926 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:27:25.088298   31926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:25.088344   31926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:25.103194   31926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35161
	I0401 18:27:25.103596   31926 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:25.104046   31926 main.go:141] libmachine: Using API Version  1
	I0401 18:27:25.104184   31926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:25.104622   31926 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:25.104819   31926 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:27:25.105034   31926 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:25.105062   31926 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:27:25.108081   31926 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:25.108513   31926 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:27:25.108539   31926 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:25.108719   31926 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:27:25.108883   31926 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:27:25.109035   31926 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:27:25.109175   31926 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:27:25.191557   31926 ssh_runner.go:195] Run: systemctl --version
	I0401 18:27:25.199084   31926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:27:25.217271   31926 kubeconfig.go:125] found "ha-293078" server: "https://192.168.39.254:8443"
	I0401 18:27:25.217300   31926 api_server.go:166] Checking apiserver status ...
	I0401 18:27:25.217329   31926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:27:25.234275   31926 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup
	W0401 18:27:25.245729   31926 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 18:27:25.245795   31926 ssh_runner.go:195] Run: ls
	I0401 18:27:25.251175   31926 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0401 18:27:25.258808   31926 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0401 18:27:25.258835   31926 status.go:422] ha-293078 apiserver status = Running (err=<nil>)
	I0401 18:27:25.258855   31926 status.go:257] ha-293078 status: &{Name:ha-293078 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:27:25.258875   31926 status.go:255] checking status of ha-293078-m02 ...
	I0401 18:27:25.259197   31926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:25.259231   31926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:25.274034   31926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33601
	I0401 18:27:25.274477   31926 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:25.274934   31926 main.go:141] libmachine: Using API Version  1
	I0401 18:27:25.274956   31926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:25.275264   31926 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:25.275420   31926 main.go:141] libmachine: (ha-293078-m02) Calling .GetState
	I0401 18:27:25.276989   31926 status.go:330] ha-293078-m02 host status = "Running" (err=<nil>)
	I0401 18:27:25.277010   31926 host.go:66] Checking if "ha-293078-m02" exists ...
	I0401 18:27:25.277527   31926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:25.277572   31926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:25.292446   31926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41497
	I0401 18:27:25.292820   31926 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:25.293230   31926 main.go:141] libmachine: Using API Version  1
	I0401 18:27:25.293251   31926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:25.293562   31926 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:25.293754   31926 main.go:141] libmachine: (ha-293078-m02) Calling .GetIP
	I0401 18:27:25.296551   31926 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:27:25.297029   31926 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:27:25.297052   31926 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:27:25.297219   31926 host.go:66] Checking if "ha-293078-m02" exists ...
	I0401 18:27:25.297612   31926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:25.297671   31926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:25.313423   31926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46193
	I0401 18:27:25.313845   31926 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:25.314327   31926 main.go:141] libmachine: Using API Version  1
	I0401 18:27:25.314352   31926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:25.314710   31926 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:25.314876   31926 main.go:141] libmachine: (ha-293078-m02) Calling .DriverName
	I0401 18:27:25.315045   31926 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:25.315066   31926 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:27:25.317763   31926 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:27:25.318216   31926 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:27:25.318239   31926 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:27:25.318370   31926 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:27:25.318546   31926 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:27:25.318693   31926 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:27:25.318821   31926 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/id_rsa Username:docker}
	W0401 18:27:28.381859   31926 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.161:22: connect: no route to host
	W0401 18:27:28.381980   31926 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	E0401 18:27:28.382002   31926 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	I0401 18:27:28.382015   31926 status.go:257] ha-293078-m02 status: &{Name:ha-293078-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0401 18:27:28.382040   31926 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	I0401 18:27:28.382060   31926 status.go:255] checking status of ha-293078-m03 ...
	I0401 18:27:28.382378   31926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:28.382420   31926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:28.396964   31926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43607
	I0401 18:27:28.397369   31926 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:28.397816   31926 main.go:141] libmachine: Using API Version  1
	I0401 18:27:28.397838   31926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:28.398154   31926 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:28.398362   31926 main.go:141] libmachine: (ha-293078-m03) Calling .GetState
	I0401 18:27:28.399858   31926 status.go:330] ha-293078-m03 host status = "Running" (err=<nil>)
	I0401 18:27:28.399874   31926 host.go:66] Checking if "ha-293078-m03" exists ...
	I0401 18:27:28.400211   31926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:28.400253   31926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:28.414199   31926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43845
	I0401 18:27:28.414655   31926 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:28.415149   31926 main.go:141] libmachine: Using API Version  1
	I0401 18:27:28.415172   31926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:28.415476   31926 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:28.415645   31926 main.go:141] libmachine: (ha-293078-m03) Calling .GetIP
	I0401 18:27:28.418360   31926 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:28.418765   31926 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:27:28.418783   31926 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:28.418930   31926 host.go:66] Checking if "ha-293078-m03" exists ...
	I0401 18:27:28.419217   31926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:28.419252   31926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:28.433547   31926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35525
	I0401 18:27:28.433988   31926 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:28.434422   31926 main.go:141] libmachine: Using API Version  1
	I0401 18:27:28.434443   31926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:28.434834   31926 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:28.435016   31926 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:27:28.435191   31926 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:28.435213   31926 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:27:28.437556   31926 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:28.438017   31926 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:27:28.438046   31926 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:28.438184   31926 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:27:28.438349   31926 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:27:28.438497   31926 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:27:28.438664   31926 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa Username:docker}
	I0401 18:27:28.527971   31926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:27:28.549335   31926 kubeconfig.go:125] found "ha-293078" server: "https://192.168.39.254:8443"
	I0401 18:27:28.549360   31926 api_server.go:166] Checking apiserver status ...
	I0401 18:27:28.549404   31926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:27:28.573415   31926 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup
	W0401 18:27:28.585618   31926 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 18:27:28.585689   31926 ssh_runner.go:195] Run: ls
	I0401 18:27:28.592485   31926 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0401 18:27:28.596706   31926 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0401 18:27:28.596731   31926 status.go:422] ha-293078-m03 apiserver status = Running (err=<nil>)
	I0401 18:27:28.596753   31926 status.go:257] ha-293078-m03 status: &{Name:ha-293078-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:27:28.596781   31926 status.go:255] checking status of ha-293078-m04 ...
	I0401 18:27:28.597073   31926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:28.597112   31926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:28.612680   31926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40093
	I0401 18:27:28.613091   31926 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:28.613578   31926 main.go:141] libmachine: Using API Version  1
	I0401 18:27:28.613598   31926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:28.613898   31926 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:28.614093   31926 main.go:141] libmachine: (ha-293078-m04) Calling .GetState
	I0401 18:27:28.615572   31926 status.go:330] ha-293078-m04 host status = "Running" (err=<nil>)
	I0401 18:27:28.615588   31926 host.go:66] Checking if "ha-293078-m04" exists ...
	I0401 18:27:28.615851   31926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:28.615882   31926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:28.630544   31926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36505
	I0401 18:27:28.630985   31926 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:28.631430   31926 main.go:141] libmachine: Using API Version  1
	I0401 18:27:28.631453   31926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:28.631789   31926 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:28.631988   31926 main.go:141] libmachine: (ha-293078-m04) Calling .GetIP
	I0401 18:27:28.634638   31926 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:28.635012   31926 main.go:141] libmachine: (ha-293078-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:ec:c5", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:23:56 +0000 UTC Type:0 Mac:52:54:00:b5:ec:c5 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-293078-m04 Clientid:01:52:54:00:b5:ec:c5}
	I0401 18:27:28.635042   31926 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:28.635166   31926 host.go:66] Checking if "ha-293078-m04" exists ...
	I0401 18:27:28.635440   31926 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:28.635482   31926 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:28.649894   31926 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46553
	I0401 18:27:28.650287   31926 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:28.650711   31926 main.go:141] libmachine: Using API Version  1
	I0401 18:27:28.650742   31926 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:28.651075   31926 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:28.651272   31926 main.go:141] libmachine: (ha-293078-m04) Calling .DriverName
	I0401 18:27:28.651451   31926 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:28.651490   31926 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHHostname
	I0401 18:27:28.654249   31926 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:28.654631   31926 main.go:141] libmachine: (ha-293078-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:ec:c5", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:23:56 +0000 UTC Type:0 Mac:52:54:00:b5:ec:c5 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-293078-m04 Clientid:01:52:54:00:b5:ec:c5}
	I0401 18:27:28.654655   31926 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:28.654784   31926 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHPort
	I0401 18:27:28.654961   31926 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHKeyPath
	I0401 18:27:28.655115   31926 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHUsername
	I0401 18:27:28.655254   31926 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m04/id_rsa Username:docker}
	I0401 18:27:28.734695   31926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:27:28.751795   31926 status.go:257] ha-293078-m04 status: &{Name:ha-293078-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
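For the reachable control-plane nodes the log takes the opposite path: the freezer-cgroup lookup fails harmlessly, and the decisive check is an HTTPS GET against https://192.168.39.254:8443/healthz, which returns 200 with body "ok" and yields APIServer:Running. Below is a minimal Go sketch of such a healthz probe, not the minikube implementation; skipping TLS verification is an assumption made only to keep the example self-contained, where a real client would trust the cluster CA bundle instead.

// healthz_probe.go — hypothetical sketch, not minikube code.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues an HTTPS GET against the apiserver healthz endpoint and
// treats any non-200 response (or transport error) as "apiserver not healthy".
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch only: no CA bundle at hand.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	// Mirrors the "returned 200: ok" lines in the log above.
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.39.254:8443/healthz"); err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	fmt.Println("apiserver status = Running")
}

This is why ha-293078 and ha-293078-m03 still report apiserver: Running in every retry of the status command, while only the SSH-unreachable ha-293078-m02 drives the overall exit status 3.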
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr: exit status 3 (3.766919389s)

                                                
                                                
-- stdout --
	ha-293078
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-293078-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-293078-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-293078-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 18:27:33.706245   32033 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:27:33.706513   32033 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:27:33.706559   32033 out.go:304] Setting ErrFile to fd 2...
	I0401 18:27:33.706579   32033 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:27:33.706967   32033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:27:33.707180   32033 out.go:298] Setting JSON to false
	I0401 18:27:33.707205   32033 mustload.go:65] Loading cluster: ha-293078
	I0401 18:27:33.707245   32033 notify.go:220] Checking for updates...
	I0401 18:27:33.707596   32033 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:27:33.707610   32033 status.go:255] checking status of ha-293078 ...
	I0401 18:27:33.707984   32033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:33.708022   32033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:33.728600   32033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35011
	I0401 18:27:33.729053   32033 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:33.729789   32033 main.go:141] libmachine: Using API Version  1
	I0401 18:27:33.729820   32033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:33.730210   32033 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:33.730424   32033 main.go:141] libmachine: (ha-293078) Calling .GetState
	I0401 18:27:33.731995   32033 status.go:330] ha-293078 host status = "Running" (err=<nil>)
	I0401 18:27:33.732016   32033 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:27:33.732440   32033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:33.732484   32033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:33.747646   32033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46487
	I0401 18:27:33.748054   32033 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:33.748469   32033 main.go:141] libmachine: Using API Version  1
	I0401 18:27:33.748493   32033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:33.748763   32033 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:33.748965   32033 main.go:141] libmachine: (ha-293078) Calling .GetIP
	I0401 18:27:33.751885   32033 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:33.752313   32033 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:27:33.752341   32033 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:33.752530   32033 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:27:33.752793   32033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:33.752821   32033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:33.767044   32033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37923
	I0401 18:27:33.767489   32033 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:33.767987   32033 main.go:141] libmachine: Using API Version  1
	I0401 18:27:33.768009   32033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:33.768308   32033 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:33.768535   32033 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:27:33.768722   32033 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:33.768749   32033 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:27:33.771434   32033 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:33.771946   32033 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:27:33.771975   32033 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:33.772230   32033 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:27:33.772384   32033 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:27:33.772530   32033 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:27:33.772666   32033 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:27:33.859000   32033 ssh_runner.go:195] Run: systemctl --version
	I0401 18:27:33.866204   32033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:27:33.883696   32033 kubeconfig.go:125] found "ha-293078" server: "https://192.168.39.254:8443"
	I0401 18:27:33.883727   32033 api_server.go:166] Checking apiserver status ...
	I0401 18:27:33.883756   32033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:27:33.898618   32033 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup
	W0401 18:27:33.909295   32033 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 18:27:33.909338   32033 ssh_runner.go:195] Run: ls
	I0401 18:27:33.914699   32033 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0401 18:27:33.919567   32033 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0401 18:27:33.919591   32033 status.go:422] ha-293078 apiserver status = Running (err=<nil>)
	I0401 18:27:33.919604   32033 status.go:257] ha-293078 status: &{Name:ha-293078 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:27:33.919634   32033 status.go:255] checking status of ha-293078-m02 ...
	I0401 18:27:33.919923   32033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:33.919963   32033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:33.935603   32033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39077
	I0401 18:27:33.936055   32033 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:33.936545   32033 main.go:141] libmachine: Using API Version  1
	I0401 18:27:33.936567   32033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:33.936851   32033 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:33.937130   32033 main.go:141] libmachine: (ha-293078-m02) Calling .GetState
	I0401 18:27:33.938743   32033 status.go:330] ha-293078-m02 host status = "Running" (err=<nil>)
	I0401 18:27:33.938761   32033 host.go:66] Checking if "ha-293078-m02" exists ...
	I0401 18:27:33.939042   32033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:33.939080   32033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:33.953213   32033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38909
	I0401 18:27:33.953585   32033 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:33.954052   32033 main.go:141] libmachine: Using API Version  1
	I0401 18:27:33.954077   32033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:33.954393   32033 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:33.954571   32033 main.go:141] libmachine: (ha-293078-m02) Calling .GetIP
	I0401 18:27:33.957386   32033 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:27:33.957894   32033 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:27:33.957921   32033 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:27:33.958129   32033 host.go:66] Checking if "ha-293078-m02" exists ...
	I0401 18:27:33.958528   32033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:33.958571   32033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:33.973010   32033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40199
	I0401 18:27:33.973461   32033 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:33.973988   32033 main.go:141] libmachine: Using API Version  1
	I0401 18:27:33.974013   32033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:33.974290   32033 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:33.974484   32033 main.go:141] libmachine: (ha-293078-m02) Calling .DriverName
	I0401 18:27:33.974674   32033 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:33.974695   32033 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:27:33.977752   32033 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:27:33.978209   32033 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:27:33.978240   32033 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:27:33.978371   32033 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:27:33.978530   32033 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:27:33.978699   32033 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:27:33.978841   32033 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/id_rsa Username:docker}
	W0401 18:27:37.053964   32033 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.161:22: connect: no route to host
	W0401 18:27:37.054070   32033 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	E0401 18:27:37.054088   32033 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	I0401 18:27:37.054104   32033 status.go:257] ha-293078-m02 status: &{Name:ha-293078-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0401 18:27:37.054128   32033 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	I0401 18:27:37.054139   32033 status.go:255] checking status of ha-293078-m03 ...
	I0401 18:27:37.054622   32033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:37.054675   32033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:37.069163   32033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35245
	I0401 18:27:37.069539   32033 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:37.070046   32033 main.go:141] libmachine: Using API Version  1
	I0401 18:27:37.070068   32033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:37.070401   32033 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:37.070591   32033 main.go:141] libmachine: (ha-293078-m03) Calling .GetState
	I0401 18:27:37.072022   32033 status.go:330] ha-293078-m03 host status = "Running" (err=<nil>)
	I0401 18:27:37.072040   32033 host.go:66] Checking if "ha-293078-m03" exists ...
	I0401 18:27:37.072307   32033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:37.072340   32033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:37.086778   32033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41103
	I0401 18:27:37.087168   32033 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:37.087621   32033 main.go:141] libmachine: Using API Version  1
	I0401 18:27:37.087654   32033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:37.087985   32033 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:37.088152   32033 main.go:141] libmachine: (ha-293078-m03) Calling .GetIP
	I0401 18:27:37.090871   32033 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:37.091338   32033 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:27:37.091364   32033 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:37.091458   32033 host.go:66] Checking if "ha-293078-m03" exists ...
	I0401 18:27:37.091757   32033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:37.091800   32033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:37.105909   32033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33227
	I0401 18:27:37.106348   32033 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:37.106786   32033 main.go:141] libmachine: Using API Version  1
	I0401 18:27:37.106807   32033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:37.107121   32033 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:37.107299   32033 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:27:37.107491   32033 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:37.107512   32033 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:27:37.110556   32033 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:37.111021   32033 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:27:37.111047   32033 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:37.111234   32033 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:27:37.111393   32033 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:27:37.111507   32033 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:27:37.111614   32033 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa Username:docker}
	I0401 18:27:37.202446   32033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:27:37.220831   32033 kubeconfig.go:125] found "ha-293078" server: "https://192.168.39.254:8443"
	I0401 18:27:37.220856   32033 api_server.go:166] Checking apiserver status ...
	I0401 18:27:37.220892   32033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:27:37.237526   32033 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup
	W0401 18:27:37.250042   32033 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 18:27:37.250105   32033 ssh_runner.go:195] Run: ls
	I0401 18:27:37.255419   32033 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0401 18:27:37.261744   32033 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0401 18:27:37.261765   32033 status.go:422] ha-293078-m03 apiserver status = Running (err=<nil>)
	I0401 18:27:37.261773   32033 status.go:257] ha-293078-m03 status: &{Name:ha-293078-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:27:37.261800   32033 status.go:255] checking status of ha-293078-m04 ...
	I0401 18:27:37.262140   32033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:37.262175   32033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:37.276492   32033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41021
	I0401 18:27:37.276976   32033 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:37.277514   32033 main.go:141] libmachine: Using API Version  1
	I0401 18:27:37.277536   32033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:37.277860   32033 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:37.278026   32033 main.go:141] libmachine: (ha-293078-m04) Calling .GetState
	I0401 18:27:37.279768   32033 status.go:330] ha-293078-m04 host status = "Running" (err=<nil>)
	I0401 18:27:37.279787   32033 host.go:66] Checking if "ha-293078-m04" exists ...
	I0401 18:27:37.280131   32033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:37.280184   32033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:37.294810   32033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45835
	I0401 18:27:37.295322   32033 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:37.295862   32033 main.go:141] libmachine: Using API Version  1
	I0401 18:27:37.295881   32033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:37.296186   32033 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:37.296415   32033 main.go:141] libmachine: (ha-293078-m04) Calling .GetIP
	I0401 18:27:37.299270   32033 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:37.299779   32033 main.go:141] libmachine: (ha-293078-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:ec:c5", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:23:56 +0000 UTC Type:0 Mac:52:54:00:b5:ec:c5 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-293078-m04 Clientid:01:52:54:00:b5:ec:c5}
	I0401 18:27:37.299805   32033 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:37.299956   32033 host.go:66] Checking if "ha-293078-m04" exists ...
	I0401 18:27:37.300324   32033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:37.300374   32033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:37.315603   32033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35141
	I0401 18:27:37.316048   32033 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:37.316541   32033 main.go:141] libmachine: Using API Version  1
	I0401 18:27:37.316558   32033 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:37.316867   32033 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:37.317053   32033 main.go:141] libmachine: (ha-293078-m04) Calling .DriverName
	I0401 18:27:37.317214   32033 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:37.317236   32033 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHHostname
	I0401 18:27:37.320058   32033 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:37.320493   32033 main.go:141] libmachine: (ha-293078-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:ec:c5", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:23:56 +0000 UTC Type:0 Mac:52:54:00:b5:ec:c5 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-293078-m04 Clientid:01:52:54:00:b5:ec:c5}
	I0401 18:27:37.320526   32033 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:37.320634   32033 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHPort
	I0401 18:27:37.320820   32033 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHKeyPath
	I0401 18:27:37.320984   32033 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHUsername
	I0401 18:27:37.321114   32033 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m04/id_rsa Username:docker}
	I0401 18:27:37.402409   32033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:27:37.419210   32033 status.go:257] ha-293078-m04 status: &{Name:ha-293078-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr: exit status 7 (647.525677ms)

                                                
                                                
-- stdout --
	ha-293078
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-293078-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-293078-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-293078-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 18:27:43.588870   32166 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:27:43.589160   32166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:27:43.589171   32166 out.go:304] Setting ErrFile to fd 2...
	I0401 18:27:43.589175   32166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:27:43.589442   32166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:27:43.589628   32166 out.go:298] Setting JSON to false
	I0401 18:27:43.589668   32166 mustload.go:65] Loading cluster: ha-293078
	I0401 18:27:43.589786   32166 notify.go:220] Checking for updates...
	I0401 18:27:43.590198   32166 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:27:43.590216   32166 status.go:255] checking status of ha-293078 ...
	I0401 18:27:43.590670   32166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:43.590730   32166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:43.605512   32166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46673
	I0401 18:27:43.606039   32166 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:43.606787   32166 main.go:141] libmachine: Using API Version  1
	I0401 18:27:43.606835   32166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:43.607259   32166 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:43.607486   32166 main.go:141] libmachine: (ha-293078) Calling .GetState
	I0401 18:27:43.609663   32166 status.go:330] ha-293078 host status = "Running" (err=<nil>)
	I0401 18:27:43.609678   32166 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:27:43.609966   32166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:43.610017   32166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:43.625288   32166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34165
	I0401 18:27:43.625832   32166 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:43.626313   32166 main.go:141] libmachine: Using API Version  1
	I0401 18:27:43.626341   32166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:43.626650   32166 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:43.626835   32166 main.go:141] libmachine: (ha-293078) Calling .GetIP
	I0401 18:27:43.629765   32166 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:43.630253   32166 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:27:43.630286   32166 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:43.630459   32166 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:27:43.630740   32166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:43.630778   32166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:43.646189   32166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35057
	I0401 18:27:43.646665   32166 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:43.647175   32166 main.go:141] libmachine: Using API Version  1
	I0401 18:27:43.647205   32166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:43.647512   32166 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:43.647660   32166 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:27:43.647839   32166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:43.647915   32166 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:27:43.650945   32166 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:43.651405   32166 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:27:43.651436   32166 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:43.651606   32166 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:27:43.651777   32166 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:27:43.651918   32166 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:27:43.652093   32166 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:27:43.736416   32166 ssh_runner.go:195] Run: systemctl --version
	I0401 18:27:43.744803   32166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:27:43.765527   32166 kubeconfig.go:125] found "ha-293078" server: "https://192.168.39.254:8443"
	I0401 18:27:43.765561   32166 api_server.go:166] Checking apiserver status ...
	I0401 18:27:43.765592   32166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:27:43.782967   32166 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup
	W0401 18:27:43.795160   32166 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 18:27:43.795215   32166 ssh_runner.go:195] Run: ls
	I0401 18:27:43.800363   32166 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0401 18:27:43.804785   32166 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0401 18:27:43.804804   32166 status.go:422] ha-293078 apiserver status = Running (err=<nil>)
	I0401 18:27:43.804815   32166 status.go:257] ha-293078 status: &{Name:ha-293078 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:27:43.804830   32166 status.go:255] checking status of ha-293078-m02 ...
	I0401 18:27:43.805124   32166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:43.805158   32166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:43.820243   32166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37015
	I0401 18:27:43.820796   32166 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:43.821316   32166 main.go:141] libmachine: Using API Version  1
	I0401 18:27:43.821338   32166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:43.821713   32166 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:43.821943   32166 main.go:141] libmachine: (ha-293078-m02) Calling .GetState
	I0401 18:27:43.823648   32166 status.go:330] ha-293078-m02 host status = "Stopped" (err=<nil>)
	I0401 18:27:43.823663   32166 status.go:343] host is not running, skipping remaining checks
	I0401 18:27:43.823673   32166 status.go:257] ha-293078-m02 status: &{Name:ha-293078-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:27:43.823694   32166 status.go:255] checking status of ha-293078-m03 ...
	I0401 18:27:43.824001   32166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:43.824050   32166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:43.840597   32166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39229
	I0401 18:27:43.841009   32166 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:43.841479   32166 main.go:141] libmachine: Using API Version  1
	I0401 18:27:43.841506   32166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:43.841873   32166 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:43.842028   32166 main.go:141] libmachine: (ha-293078-m03) Calling .GetState
	I0401 18:27:43.843574   32166 status.go:330] ha-293078-m03 host status = "Running" (err=<nil>)
	I0401 18:27:43.843589   32166 host.go:66] Checking if "ha-293078-m03" exists ...
	I0401 18:27:43.843847   32166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:43.843879   32166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:43.858415   32166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45837
	I0401 18:27:43.858815   32166 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:43.859224   32166 main.go:141] libmachine: Using API Version  1
	I0401 18:27:43.859245   32166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:43.859529   32166 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:43.859691   32166 main.go:141] libmachine: (ha-293078-m03) Calling .GetIP
	I0401 18:27:43.862625   32166 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:43.863060   32166 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:27:43.863081   32166 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:43.863263   32166 host.go:66] Checking if "ha-293078-m03" exists ...
	I0401 18:27:43.863595   32166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:43.863664   32166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:43.878388   32166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41597
	I0401 18:27:43.878808   32166 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:43.879250   32166 main.go:141] libmachine: Using API Version  1
	I0401 18:27:43.879271   32166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:43.879572   32166 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:43.879758   32166 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:27:43.879952   32166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:43.879974   32166 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:27:43.882785   32166 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:43.883211   32166 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:27:43.883240   32166 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:43.883392   32166 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:27:43.883554   32166 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:27:43.883689   32166 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:27:43.883836   32166 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa Username:docker}
	I0401 18:27:43.966948   32166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:27:43.985523   32166 kubeconfig.go:125] found "ha-293078" server: "https://192.168.39.254:8443"
	I0401 18:27:43.985548   32166 api_server.go:166] Checking apiserver status ...
	I0401 18:27:43.985586   32166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:27:44.002021   32166 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup
	W0401 18:27:44.015892   32166 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 18:27:44.015955   32166 ssh_runner.go:195] Run: ls
	I0401 18:27:44.021665   32166 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0401 18:27:44.026399   32166 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0401 18:27:44.026422   32166 status.go:422] ha-293078-m03 apiserver status = Running (err=<nil>)
	I0401 18:27:44.026430   32166 status.go:257] ha-293078-m03 status: &{Name:ha-293078-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:27:44.026444   32166 status.go:255] checking status of ha-293078-m04 ...
	I0401 18:27:44.026719   32166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:44.026749   32166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:44.041379   32166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44079
	I0401 18:27:44.041813   32166 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:44.042313   32166 main.go:141] libmachine: Using API Version  1
	I0401 18:27:44.042335   32166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:44.042644   32166 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:44.042835   32166 main.go:141] libmachine: (ha-293078-m04) Calling .GetState
	I0401 18:27:44.044299   32166 status.go:330] ha-293078-m04 host status = "Running" (err=<nil>)
	I0401 18:27:44.044316   32166 host.go:66] Checking if "ha-293078-m04" exists ...
	I0401 18:27:44.044570   32166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:44.044608   32166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:44.059444   32166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42953
	I0401 18:27:44.059786   32166 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:44.060228   32166 main.go:141] libmachine: Using API Version  1
	I0401 18:27:44.060248   32166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:44.060543   32166 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:44.060747   32166 main.go:141] libmachine: (ha-293078-m04) Calling .GetIP
	I0401 18:27:44.063575   32166 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:44.064074   32166 main.go:141] libmachine: (ha-293078-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:ec:c5", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:23:56 +0000 UTC Type:0 Mac:52:54:00:b5:ec:c5 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-293078-m04 Clientid:01:52:54:00:b5:ec:c5}
	I0401 18:27:44.064111   32166 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:44.064190   32166 host.go:66] Checking if "ha-293078-m04" exists ...
	I0401 18:27:44.064455   32166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:44.064490   32166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:44.079021   32166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33665
	I0401 18:27:44.079372   32166 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:44.079781   32166 main.go:141] libmachine: Using API Version  1
	I0401 18:27:44.079802   32166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:44.080169   32166 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:44.080347   32166 main.go:141] libmachine: (ha-293078-m04) Calling .DriverName
	I0401 18:27:44.080496   32166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:44.080516   32166 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHHostname
	I0401 18:27:44.083267   32166 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:44.083654   32166 main.go:141] libmachine: (ha-293078-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:ec:c5", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:23:56 +0000 UTC Type:0 Mac:52:54:00:b5:ec:c5 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-293078-m04 Clientid:01:52:54:00:b5:ec:c5}
	I0401 18:27:44.083681   32166 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:44.083813   32166 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHPort
	I0401 18:27:44.083988   32166 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHKeyPath
	I0401 18:27:44.084125   32166 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHUsername
	I0401 18:27:44.084256   32166 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m04/id_rsa Username:docker}
	I0401 18:27:44.162275   32166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:27:44.179054   32166 status.go:257] ha-293078-m04 status: &{Name:ha-293078-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr: exit status 7 (657.538114ms)

                                                
                                                
-- stdout --
	ha-293078
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-293078-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-293078-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-293078-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 18:27:54.149968   32260 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:27:54.150122   32260 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:27:54.150136   32260 out.go:304] Setting ErrFile to fd 2...
	I0401 18:27:54.150143   32260 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:27:54.150420   32260 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:27:54.150658   32260 out.go:298] Setting JSON to false
	I0401 18:27:54.150687   32260 mustload.go:65] Loading cluster: ha-293078
	I0401 18:27:54.150808   32260 notify.go:220] Checking for updates...
	I0401 18:27:54.151119   32260 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:27:54.151142   32260 status.go:255] checking status of ha-293078 ...
	I0401 18:27:54.151587   32260 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:54.151648   32260 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:54.167837   32260 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41891
	I0401 18:27:54.168238   32260 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:54.168819   32260 main.go:141] libmachine: Using API Version  1
	I0401 18:27:54.168847   32260 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:54.169237   32260 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:54.169451   32260 main.go:141] libmachine: (ha-293078) Calling .GetState
	I0401 18:27:54.171125   32260 status.go:330] ha-293078 host status = "Running" (err=<nil>)
	I0401 18:27:54.171142   32260 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:27:54.171534   32260 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:54.171580   32260 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:54.185836   32260 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45411
	I0401 18:27:54.186288   32260 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:54.186793   32260 main.go:141] libmachine: Using API Version  1
	I0401 18:27:54.186817   32260 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:54.187117   32260 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:54.187298   32260 main.go:141] libmachine: (ha-293078) Calling .GetIP
	I0401 18:27:54.190232   32260 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:54.190563   32260 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:27:54.190584   32260 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:54.190745   32260 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:27:54.191034   32260 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:54.191078   32260 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:54.205232   32260 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43031
	I0401 18:27:54.205681   32260 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:54.206133   32260 main.go:141] libmachine: Using API Version  1
	I0401 18:27:54.206160   32260 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:54.206488   32260 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:54.206674   32260 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:27:54.206863   32260 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:54.206895   32260 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:27:54.209543   32260 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:54.209944   32260 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:27:54.209977   32260 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:27:54.210129   32260 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:27:54.210295   32260 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:27:54.210433   32260 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:27:54.210561   32260 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:27:54.298482   32260 ssh_runner.go:195] Run: systemctl --version
	I0401 18:27:54.304924   32260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:27:54.322752   32260 kubeconfig.go:125] found "ha-293078" server: "https://192.168.39.254:8443"
	I0401 18:27:54.322791   32260 api_server.go:166] Checking apiserver status ...
	I0401 18:27:54.322835   32260 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:27:54.344441   32260 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup
	W0401 18:27:54.356085   32260 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 18:27:54.356125   32260 ssh_runner.go:195] Run: ls
	I0401 18:27:54.362134   32260 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0401 18:27:54.366799   32260 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0401 18:27:54.366820   32260 status.go:422] ha-293078 apiserver status = Running (err=<nil>)
	I0401 18:27:54.366834   32260 status.go:257] ha-293078 status: &{Name:ha-293078 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:27:54.366875   32260 status.go:255] checking status of ha-293078-m02 ...
	I0401 18:27:54.367159   32260 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:54.367201   32260 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:54.381468   32260 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46389
	I0401 18:27:54.381872   32260 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:54.382336   32260 main.go:141] libmachine: Using API Version  1
	I0401 18:27:54.382356   32260 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:54.382646   32260 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:54.382829   32260 main.go:141] libmachine: (ha-293078-m02) Calling .GetState
	I0401 18:27:54.384204   32260 status.go:330] ha-293078-m02 host status = "Stopped" (err=<nil>)
	I0401 18:27:54.384219   32260 status.go:343] host is not running, skipping remaining checks
	I0401 18:27:54.384226   32260 status.go:257] ha-293078-m02 status: &{Name:ha-293078-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:27:54.384244   32260 status.go:255] checking status of ha-293078-m03 ...
	I0401 18:27:54.384509   32260 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:54.384542   32260 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:54.400390   32260 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41571
	I0401 18:27:54.400871   32260 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:54.401360   32260 main.go:141] libmachine: Using API Version  1
	I0401 18:27:54.401386   32260 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:54.401683   32260 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:54.401869   32260 main.go:141] libmachine: (ha-293078-m03) Calling .GetState
	I0401 18:27:54.403398   32260 status.go:330] ha-293078-m03 host status = "Running" (err=<nil>)
	I0401 18:27:54.403415   32260 host.go:66] Checking if "ha-293078-m03" exists ...
	I0401 18:27:54.403791   32260 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:54.403834   32260 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:54.417862   32260 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36905
	I0401 18:27:54.418199   32260 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:54.418619   32260 main.go:141] libmachine: Using API Version  1
	I0401 18:27:54.418638   32260 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:54.418931   32260 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:54.419099   32260 main.go:141] libmachine: (ha-293078-m03) Calling .GetIP
	I0401 18:27:54.421520   32260 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:54.421978   32260 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:27:54.422003   32260 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:54.422123   32260 host.go:66] Checking if "ha-293078-m03" exists ...
	I0401 18:27:54.422391   32260 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:54.422452   32260 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:54.436622   32260 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43601
	I0401 18:27:54.437082   32260 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:54.437599   32260 main.go:141] libmachine: Using API Version  1
	I0401 18:27:54.437620   32260 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:54.437921   32260 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:54.438097   32260 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:27:54.438283   32260 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:54.438308   32260 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:27:54.440863   32260 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:54.441273   32260 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:27:54.441303   32260 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:27:54.441449   32260 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:27:54.441606   32260 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:27:54.441778   32260 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:27:54.441943   32260 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa Username:docker}
	I0401 18:27:54.526878   32260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:27:54.543986   32260 kubeconfig.go:125] found "ha-293078" server: "https://192.168.39.254:8443"
	I0401 18:27:54.544013   32260 api_server.go:166] Checking apiserver status ...
	I0401 18:27:54.544054   32260 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:27:54.562821   32260 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup
	W0401 18:27:54.581232   32260 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 18:27:54.581292   32260 ssh_runner.go:195] Run: ls
	I0401 18:27:54.587125   32260 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0401 18:27:54.594786   32260 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0401 18:27:54.594817   32260 status.go:422] ha-293078-m03 apiserver status = Running (err=<nil>)
	I0401 18:27:54.594826   32260 status.go:257] ha-293078-m03 status: &{Name:ha-293078-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:27:54.594841   32260 status.go:255] checking status of ha-293078-m04 ...
	I0401 18:27:54.595230   32260 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:54.595273   32260 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:54.610878   32260 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35893
	I0401 18:27:54.611275   32260 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:54.611814   32260 main.go:141] libmachine: Using API Version  1
	I0401 18:27:54.611843   32260 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:54.612203   32260 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:54.612397   32260 main.go:141] libmachine: (ha-293078-m04) Calling .GetState
	I0401 18:27:54.614214   32260 status.go:330] ha-293078-m04 host status = "Running" (err=<nil>)
	I0401 18:27:54.614245   32260 host.go:66] Checking if "ha-293078-m04" exists ...
	I0401 18:27:54.614607   32260 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:54.614646   32260 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:54.630021   32260 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33725
	I0401 18:27:54.630473   32260 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:54.630880   32260 main.go:141] libmachine: Using API Version  1
	I0401 18:27:54.630904   32260 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:54.631217   32260 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:54.631364   32260 main.go:141] libmachine: (ha-293078-m04) Calling .GetIP
	I0401 18:27:54.634037   32260 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:54.634443   32260 main.go:141] libmachine: (ha-293078-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:ec:c5", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:23:56 +0000 UTC Type:0 Mac:52:54:00:b5:ec:c5 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-293078-m04 Clientid:01:52:54:00:b5:ec:c5}
	I0401 18:27:54.634479   32260 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:54.634608   32260 host.go:66] Checking if "ha-293078-m04" exists ...
	I0401 18:27:54.634895   32260 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:27:54.634934   32260 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:27:54.649067   32260 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38605
	I0401 18:27:54.649436   32260 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:27:54.649932   32260 main.go:141] libmachine: Using API Version  1
	I0401 18:27:54.649950   32260 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:27:54.650286   32260 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:27:54.650510   32260 main.go:141] libmachine: (ha-293078-m04) Calling .DriverName
	I0401 18:27:54.650689   32260 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:27:54.650708   32260 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHHostname
	I0401 18:27:54.653717   32260 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:54.654148   32260 main.go:141] libmachine: (ha-293078-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:ec:c5", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:23:56 +0000 UTC Type:0 Mac:52:54:00:b5:ec:c5 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-293078-m04 Clientid:01:52:54:00:b5:ec:c5}
	I0401 18:27:54.654179   32260 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:27:54.654289   32260 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHPort
	I0401 18:27:54.654466   32260 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHKeyPath
	I0401 18:27:54.654590   32260 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHUsername
	I0401 18:27:54.654727   32260 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m04/id_rsa Username:docker}
	I0401 18:27:54.734174   32260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:27:54.751029   32260 status.go:257] ha-293078-m04 status: &{Name:ha-293078-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr: exit status 7 (659.548979ms)

                                                
                                                
-- stdout --
	ha-293078
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-293078-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-293078-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-293078-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 18:28:02.677497   32354 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:28:02.677781   32354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:28:02.677791   32354 out.go:304] Setting ErrFile to fd 2...
	I0401 18:28:02.677795   32354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:28:02.678105   32354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:28:02.678329   32354 out.go:298] Setting JSON to false
	I0401 18:28:02.678355   32354 mustload.go:65] Loading cluster: ha-293078
	I0401 18:28:02.678398   32354 notify.go:220] Checking for updates...
	I0401 18:28:02.678702   32354 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:28:02.678715   32354 status.go:255] checking status of ha-293078 ...
	I0401 18:28:02.679137   32354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:28:02.679220   32354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:28:02.694459   32354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36381
	I0401 18:28:02.694897   32354 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:28:02.695538   32354 main.go:141] libmachine: Using API Version  1
	I0401 18:28:02.695569   32354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:28:02.695907   32354 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:28:02.696108   32354 main.go:141] libmachine: (ha-293078) Calling .GetState
	I0401 18:28:02.697784   32354 status.go:330] ha-293078 host status = "Running" (err=<nil>)
	I0401 18:28:02.697818   32354 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:28:02.698215   32354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:28:02.698261   32354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:28:02.713148   32354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36561
	I0401 18:28:02.713685   32354 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:28:02.714111   32354 main.go:141] libmachine: Using API Version  1
	I0401 18:28:02.714131   32354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:28:02.714410   32354 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:28:02.714613   32354 main.go:141] libmachine: (ha-293078) Calling .GetIP
	I0401 18:28:02.717738   32354 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:28:02.718158   32354 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:28:02.718198   32354 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:28:02.718303   32354 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:28:02.718586   32354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:28:02.718624   32354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:28:02.732907   32354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38421
	I0401 18:28:02.733296   32354 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:28:02.733766   32354 main.go:141] libmachine: Using API Version  1
	I0401 18:28:02.733785   32354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:28:02.734073   32354 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:28:02.734255   32354 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:28:02.734409   32354 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:28:02.734427   32354 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:28:02.737211   32354 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:28:02.737694   32354 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:28:02.737725   32354 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:28:02.737879   32354 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:28:02.738088   32354 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:28:02.738213   32354 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:28:02.738346   32354 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:28:02.823852   32354 ssh_runner.go:195] Run: systemctl --version
	I0401 18:28:02.831597   32354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:28:02.849599   32354 kubeconfig.go:125] found "ha-293078" server: "https://192.168.39.254:8443"
	I0401 18:28:02.849632   32354 api_server.go:166] Checking apiserver status ...
	I0401 18:28:02.849689   32354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:28:02.865974   32354 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup
	W0401 18:28:02.877507   32354 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 18:28:02.877578   32354 ssh_runner.go:195] Run: ls
	I0401 18:28:02.883737   32354 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0401 18:28:02.890318   32354 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0401 18:28:02.890345   32354 status.go:422] ha-293078 apiserver status = Running (err=<nil>)
	I0401 18:28:02.890358   32354 status.go:257] ha-293078 status: &{Name:ha-293078 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:28:02.890378   32354 status.go:255] checking status of ha-293078-m02 ...
	I0401 18:28:02.890720   32354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:28:02.890760   32354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:28:02.905591   32354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45995
	I0401 18:28:02.906031   32354 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:28:02.906551   32354 main.go:141] libmachine: Using API Version  1
	I0401 18:28:02.906571   32354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:28:02.906955   32354 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:28:02.907217   32354 main.go:141] libmachine: (ha-293078-m02) Calling .GetState
	I0401 18:28:02.908761   32354 status.go:330] ha-293078-m02 host status = "Stopped" (err=<nil>)
	I0401 18:28:02.908777   32354 status.go:343] host is not running, skipping remaining checks
	I0401 18:28:02.908785   32354 status.go:257] ha-293078-m02 status: &{Name:ha-293078-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:28:02.908806   32354 status.go:255] checking status of ha-293078-m03 ...
	I0401 18:28:02.909245   32354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:28:02.909285   32354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:28:02.926314   32354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45741
	I0401 18:28:02.926701   32354 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:28:02.927154   32354 main.go:141] libmachine: Using API Version  1
	I0401 18:28:02.927177   32354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:28:02.927496   32354 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:28:02.927754   32354 main.go:141] libmachine: (ha-293078-m03) Calling .GetState
	I0401 18:28:02.929418   32354 status.go:330] ha-293078-m03 host status = "Running" (err=<nil>)
	I0401 18:28:02.929434   32354 host.go:66] Checking if "ha-293078-m03" exists ...
	I0401 18:28:02.929726   32354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:28:02.929786   32354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:28:02.944687   32354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44681
	I0401 18:28:02.945086   32354 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:28:02.945530   32354 main.go:141] libmachine: Using API Version  1
	I0401 18:28:02.945552   32354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:28:02.945883   32354 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:28:02.946070   32354 main.go:141] libmachine: (ha-293078-m03) Calling .GetIP
	I0401 18:28:02.948805   32354 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:28:02.949232   32354 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:28:02.949254   32354 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:28:02.949414   32354 host.go:66] Checking if "ha-293078-m03" exists ...
	I0401 18:28:02.949728   32354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:28:02.949770   32354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:28:02.964004   32354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40837
	I0401 18:28:02.964475   32354 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:28:02.964958   32354 main.go:141] libmachine: Using API Version  1
	I0401 18:28:02.964985   32354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:28:02.965261   32354 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:28:02.965461   32354 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:28:02.965639   32354 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:28:02.965692   32354 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:28:02.968357   32354 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:28:02.968783   32354 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:28:02.968805   32354 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:28:02.968977   32354 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:28:02.969151   32354 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:28:02.969281   32354 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:28:02.969434   32354 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa Username:docker}
	I0401 18:28:03.058182   32354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:28:03.076466   32354 kubeconfig.go:125] found "ha-293078" server: "https://192.168.39.254:8443"
	I0401 18:28:03.076491   32354 api_server.go:166] Checking apiserver status ...
	I0401 18:28:03.076519   32354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:28:03.093667   32354 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup
	W0401 18:28:03.104284   32354 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1510/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 18:28:03.104345   32354 ssh_runner.go:195] Run: ls
	I0401 18:28:03.109475   32354 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0401 18:28:03.113588   32354 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0401 18:28:03.113609   32354 status.go:422] ha-293078-m03 apiserver status = Running (err=<nil>)
	I0401 18:28:03.113619   32354 status.go:257] ha-293078-m03 status: &{Name:ha-293078-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:28:03.113637   32354 status.go:255] checking status of ha-293078-m04 ...
	I0401 18:28:03.113961   32354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:28:03.114000   32354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:28:03.129941   32354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44991
	I0401 18:28:03.130456   32354 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:28:03.130937   32354 main.go:141] libmachine: Using API Version  1
	I0401 18:28:03.130956   32354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:28:03.131239   32354 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:28:03.131420   32354 main.go:141] libmachine: (ha-293078-m04) Calling .GetState
	I0401 18:28:03.133035   32354 status.go:330] ha-293078-m04 host status = "Running" (err=<nil>)
	I0401 18:28:03.133051   32354 host.go:66] Checking if "ha-293078-m04" exists ...
	I0401 18:28:03.133346   32354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:28:03.133379   32354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:28:03.153100   32354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34841
	I0401 18:28:03.153543   32354 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:28:03.154101   32354 main.go:141] libmachine: Using API Version  1
	I0401 18:28:03.154121   32354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:28:03.154463   32354 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:28:03.154674   32354 main.go:141] libmachine: (ha-293078-m04) Calling .GetIP
	I0401 18:28:03.157359   32354 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:28:03.157843   32354 main.go:141] libmachine: (ha-293078-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:ec:c5", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:23:56 +0000 UTC Type:0 Mac:52:54:00:b5:ec:c5 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-293078-m04 Clientid:01:52:54:00:b5:ec:c5}
	I0401 18:28:03.157878   32354 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:28:03.158027   32354 host.go:66] Checking if "ha-293078-m04" exists ...
	I0401 18:28:03.158353   32354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:28:03.158400   32354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:28:03.172987   32354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36945
	I0401 18:28:03.173365   32354 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:28:03.173907   32354 main.go:141] libmachine: Using API Version  1
	I0401 18:28:03.173932   32354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:28:03.174277   32354 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:28:03.174489   32354 main.go:141] libmachine: (ha-293078-m04) Calling .DriverName
	I0401 18:28:03.174687   32354 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:28:03.174705   32354 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHHostname
	I0401 18:28:03.177347   32354 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:28:03.177843   32354 main.go:141] libmachine: (ha-293078-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:ec:c5", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:23:56 +0000 UTC Type:0 Mac:52:54:00:b5:ec:c5 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-293078-m04 Clientid:01:52:54:00:b5:ec:c5}
	I0401 18:28:03.177880   32354 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:28:03.178051   32354 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHPort
	I0401 18:28:03.178408   32354 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHKeyPath
	I0401 18:28:03.178592   32354 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHUsername
	I0401 18:28:03.178773   32354 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m04/id_rsa Username:docker}
	I0401 18:28:03.262230   32354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:28:03.280383   32354 status.go:257] ha-293078-m04 status: &{Name:ha-293078-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr" : exit status 7
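For context on what the failed status check above exercises: the stderr trace shows each node being verified over SSH (sudo systemctl is-active --quiet service kubelet) followed by a probe of the cluster endpoint https://192.168.39.254:8443/healthz, which returns 200 "ok"; the non-zero exit reflects the still-stopped ha-293078-m02 shown in the stdout block. As a minimal illustrative sketch (not minikube's own code), a standalone healthz probe of that kind could look like the Go program below; the hard-coded endpoint and the InsecureSkipVerify setting are assumptions made only for this example.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Assumption for the sketch: probe the load-balanced apiserver endpoint seen
	// in the log and skip TLS verification (the real client authenticates with
	// the cluster's certificates instead).
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", as in the trace above.
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}

Run against a reachable control-plane VIP, this prints the same 200/"ok" that the trace records; against a stopped node it fails at the HTTP request instead.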
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-293078 -n ha-293078
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-293078 logs -n 25: (1.55990536s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-293078 cp ha-293078-m03:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078:/home/docker/cp-test_ha-293078-m03_ha-293078.txt                       |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n ha-293078 sudo cat                                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /home/docker/cp-test_ha-293078-m03_ha-293078.txt                                 |           |         |                |                     |                     |
	| cp      | ha-293078 cp ha-293078-m03:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m02:/home/docker/cp-test_ha-293078-m03_ha-293078-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n ha-293078-m02 sudo cat                                          | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /home/docker/cp-test_ha-293078-m03_ha-293078-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-293078 cp ha-293078-m03:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04:/home/docker/cp-test_ha-293078-m03_ha-293078-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n ha-293078-m04 sudo cat                                          | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /home/docker/cp-test_ha-293078-m03_ha-293078-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-293078 cp testdata/cp-test.txt                                                | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-293078 cp ha-293078-m04:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3967030531/001/cp-test_ha-293078-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-293078 cp ha-293078-m04:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078:/home/docker/cp-test_ha-293078-m04_ha-293078.txt                       |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n ha-293078 sudo cat                                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /home/docker/cp-test_ha-293078-m04_ha-293078.txt                                 |           |         |                |                     |                     |
	| cp      | ha-293078 cp ha-293078-m04:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m02:/home/docker/cp-test_ha-293078-m04_ha-293078-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n ha-293078-m02 sudo cat                                          | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /home/docker/cp-test_ha-293078-m04_ha-293078-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-293078 cp ha-293078-m04:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m03:/home/docker/cp-test_ha-293078-m04_ha-293078-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n ha-293078-m03 sudo cat                                          | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /home/docker/cp-test_ha-293078-m04_ha-293078-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-293078 node stop m02 -v=7                                                     | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-293078 node start m02 -v=7                                                    | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:27 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 18:20:08
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 18:20:08.169597   27284 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:20:08.169727   27284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:20:08.169736   27284 out.go:304] Setting ErrFile to fd 2...
	I0401 18:20:08.169741   27284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:20:08.169959   27284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:20:08.170658   27284 out.go:298] Setting JSON to false
	I0401 18:20:08.171489   27284 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3760,"bootTime":1711991848,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 18:20:08.171542   27284 start.go:139] virtualization: kvm guest
	I0401 18:20:08.173669   27284 out.go:177] * [ha-293078] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 18:20:08.175086   27284 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 18:20:08.175120   27284 notify.go:220] Checking for updates...
	I0401 18:20:08.176449   27284 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 18:20:08.177913   27284 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 18:20:08.179348   27284 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:20:08.180659   27284 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 18:20:08.182041   27284 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 18:20:08.183488   27284 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 18:20:08.217194   27284 out.go:177] * Using the kvm2 driver based on user configuration
	I0401 18:20:08.218651   27284 start.go:297] selected driver: kvm2
	I0401 18:20:08.218676   27284 start.go:901] validating driver "kvm2" against <nil>
	I0401 18:20:08.218689   27284 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 18:20:08.219402   27284 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 18:20:08.219517   27284 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 18:20:08.233744   27284 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 18:20:08.233777   27284 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 18:20:08.234002   27284 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 18:20:08.234071   27284 cni.go:84] Creating CNI manager for ""
	I0401 18:20:08.234087   27284 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0401 18:20:08.234099   27284 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 18:20:08.234162   27284 start.go:340] cluster config:
	{Name:ha-293078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-293078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:20:08.234288   27284 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 18:20:08.236122   27284 out.go:177] * Starting "ha-293078" primary control-plane node in "ha-293078" cluster
	I0401 18:20:08.237614   27284 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 18:20:08.237656   27284 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0401 18:20:08.237684   27284 cache.go:56] Caching tarball of preloaded images
	I0401 18:20:08.237772   27284 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 18:20:08.237787   27284 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0401 18:20:08.238046   27284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/config.json ...
	I0401 18:20:08.238066   27284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/config.json: {Name:mke97edf58f64b766cee43b56480c9c081c5d8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:20:08.238207   27284 start.go:360] acquireMachinesLock for ha-293078: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 18:20:08.238243   27284 start.go:364] duration metric: took 19.532µs to acquireMachinesLock for "ha-293078"
	I0401 18:20:08.238265   27284 start.go:93] Provisioning new machine with config: &{Name:ha-293078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-293078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 18:20:08.238318   27284 start.go:125] createHost starting for "" (driver="kvm2")
	I0401 18:20:08.239952   27284 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0401 18:20:08.240088   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:20:08.240162   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:20:08.253548   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42945
	I0401 18:20:08.253976   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:20:08.254480   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:20:08.254510   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:20:08.254853   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:20:08.255023   27284 main.go:141] libmachine: (ha-293078) Calling .GetMachineName
	I0401 18:20:08.255150   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:20:08.255269   27284 start.go:159] libmachine.API.Create for "ha-293078" (driver="kvm2")
	I0401 18:20:08.255296   27284 client.go:168] LocalClient.Create starting
	I0401 18:20:08.255326   27284 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem
	I0401 18:20:08.255355   27284 main.go:141] libmachine: Decoding PEM data...
	I0401 18:20:08.255373   27284 main.go:141] libmachine: Parsing certificate...
	I0401 18:20:08.255426   27284 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem
	I0401 18:20:08.255451   27284 main.go:141] libmachine: Decoding PEM data...
	I0401 18:20:08.255466   27284 main.go:141] libmachine: Parsing certificate...
	I0401 18:20:08.255483   27284 main.go:141] libmachine: Running pre-create checks...
	I0401 18:20:08.255492   27284 main.go:141] libmachine: (ha-293078) Calling .PreCreateCheck
	I0401 18:20:08.255778   27284 main.go:141] libmachine: (ha-293078) Calling .GetConfigRaw
	I0401 18:20:08.256108   27284 main.go:141] libmachine: Creating machine...
	I0401 18:20:08.256122   27284 main.go:141] libmachine: (ha-293078) Calling .Create
	I0401 18:20:08.256238   27284 main.go:141] libmachine: (ha-293078) Creating KVM machine...
	I0401 18:20:08.257432   27284 main.go:141] libmachine: (ha-293078) DBG | found existing default KVM network
	I0401 18:20:08.258083   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:08.257965   27307 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d980}
	I0401 18:20:08.258106   27284 main.go:141] libmachine: (ha-293078) DBG | created network xml: 
	I0401 18:20:08.258115   27284 main.go:141] libmachine: (ha-293078) DBG | <network>
	I0401 18:20:08.258122   27284 main.go:141] libmachine: (ha-293078) DBG |   <name>mk-ha-293078</name>
	I0401 18:20:08.258130   27284 main.go:141] libmachine: (ha-293078) DBG |   <dns enable='no'/>
	I0401 18:20:08.258135   27284 main.go:141] libmachine: (ha-293078) DBG |   
	I0401 18:20:08.258143   27284 main.go:141] libmachine: (ha-293078) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0401 18:20:08.258150   27284 main.go:141] libmachine: (ha-293078) DBG |     <dhcp>
	I0401 18:20:08.258164   27284 main.go:141] libmachine: (ha-293078) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0401 18:20:08.258174   27284 main.go:141] libmachine: (ha-293078) DBG |     </dhcp>
	I0401 18:20:08.258187   27284 main.go:141] libmachine: (ha-293078) DBG |   </ip>
	I0401 18:20:08.258196   27284 main.go:141] libmachine: (ha-293078) DBG |   
	I0401 18:20:08.258202   27284 main.go:141] libmachine: (ha-293078) DBG | </network>
	I0401 18:20:08.258210   27284 main.go:141] libmachine: (ha-293078) DBG | 
	I0401 18:20:08.262992   27284 main.go:141] libmachine: (ha-293078) DBG | trying to create private KVM network mk-ha-293078 192.168.39.0/24...
	I0401 18:20:08.323174   27284 main.go:141] libmachine: (ha-293078) DBG | private KVM network mk-ha-293078 192.168.39.0/24 created
	I0401 18:20:08.323209   27284 main.go:141] libmachine: (ha-293078) Setting up store path in /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078 ...
	I0401 18:20:08.323224   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:08.323141   27307 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:20:08.323244   27284 main.go:141] libmachine: (ha-293078) Building disk image from file:///home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso
	I0401 18:20:08.323281   27284 main.go:141] libmachine: (ha-293078) Downloading /home/jenkins/minikube-integration/18233-10493/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0401 18:20:08.545463   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:08.545359   27307 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa...
	I0401 18:20:08.619955   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:08.619862   27307 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/ha-293078.rawdisk...
	I0401 18:20:08.619974   27284 main.go:141] libmachine: (ha-293078) DBG | Writing magic tar header
	I0401 18:20:08.619985   27284 main.go:141] libmachine: (ha-293078) DBG | Writing SSH key tar header
	I0401 18:20:08.620173   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:08.620120   27307 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078 ...
	I0401 18:20:08.620277   27284 main.go:141] libmachine: (ha-293078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078
	I0401 18:20:08.620308   27284 main.go:141] libmachine: (ha-293078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube/machines
	I0401 18:20:08.620325   27284 main.go:141] libmachine: (ha-293078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:20:08.620338   27284 main.go:141] libmachine: (ha-293078) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078 (perms=drwx------)
	I0401 18:20:08.620352   27284 main.go:141] libmachine: (ha-293078) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube/machines (perms=drwxr-xr-x)
	I0401 18:20:08.620363   27284 main.go:141] libmachine: (ha-293078) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube (perms=drwxr-xr-x)
	I0401 18:20:08.620380   27284 main.go:141] libmachine: (ha-293078) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493 (perms=drwxrwxr-x)
	I0401 18:20:08.620392   27284 main.go:141] libmachine: (ha-293078) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0401 18:20:08.620418   27284 main.go:141] libmachine: (ha-293078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493
	I0401 18:20:08.620436   27284 main.go:141] libmachine: (ha-293078) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0401 18:20:08.620448   27284 main.go:141] libmachine: (ha-293078) DBG | Checking permissions on dir: /home/jenkins
	I0401 18:20:08.620458   27284 main.go:141] libmachine: (ha-293078) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0401 18:20:08.620470   27284 main.go:141] libmachine: (ha-293078) DBG | Checking permissions on dir: /home
	I0401 18:20:08.620484   27284 main.go:141] libmachine: (ha-293078) DBG | Skipping /home - not owner
	I0401 18:20:08.620494   27284 main.go:141] libmachine: (ha-293078) Creating domain...
	I0401 18:20:08.621488   27284 main.go:141] libmachine: (ha-293078) define libvirt domain using xml: 
	I0401 18:20:08.621518   27284 main.go:141] libmachine: (ha-293078) <domain type='kvm'>
	I0401 18:20:08.621529   27284 main.go:141] libmachine: (ha-293078)   <name>ha-293078</name>
	I0401 18:20:08.621542   27284 main.go:141] libmachine: (ha-293078)   <memory unit='MiB'>2200</memory>
	I0401 18:20:08.621554   27284 main.go:141] libmachine: (ha-293078)   <vcpu>2</vcpu>
	I0401 18:20:08.621561   27284 main.go:141] libmachine: (ha-293078)   <features>
	I0401 18:20:08.621569   27284 main.go:141] libmachine: (ha-293078)     <acpi/>
	I0401 18:20:08.621578   27284 main.go:141] libmachine: (ha-293078)     <apic/>
	I0401 18:20:08.621587   27284 main.go:141] libmachine: (ha-293078)     <pae/>
	I0401 18:20:08.621596   27284 main.go:141] libmachine: (ha-293078)     
	I0401 18:20:08.621606   27284 main.go:141] libmachine: (ha-293078)   </features>
	I0401 18:20:08.621616   27284 main.go:141] libmachine: (ha-293078)   <cpu mode='host-passthrough'>
	I0401 18:20:08.621639   27284 main.go:141] libmachine: (ha-293078)   
	I0401 18:20:08.621677   27284 main.go:141] libmachine: (ha-293078)   </cpu>
	I0401 18:20:08.621687   27284 main.go:141] libmachine: (ha-293078)   <os>
	I0401 18:20:08.621695   27284 main.go:141] libmachine: (ha-293078)     <type>hvm</type>
	I0401 18:20:08.621703   27284 main.go:141] libmachine: (ha-293078)     <boot dev='cdrom'/>
	I0401 18:20:08.621710   27284 main.go:141] libmachine: (ha-293078)     <boot dev='hd'/>
	I0401 18:20:08.621718   27284 main.go:141] libmachine: (ha-293078)     <bootmenu enable='no'/>
	I0401 18:20:08.621723   27284 main.go:141] libmachine: (ha-293078)   </os>
	I0401 18:20:08.621729   27284 main.go:141] libmachine: (ha-293078)   <devices>
	I0401 18:20:08.621734   27284 main.go:141] libmachine: (ha-293078)     <disk type='file' device='cdrom'>
	I0401 18:20:08.621745   27284 main.go:141] libmachine: (ha-293078)       <source file='/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/boot2docker.iso'/>
	I0401 18:20:08.621750   27284 main.go:141] libmachine: (ha-293078)       <target dev='hdc' bus='scsi'/>
	I0401 18:20:08.621774   27284 main.go:141] libmachine: (ha-293078)       <readonly/>
	I0401 18:20:08.621800   27284 main.go:141] libmachine: (ha-293078)     </disk>
	I0401 18:20:08.621822   27284 main.go:141] libmachine: (ha-293078)     <disk type='file' device='disk'>
	I0401 18:20:08.621833   27284 main.go:141] libmachine: (ha-293078)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0401 18:20:08.621847   27284 main.go:141] libmachine: (ha-293078)       <source file='/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/ha-293078.rawdisk'/>
	I0401 18:20:08.621855   27284 main.go:141] libmachine: (ha-293078)       <target dev='hda' bus='virtio'/>
	I0401 18:20:08.621863   27284 main.go:141] libmachine: (ha-293078)     </disk>
	I0401 18:20:08.621876   27284 main.go:141] libmachine: (ha-293078)     <interface type='network'>
	I0401 18:20:08.621908   27284 main.go:141] libmachine: (ha-293078)       <source network='mk-ha-293078'/>
	I0401 18:20:08.621933   27284 main.go:141] libmachine: (ha-293078)       <model type='virtio'/>
	I0401 18:20:08.621948   27284 main.go:141] libmachine: (ha-293078)     </interface>
	I0401 18:20:08.621962   27284 main.go:141] libmachine: (ha-293078)     <interface type='network'>
	I0401 18:20:08.621977   27284 main.go:141] libmachine: (ha-293078)       <source network='default'/>
	I0401 18:20:08.621990   27284 main.go:141] libmachine: (ha-293078)       <model type='virtio'/>
	I0401 18:20:08.622005   27284 main.go:141] libmachine: (ha-293078)     </interface>
	I0401 18:20:08.622021   27284 main.go:141] libmachine: (ha-293078)     <serial type='pty'>
	I0401 18:20:08.622035   27284 main.go:141] libmachine: (ha-293078)       <target port='0'/>
	I0401 18:20:08.622049   27284 main.go:141] libmachine: (ha-293078)     </serial>
	I0401 18:20:08.622063   27284 main.go:141] libmachine: (ha-293078)     <console type='pty'>
	I0401 18:20:08.622074   27284 main.go:141] libmachine: (ha-293078)       <target type='serial' port='0'/>
	I0401 18:20:08.622094   27284 main.go:141] libmachine: (ha-293078)     </console>
	I0401 18:20:08.622115   27284 main.go:141] libmachine: (ha-293078)     <rng model='virtio'>
	I0401 18:20:08.622134   27284 main.go:141] libmachine: (ha-293078)       <backend model='random'>/dev/random</backend>
	I0401 18:20:08.622155   27284 main.go:141] libmachine: (ha-293078)     </rng>
	I0401 18:20:08.622173   27284 main.go:141] libmachine: (ha-293078)     
	I0401 18:20:08.622185   27284 main.go:141] libmachine: (ha-293078)     
	I0401 18:20:08.622191   27284 main.go:141] libmachine: (ha-293078)   </devices>
	I0401 18:20:08.622199   27284 main.go:141] libmachine: (ha-293078) </domain>
	I0401 18:20:08.622208   27284 main.go:141] libmachine: (ha-293078) 
	I0401 18:20:08.626471   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:8a:2d:39 in network default
	I0401 18:20:08.627059   27284 main.go:141] libmachine: (ha-293078) Ensuring networks are active...
	I0401 18:20:08.627100   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:08.627670   27284 main.go:141] libmachine: (ha-293078) Ensuring network default is active
	I0401 18:20:08.628041   27284 main.go:141] libmachine: (ha-293078) Ensuring network mk-ha-293078 is active
	I0401 18:20:08.628575   27284 main.go:141] libmachine: (ha-293078) Getting domain xml...
	I0401 18:20:08.629240   27284 main.go:141] libmachine: (ha-293078) Creating domain...
	I0401 18:20:09.790198   27284 main.go:141] libmachine: (ha-293078) Waiting to get IP...
	I0401 18:20:09.791005   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:09.791393   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:09.791415   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:09.791378   27307 retry.go:31] will retry after 299.095049ms: waiting for machine to come up
	I0401 18:20:10.091780   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:10.092218   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:10.092244   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:10.092172   27307 retry.go:31] will retry after 341.823452ms: waiting for machine to come up
	I0401 18:20:10.435740   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:10.436221   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:10.436263   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:10.436201   27307 retry.go:31] will retry after 412.275855ms: waiting for machine to come up
	I0401 18:20:10.849632   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:10.850052   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:10.850075   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:10.850019   27307 retry.go:31] will retry after 504.08215ms: waiting for machine to come up
	I0401 18:20:11.356728   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:11.357488   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:11.357594   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:11.357446   27307 retry.go:31] will retry after 521.12253ms: waiting for machine to come up
	I0401 18:20:11.880118   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:11.880587   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:11.880607   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:11.880563   27307 retry.go:31] will retry after 840.04722ms: waiting for machine to come up
	I0401 18:20:12.722613   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:12.722961   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:12.723019   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:12.722910   27307 retry.go:31] will retry after 1.165268416s: waiting for machine to come up
	I0401 18:20:13.889819   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:13.890267   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:13.890296   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:13.890213   27307 retry.go:31] will retry after 955.488594ms: waiting for machine to come up
	I0401 18:20:14.847839   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:14.848189   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:14.848212   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:14.848142   27307 retry.go:31] will retry after 1.835094911s: waiting for machine to come up
	I0401 18:20:16.686235   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:16.686609   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:16.686636   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:16.686563   27307 retry.go:31] will retry after 1.705606324s: waiting for machine to come up
	I0401 18:20:18.393239   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:18.393664   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:18.393692   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:18.393591   27307 retry.go:31] will retry after 2.302351777s: waiting for machine to come up
	I0401 18:20:20.697519   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:20.698043   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:20.698070   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:20.697994   27307 retry.go:31] will retry after 2.904641277s: waiting for machine to come up
	I0401 18:20:23.604466   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:23.604767   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:23.604798   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:23.604722   27307 retry.go:31] will retry after 2.947312694s: waiting for machine to come up
	I0401 18:20:26.554688   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:26.555152   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find current IP address of domain ha-293078 in network mk-ha-293078
	I0401 18:20:26.555179   27284 main.go:141] libmachine: (ha-293078) DBG | I0401 18:20:26.555119   27307 retry.go:31] will retry after 3.439592829s: waiting for machine to come up
	I0401 18:20:29.995900   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:29.996358   27284 main.go:141] libmachine: (ha-293078) Found IP for machine: 192.168.39.74
	I0401 18:20:29.996376   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has current primary IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:29.996384   27284 main.go:141] libmachine: (ha-293078) Reserving static IP address...
	I0401 18:20:29.996738   27284 main.go:141] libmachine: (ha-293078) DBG | unable to find host DHCP lease matching {name: "ha-293078", mac: "52:54:00:62:80:20", ip: "192.168.39.74"} in network mk-ha-293078
	I0401 18:20:30.067441   27284 main.go:141] libmachine: (ha-293078) DBG | Getting to WaitForSSH function...
	I0401 18:20:30.067471   27284 main.go:141] libmachine: (ha-293078) Reserved static IP address: 192.168.39.74
	I0401 18:20:30.067483   27284 main.go:141] libmachine: (ha-293078) Waiting for SSH to be available...
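
The retry cadence above (starting near 0.3 s and growing to roughly 3.4 s, with jitter) is the usual pattern for polling libvirt's DHCP leases until the guest reports an address. A minimal sketch of that idea, assuming a hypothetical lookupIP helper rather than minikube's actual retry code:

// Hedged sketch: poll a lookup function with growing, jittered delays,
// roughly matching the "will retry after ..." intervals in the log above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying libvirt's DHCP leases; it is a stub here.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	base := 300 * time.Millisecond
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Grow the delay each attempt and add jitter so repeated lease
		// queries do not hammer libvirt in lockstep.
		delay := base*time.Duration(attempt+1) + time.Duration(rand.Int63n(int64(200*time.Millisecond)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	if ip, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}
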
	I0401 18:20:30.070189   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.070712   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:minikube Clientid:01:52:54:00:62:80:20}
	I0401 18:20:30.070750   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.070867   27284 main.go:141] libmachine: (ha-293078) DBG | Using SSH client type: external
	I0401 18:20:30.070886   27284 main.go:141] libmachine: (ha-293078) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa (-rw-------)
	I0401 18:20:30.070918   27284 main.go:141] libmachine: (ha-293078) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.74 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 18:20:30.070927   27284 main.go:141] libmachine: (ha-293078) DBG | About to run SSH command:
	I0401 18:20:30.070951   27284 main.go:141] libmachine: (ha-293078) DBG | exit 0
	I0401 18:20:30.197899   27284 main.go:141] libmachine: (ha-293078) DBG | SSH cmd err, output: <nil>: 
	I0401 18:20:30.198132   27284 main.go:141] libmachine: (ha-293078) KVM machine creation complete!
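
The `exit 0` probe that completes machine creation above is run through an external ssh client with host-key checking disabled and the machine's generated key (see the option list logged before it). A hedged sketch of the same probe using os/exec; the key path and IP below are placeholders, not values from a real environment:

// Run `ssh ... docker@<ip> exit 0` and treat a zero exit status as
// "SSH is available". exec.Command never shell-expands, so the option
// list is passed to ssh verbatim.
package main

import (
	"fmt"
	"os/exec"
)

func sshReady(ip, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0", // the probe command itself
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	fmt.Println("ssh ready:", sshReady("192.168.39.74", "/path/to/id_rsa"))
}
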
	I0401 18:20:30.198461   27284 main.go:141] libmachine: (ha-293078) Calling .GetConfigRaw
	I0401 18:20:30.199022   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:20:30.199228   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:20:30.199391   27284 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0401 18:20:30.199407   27284 main.go:141] libmachine: (ha-293078) Calling .GetState
	I0401 18:20:30.200977   27284 main.go:141] libmachine: Detecting operating system of created instance...
	I0401 18:20:30.200991   27284 main.go:141] libmachine: Waiting for SSH to be available...
	I0401 18:20:30.200996   27284 main.go:141] libmachine: Getting to WaitForSSH function...
	I0401 18:20:30.201002   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:20:30.203360   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.203700   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:30.203721   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.203841   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:20:30.204042   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:30.204204   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:30.204362   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:20:30.204568   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:20:30.204761   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0401 18:20:30.204781   27284 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0401 18:20:30.309268   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 18:20:30.309293   27284 main.go:141] libmachine: Detecting the provisioner...
	I0401 18:20:30.309300   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:20:30.312064   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.312427   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:30.312450   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.312601   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:20:30.312781   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:30.312920   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:30.313078   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:20:30.313242   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:20:30.313393   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0401 18:20:30.313402   27284 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0401 18:20:30.418580   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0401 18:20:30.418654   27284 main.go:141] libmachine: found compatible host: buildroot
	I0401 18:20:30.418663   27284 main.go:141] libmachine: Provisioning with buildroot...
	I0401 18:20:30.418680   27284 main.go:141] libmachine: (ha-293078) Calling .GetMachineName
	I0401 18:20:30.418923   27284 buildroot.go:166] provisioning hostname "ha-293078"
	I0401 18:20:30.418946   27284 main.go:141] libmachine: (ha-293078) Calling .GetMachineName
	I0401 18:20:30.419138   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:20:30.421591   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.421893   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:30.421919   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.422014   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:20:30.422199   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:30.422366   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:30.422504   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:20:30.422662   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:20:30.422818   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0401 18:20:30.422829   27284 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-293078 && echo "ha-293078" | sudo tee /etc/hostname
	I0401 18:20:30.545449   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-293078
	
	I0401 18:20:30.545475   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:20:30.549246   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.549678   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:30.549728   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.549960   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:20:30.550142   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:30.550292   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:30.550466   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:20:30.550639   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:20:30.550873   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0401 18:20:30.550896   27284 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-293078' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-293078/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-293078' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 18:20:30.672682   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 18:20:30.672709   27284 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 18:20:30.672747   27284 buildroot.go:174] setting up certificates
	I0401 18:20:30.672763   27284 provision.go:84] configureAuth start
	I0401 18:20:30.672774   27284 main.go:141] libmachine: (ha-293078) Calling .GetMachineName
	I0401 18:20:30.673066   27284 main.go:141] libmachine: (ha-293078) Calling .GetIP
	I0401 18:20:30.675645   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.675993   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:30.676016   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.676132   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:20:30.678264   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.678585   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:30.678612   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.678751   27284 provision.go:143] copyHostCerts
	I0401 18:20:30.678781   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 18:20:30.678811   27284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 18:20:30.678823   27284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 18:20:30.678897   27284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 18:20:30.679000   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 18:20:30.679024   27284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 18:20:30.679030   27284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 18:20:30.679066   27284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 18:20:30.679136   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 18:20:30.679159   27284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 18:20:30.679169   27284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 18:20:30.679205   27284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 18:20:30.679282   27284 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.ha-293078 san=[127.0.0.1 192.168.39.74 ha-293078 localhost minikube]
	I0401 18:20:30.820542   27284 provision.go:177] copyRemoteCerts
	I0401 18:20:30.820604   27284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 18:20:30.820625   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:20:30.823170   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.823407   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:30.823440   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.823579   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:20:30.823747   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:30.823897   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:20:30.824064   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:20:30.908150   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0401 18:20:30.908240   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 18:20:30.935046   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0401 18:20:30.935124   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0401 18:20:30.961458   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0401 18:20:30.961511   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 18:20:30.987687   27284 provision.go:87] duration metric: took 314.911851ms to configureAuth
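
The server certificate generated in the configureAuth step above carries the machine IP plus the names ha-293078, localhost and minikube as SANs, and is then copied to /etc/docker alongside the CA. A short crypto/x509 sketch illustrating that SAN layout; note it self-signs to stay compact, whereas the real flow signs with the minikube CA key:

// Hedged sketch of "generating server cert ... san=[...]": create an RSA
// key and a certificate whose SANs cover the machine IP and host names.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-293078"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs roughly matching the log: IPs plus host names.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.74")},
		DNSNames:    []string{"ha-293078", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
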
	I0401 18:20:30.987710   27284 buildroot.go:189] setting minikube options for container-runtime
	I0401 18:20:30.987847   27284 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:20:30.987935   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:20:30.990671   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.990936   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:30.990965   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:30.991191   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:20:30.991368   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:30.991515   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:30.991639   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:20:30.991778   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:20:30.991936   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0401 18:20:30.991950   27284 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 18:20:31.264914   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 18:20:31.264936   27284 main.go:141] libmachine: Checking connection to Docker...
	I0401 18:20:31.264943   27284 main.go:141] libmachine: (ha-293078) Calling .GetURL
	I0401 18:20:31.266161   27284 main.go:141] libmachine: (ha-293078) DBG | Using libvirt version 6000000
	I0401 18:20:31.268364   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.268834   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:31.268860   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.269027   27284 main.go:141] libmachine: Docker is up and running!
	I0401 18:20:31.269042   27284 main.go:141] libmachine: Reticulating splines...
	I0401 18:20:31.269059   27284 client.go:171] duration metric: took 23.013742748s to LocalClient.Create
	I0401 18:20:31.269081   27284 start.go:167] duration metric: took 23.013815219s to libmachine.API.Create "ha-293078"
	I0401 18:20:31.269090   27284 start.go:293] postStartSetup for "ha-293078" (driver="kvm2")
	I0401 18:20:31.269099   27284 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 18:20:31.269114   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:20:31.269330   27284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 18:20:31.269351   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:20:31.271284   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.271575   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:31.271603   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.271717   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:20:31.271906   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:31.272071   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:20:31.272191   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:20:31.356469   27284 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 18:20:31.361087   27284 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 18:20:31.361105   27284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 18:20:31.361161   27284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 18:20:31.361238   27284 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 18:20:31.361247   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> /etc/ssl/certs/177512.pem
	I0401 18:20:31.361351   27284 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 18:20:31.370978   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 18:20:31.396855   27284 start.go:296] duration metric: took 127.754309ms for postStartSetup
	I0401 18:20:31.396900   27284 main.go:141] libmachine: (ha-293078) Calling .GetConfigRaw
	I0401 18:20:31.397475   27284 main.go:141] libmachine: (ha-293078) Calling .GetIP
	I0401 18:20:31.400335   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.400666   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:31.400687   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.400904   27284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/config.json ...
	I0401 18:20:31.401088   27284 start.go:128] duration metric: took 23.162760686s to createHost
	I0401 18:20:31.401111   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:20:31.403095   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.403333   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:31.403356   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.403624   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:20:31.403853   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:31.404031   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:31.404163   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:20:31.404354   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:20:31.404507   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0401 18:20:31.404525   27284 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 18:20:31.510860   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711995631.480746331
	
	I0401 18:20:31.510887   27284 fix.go:216] guest clock: 1711995631.480746331
	I0401 18:20:31.510898   27284 fix.go:229] Guest: 2024-04-01 18:20:31.480746331 +0000 UTC Remote: 2024-04-01 18:20:31.401099618 +0000 UTC m=+23.276094302 (delta=79.646713ms)
	I0401 18:20:31.510923   27284 fix.go:200] guest clock delta is within tolerance: 79.646713ms
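
The guest/host clock comparison above parses the seconds.nanoseconds string returned by `date +%s.%N` on the guest and checks the delta against a tolerance. A small sketch of that parsing, with an assumed 2-second threshold for illustration (the actual tolerance value is not shown in this log):

// Hedged sketch: turn "1711995631.480746331" into a time.Time and compare
// it against the local clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1711995631.480746331")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance)
}
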
	I0401 18:20:31.510929   27284 start.go:83] releasing machines lock for "ha-293078", held for 23.2726754s
	I0401 18:20:31.510965   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:20:31.511213   27284 main.go:141] libmachine: (ha-293078) Calling .GetIP
	I0401 18:20:31.513686   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.514016   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:31.514045   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.514193   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:20:31.514662   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:20:31.514842   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:20:31.514931   27284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 18:20:31.514968   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:20:31.515029   27284 ssh_runner.go:195] Run: cat /version.json
	I0401 18:20:31.515058   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:20:31.517418   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.517747   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.517809   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:31.517843   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.518162   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:20:31.518359   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:31.518367   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:31.518388   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:31.518441   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:20:31.518539   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:20:31.518563   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:20:31.518659   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:20:31.519038   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:20:31.519191   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:20:31.621743   27284 ssh_runner.go:195] Run: systemctl --version
	I0401 18:20:31.628066   27284 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 18:20:31.794252   27284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 18:20:31.801897   27284 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 18:20:31.801953   27284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 18:20:31.819968   27284 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 18:20:31.819996   27284 start.go:494] detecting cgroup driver to use...
	I0401 18:20:31.820050   27284 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 18:20:31.837349   27284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 18:20:31.851827   27284 docker.go:217] disabling cri-docker service (if available) ...
	I0401 18:20:31.851887   27284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 18:20:31.866122   27284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 18:20:31.880472   27284 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 18:20:32.000719   27284 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 18:20:32.173281   27284 docker.go:233] disabling docker service ...
	I0401 18:20:32.173362   27284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 18:20:32.188338   27284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 18:20:32.201976   27284 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 18:20:32.341030   27284 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 18:20:32.481681   27284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 18:20:32.497291   27284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 18:20:32.518625   27284 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 18:20:32.518683   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:20:32.529562   27284 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 18:20:32.529627   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:20:32.541685   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:20:32.553418   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:20:32.565639   27284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 18:20:32.578097   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:20:32.590452   27284 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:20:32.609793   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:20:32.622253   27284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 18:20:32.633090   27284 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 18:20:32.633147   27284 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 18:20:32.649008   27284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 18:20:32.661173   27284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:20:32.770760   27284 ssh_runner.go:195] Run: sudo systemctl restart crio
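
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, sysctl allow-list), loads br_netfilter, enables IP forwarding, and then restarts CRI-O. A dry-run sketch that applies such a command sequence through an injected runner and stops at the first failure; the runner here only prints the commands, while the real code executes them over SSH on the guest:

// Hedged sketch of a sequential remote-config pass, not minikube's code.
package main

import "fmt"

type runner func(cmd string) error

func configureCRIO(run runner) error {
	cmds := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		if err := run(c); err != nil {
			return fmt.Errorf("%q failed: %w", c, err)
		}
	}
	return nil
}

func main() {
	dryRun := func(cmd string) error { fmt.Println("would run:", cmd); return nil }
	if err := configureCRIO(dryRun); err != nil {
		fmt.Println(err)
	}
}
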
	I0401 18:20:32.915987   27284 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 18:20:32.916053   27284 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 18:20:32.921542   27284 start.go:562] Will wait 60s for crictl version
	I0401 18:20:32.921601   27284 ssh_runner.go:195] Run: which crictl
	I0401 18:20:32.925615   27284 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 18:20:32.965401   27284 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 18:20:32.965472   27284 ssh_runner.go:195] Run: crio --version
	I0401 18:20:32.997166   27284 ssh_runner.go:195] Run: crio --version
	I0401 18:20:33.030321   27284 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 18:20:33.031852   27284 main.go:141] libmachine: (ha-293078) Calling .GetIP
	I0401 18:20:33.034539   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:33.034939   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:20:33.034968   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:20:33.035159   27284 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 18:20:33.039978   27284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 18:20:33.054194   27284 kubeadm.go:877] updating cluster {Name:ha-293078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 18:20:33.054315   27284 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 18:20:33.054393   27284 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 18:20:33.089715   27284 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0401 18:20:33.089786   27284 ssh_runner.go:195] Run: which lz4
	I0401 18:20:33.094230   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0401 18:20:33.094340   27284 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 18:20:33.099107   27284 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 18:20:33.099135   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0401 18:20:34.709169   27284 crio.go:462] duration metric: took 1.614860102s to copy over tarball
	I0401 18:20:34.709248   27284 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 18:20:37.121760   27284 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.412487353s)
	I0401 18:20:37.121784   27284 crio.go:469] duration metric: took 2.412587851s to extract the tarball
	I0401 18:20:37.121793   27284 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 18:20:37.160490   27284 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 18:20:37.205846   27284 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 18:20:37.205872   27284 cache_images.go:84] Images are preloaded, skipping loading
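
Because the stat check found no /preloaded.tar.lz4 on the guest, the ~403 MB preload was copied over and unpacked into /var with xattrs preserved, exactly as the tar invocation above shows, after which `crictl images` confirms everything is present. A hedged local sketch of that extraction step (paths are illustrative):

// Extract the preloaded image tarball under /var, keeping the
// security.capability xattrs and decompressing with lz4, mirroring the
// tar flags in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func extractPreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload not found: %w", err)
	}
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}
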
	I0401 18:20:37.205880   27284 kubeadm.go:928] updating node { 192.168.39.74 8443 v1.29.3 crio true true} ...
	I0401 18:20:37.206012   27284 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-293078 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.74
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 18:20:37.206098   27284 ssh_runner.go:195] Run: crio config
	I0401 18:20:37.260913   27284 cni.go:84] Creating CNI manager for ""
	I0401 18:20:37.260931   27284 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0401 18:20:37.260940   27284 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 18:20:37.260958   27284 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.74 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-293078 NodeName:ha-293078 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.74"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.74 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 18:20:37.261069   27284 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.74
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-293078"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.74
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.74"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
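The kubeadm config above is rendered from the per-node options listed at kubeadm.go:181 (advertise address, node name, CRI socket, extra args, and so on). A minimal text/template sketch of how such a document can be produced from those parameters; minikube's real template carries many more sections than shown here:

// Hedged sketch: render a small InitConfiguration from node parameters.
package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.Name}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	params := struct {
		Name, NodeIP string
		Port         int
	}{Name: "ha-293078", NodeIP: "192.168.39.74", Port: 8443}
	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
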
	I0401 18:20:37.261092   27284 kube-vip.go:111] generating kube-vip config ...
	I0401 18:20:37.261129   27284 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0401 18:20:37.280880   27284 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0401 18:20:37.280977   27284 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0401 18:20:37.281039   27284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 18:20:37.291683   27284 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 18:20:37.291743   27284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0401 18:20:37.302077   27284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0401 18:20:37.320474   27284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 18:20:37.338780   27284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0401 18:20:37.356836   27284 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
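
The `scp memory --> <path>` lines above copy generated content (the kubelet drop-in, kubeadm.yaml.new, and the kube-vip static-pod manifest) straight from memory onto the guest. One way to do that is sketched below, piping the bytes into `sudo tee` over ssh; this is an illustration of the idea, not the actual ssh_runner implementation, and the key path is a placeholder:

// Stream an in-memory buffer to a root-owned file on the guest.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func copyMemory(ip, keyPath, dst string, data []byte) error {
	remote := fmt.Sprintf("sudo mkdir -p $(dirname %s) && sudo tee %s >/dev/null", dst, dst)
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath, "docker@"+ip, remote)
	cmd.Stdin = bytes.NewReader(data)
	return cmd.Run()
}

func main() {
	manifest := []byte("apiVersion: v1\nkind: Pod\n...") // e.g. the kube-vip.yaml body
	if err := copyMemory("192.168.39.74", "/path/to/id_rsa", "/etc/kubernetes/manifests/kube-vip.yaml", manifest); err != nil {
		fmt.Println("copy failed:", err)
	}
}
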
	I0401 18:20:37.375090   27284 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0401 18:20:37.379879   27284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 18:20:37.392854   27284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:20:37.533762   27284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 18:20:37.553364   27284 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078 for IP: 192.168.39.74
	I0401 18:20:37.553396   27284 certs.go:194] generating shared ca certs ...
	I0401 18:20:37.553415   27284 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:20:37.553589   27284 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 18:20:37.553667   27284 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 18:20:37.553683   27284 certs.go:256] generating profile certs ...
	I0401 18:20:37.553748   27284 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.key
	I0401 18:20:37.553766   27284 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.crt with IP's: []
	I0401 18:20:37.623123   27284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.crt ...
	I0401 18:20:37.623150   27284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.crt: {Name:mka69a75b279d67cb0f822de057a95d603fed36f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:20:37.623340   27284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.key ...
	I0401 18:20:37.623355   27284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.key: {Name:mkfed0a0336184144d67564578cf1b0894b7f875 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:20:37.623454   27284 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.23c0cf34
	I0401 18:20:37.623478   27284 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.23c0cf34 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.74 192.168.39.254]
	I0401 18:20:37.776964   27284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.23c0cf34 ...
	I0401 18:20:37.776991   27284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.23c0cf34: {Name:mkae6087a8d97af53f8a9b350489b78cf9f08d14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:20:37.777174   27284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.23c0cf34 ...
	I0401 18:20:37.777189   27284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.23c0cf34: {Name:mk9782ed8659a69b7d748f796cc45cebb965f23e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:20:37.777297   27284 certs.go:381] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.23c0cf34 -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt
	I0401 18:20:37.777387   27284 certs.go:385] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.23c0cf34 -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key
	I0401 18:20:37.777446   27284 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key
	I0401 18:20:37.777462   27284 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.crt with IP's: []
	I0401 18:20:37.912136   27284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.crt ...
	I0401 18:20:37.912165   27284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.crt: {Name:mkdf191f1c09913103ca9c0cb067c7122be9de80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:20:37.912345   27284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key ...
	I0401 18:20:37.912359   27284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key: {Name:mk1a7ee05d4a02c67f6cc33ad844664c3e24362a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:20:37.912453   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0401 18:20:37.912472   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0401 18:20:37.912483   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0401 18:20:37.912496   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0401 18:20:37.912508   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0401 18:20:37.912519   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0401 18:20:37.912530   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0401 18:20:37.912542   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0401 18:20:37.912586   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 18:20:37.912617   27284 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 18:20:37.912626   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 18:20:37.912647   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 18:20:37.912668   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 18:20:37.912689   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 18:20:37.912729   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 18:20:37.912754   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> /usr/share/ca-certificates/177512.pem
	I0401 18:20:37.912768   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:20:37.912779   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem -> /usr/share/ca-certificates/17751.pem
	I0401 18:20:37.913265   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 18:20:37.952672   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 18:20:37.982092   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 18:20:38.009743   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 18:20:38.037543   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 18:20:38.064945   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 18:20:38.095389   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 18:20:38.125756   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 18:20:38.156175   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 18:20:38.184996   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 18:20:38.241033   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 18:20:38.274092   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 18:20:38.296076   27284 ssh_runner.go:195] Run: openssl version
	I0401 18:20:38.303065   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 18:20:38.315888   27284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 18:20:38.321232   27284 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 18:20:38.321281   27284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 18:20:38.327854   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 18:20:38.345364   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 18:20:38.368179   27284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:20:38.375060   27284 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:20:38.375124   27284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:20:38.384472   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 18:20:38.401237   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 18:20:38.419543   27284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 18:20:38.426032   27284 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 18:20:38.426111   27284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 18:20:38.432837   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 18:20:38.445342   27284 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 18:20:38.450196   27284 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 18:20:38.450243   27284 kubeadm.go:391] StartCluster: {Name:ha-293078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:20:38.450327   27284 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 18:20:38.450397   27284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 18:20:38.497189   27284 cri.go:89] found id: ""
	I0401 18:20:38.497259   27284 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 18:20:38.508245   27284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 18:20:38.518589   27284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 18:20:38.528901   27284 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 18:20:38.528946   27284 kubeadm.go:156] found existing configuration files:
	
	I0401 18:20:38.528991   27284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 18:20:38.538806   27284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 18:20:38.538863   27284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 18:20:38.549472   27284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 18:20:38.559440   27284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 18:20:38.559498   27284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 18:20:38.569753   27284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 18:20:38.579455   27284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 18:20:38.579501   27284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 18:20:38.589684   27284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 18:20:38.599478   27284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 18:20:38.599535   27284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 18:20:38.609636   27284 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 18:20:38.852092   27284 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 18:20:49.908270   27284 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 18:20:49.908351   27284 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 18:20:49.908439   27284 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 18:20:49.908556   27284 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 18:20:49.908694   27284 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 18:20:49.908792   27284 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 18:20:49.910380   27284 out.go:204]   - Generating certificates and keys ...
	I0401 18:20:49.910459   27284 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 18:20:49.910557   27284 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 18:20:49.910675   27284 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 18:20:49.910763   27284 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0401 18:20:49.910865   27284 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0401 18:20:49.910954   27284 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0401 18:20:49.911031   27284 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0401 18:20:49.911168   27284 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-293078 localhost] and IPs [192.168.39.74 127.0.0.1 ::1]
	I0401 18:20:49.911262   27284 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0401 18:20:49.911422   27284 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-293078 localhost] and IPs [192.168.39.74 127.0.0.1 ::1]
	I0401 18:20:49.911521   27284 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 18:20:49.911606   27284 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 18:20:49.911688   27284 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0401 18:20:49.911760   27284 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 18:20:49.911834   27284 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 18:20:49.911917   27284 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 18:20:49.911992   27284 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 18:20:49.912090   27284 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 18:20:49.912168   27284 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 18:20:49.912279   27284 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 18:20:49.912370   27284 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 18:20:49.913926   27284 out.go:204]   - Booting up control plane ...
	I0401 18:20:49.914007   27284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 18:20:49.914071   27284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 18:20:49.914165   27284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 18:20:49.914315   27284 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 18:20:49.914433   27284 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 18:20:49.914490   27284 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 18:20:49.914681   27284 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 18:20:49.914801   27284 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.585874 seconds
	I0401 18:20:49.914935   27284 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 18:20:49.915098   27284 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 18:20:49.915181   27284 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 18:20:49.915332   27284 kubeadm.go:309] [mark-control-plane] Marking the node ha-293078 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 18:20:49.915394   27284 kubeadm.go:309] [bootstrap-token] Using token: 4btpo1.kjs6l4hetnoxsot3
	I0401 18:20:49.916801   27284 out.go:204]   - Configuring RBAC rules ...
	I0401 18:20:49.916907   27284 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 18:20:49.916976   27284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 18:20:49.917160   27284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 18:20:49.917329   27284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 18:20:49.917466   27284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 18:20:49.917550   27284 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 18:20:49.917663   27284 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 18:20:49.917710   27284 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 18:20:49.917763   27284 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 18:20:49.917770   27284 kubeadm.go:309] 
	I0401 18:20:49.917823   27284 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 18:20:49.917831   27284 kubeadm.go:309] 
	I0401 18:20:49.917892   27284 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 18:20:49.917898   27284 kubeadm.go:309] 
	I0401 18:20:49.917950   27284 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 18:20:49.918051   27284 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 18:20:49.918136   27284 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 18:20:49.918154   27284 kubeadm.go:309] 
	I0401 18:20:49.918243   27284 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 18:20:49.918256   27284 kubeadm.go:309] 
	I0401 18:20:49.918315   27284 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 18:20:49.918327   27284 kubeadm.go:309] 
	I0401 18:20:49.918397   27284 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 18:20:49.918486   27284 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 18:20:49.918577   27284 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 18:20:49.918592   27284 kubeadm.go:309] 
	I0401 18:20:49.918696   27284 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 18:20:49.918790   27284 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 18:20:49.918799   27284 kubeadm.go:309] 
	I0401 18:20:49.918876   27284 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4btpo1.kjs6l4hetnoxsot3 \
	I0401 18:20:49.918965   27284 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 \
	I0401 18:20:49.918990   27284 kubeadm.go:309] 	--control-plane 
	I0401 18:20:49.918996   27284 kubeadm.go:309] 
	I0401 18:20:49.919071   27284 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 18:20:49.919078   27284 kubeadm.go:309] 
	I0401 18:20:49.919165   27284 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4btpo1.kjs6l4hetnoxsot3 \
	I0401 18:20:49.919294   27284 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 
	I0401 18:20:49.919310   27284 cni.go:84] Creating CNI manager for ""
	I0401 18:20:49.919318   27284 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0401 18:20:49.920989   27284 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 18:20:49.922390   27284 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 18:20:49.931672   27284 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0401 18:20:49.931686   27284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0401 18:20:50.013816   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 18:20:50.385478   27284 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 18:20:50.385572   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:50.385623   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-293078 minikube.k8s.io/updated_at=2024_04_01T18_20_50_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=ha-293078 minikube.k8s.io/primary=true
	I0401 18:20:50.538373   27284 ops.go:34] apiserver oom_adj: -16
	I0401 18:20:50.538723   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:51.038830   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:51.539815   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:52.038888   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:52.539727   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:53.038949   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:53.539222   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:54.039139   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:54.539731   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:55.039569   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:55.539189   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:56.039819   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:56.538840   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:57.039523   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:57.538954   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:58.038853   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:58.539195   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:59.039784   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:20:59.539064   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:21:00.038883   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:21:00.539202   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:21:01.038799   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:21:01.538997   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:21:02.039061   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 18:21:02.166524   27284 kubeadm.go:1107] duration metric: took 11.781001236s to wait for elevateKubeSystemPrivileges
	W0401 18:21:02.166562   27284 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 18:21:02.166571   27284 kubeadm.go:393] duration metric: took 23.716331763s to StartCluster
	I0401 18:21:02.166591   27284 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:21:02.166672   27284 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 18:21:02.167349   27284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:21:02.167565   27284 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 18:21:02.167587   27284 start.go:240] waiting for startup goroutines ...
	I0401 18:21:02.167588   27284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 18:21:02.167603   27284 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 18:21:02.167667   27284 addons.go:69] Setting storage-provisioner=true in profile "ha-293078"
	I0401 18:21:02.167698   27284 addons.go:234] Setting addon storage-provisioner=true in "ha-293078"
	I0401 18:21:02.167700   27284 addons.go:69] Setting default-storageclass=true in profile "ha-293078"
	I0401 18:21:02.167730   27284 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:21:02.167751   27284 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-293078"
	I0401 18:21:02.167840   27284 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:21:02.168166   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:21:02.168174   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:21:02.168212   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:21:02.168356   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:21:02.183507   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40007
	I0401 18:21:02.183506   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I0401 18:21:02.184007   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:21:02.184090   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:21:02.184543   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:21:02.184574   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:21:02.184645   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:21:02.184666   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:21:02.184968   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:21:02.184998   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:21:02.185163   27284 main.go:141] libmachine: (ha-293078) Calling .GetState
	I0401 18:21:02.185508   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:21:02.185550   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:21:02.187166   27284 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 18:21:02.187496   27284 kapi.go:59] client config for ha-293078: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.crt", KeyFile:"/home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.key", CAFile:"/home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5ca00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0401 18:21:02.187933   27284 cert_rotation.go:137] Starting client certificate rotation controller
	I0401 18:21:02.188202   27284 addons.go:234] Setting addon default-storageclass=true in "ha-293078"
	I0401 18:21:02.188243   27284 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:21:02.188599   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:21:02.188636   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:21:02.201087   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43639
	I0401 18:21:02.201616   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:21:02.202193   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:21:02.202221   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:21:02.202587   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:21:02.202778   27284 main.go:141] libmachine: (ha-293078) Calling .GetState
	I0401 18:21:02.204106   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33179
	I0401 18:21:02.204511   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:21:02.204692   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:21:02.206642   27284 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 18:21:02.205112   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:21:02.208416   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:21:02.208530   27284 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 18:21:02.208547   27284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 18:21:02.208565   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:21:02.208802   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:21:02.209666   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:21:02.209718   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:21:02.211851   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:21:02.212291   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:21:02.212324   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:21:02.212442   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:21:02.212630   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:21:02.212814   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:21:02.212939   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:21:02.224552   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40335
	I0401 18:21:02.225021   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:21:02.225566   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:21:02.225590   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:21:02.225965   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:21:02.226136   27284 main.go:141] libmachine: (ha-293078) Calling .GetState
	I0401 18:21:02.227678   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:21:02.227968   27284 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 18:21:02.227985   27284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 18:21:02.228001   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:21:02.231262   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:21:02.231707   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:21:02.231740   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:21:02.231876   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:21:02.232054   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:21:02.232202   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:21:02.232331   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:21:02.333022   27284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 18:21:02.438375   27284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 18:21:02.526416   27284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 18:21:02.900392   27284 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0401 18:21:03.336256   27284 main.go:141] libmachine: Making call to close driver server
	I0401 18:21:03.336275   27284 main.go:141] libmachine: (ha-293078) Calling .Close
	I0401 18:21:03.336334   27284 main.go:141] libmachine: Making call to close driver server
	I0401 18:21:03.336349   27284 main.go:141] libmachine: (ha-293078) Calling .Close
	I0401 18:21:03.336595   27284 main.go:141] libmachine: (ha-293078) DBG | Closing plugin on server side
	I0401 18:21:03.336622   27284 main.go:141] libmachine: (ha-293078) DBG | Closing plugin on server side
	I0401 18:21:03.336642   27284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:21:03.336660   27284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:21:03.336670   27284 main.go:141] libmachine: Making call to close driver server
	I0401 18:21:03.336674   27284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:21:03.336677   27284 main.go:141] libmachine: (ha-293078) Calling .Close
	I0401 18:21:03.336690   27284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:21:03.336699   27284 main.go:141] libmachine: Making call to close driver server
	I0401 18:21:03.336706   27284 main.go:141] libmachine: (ha-293078) Calling .Close
	I0401 18:21:03.336925   27284 main.go:141] libmachine: (ha-293078) DBG | Closing plugin on server side
	I0401 18:21:03.336924   27284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:21:03.336945   27284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:21:03.336950   27284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:21:03.336954   27284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:21:03.336998   27284 main.go:141] libmachine: (ha-293078) DBG | Closing plugin on server side
	I0401 18:21:03.337071   27284 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0401 18:21:03.337078   27284 round_trippers.go:469] Request Headers:
	I0401 18:21:03.337088   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:21:03.337095   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:21:03.352729   27284 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0401 18:21:03.353445   27284 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0401 18:21:03.353530   27284 round_trippers.go:469] Request Headers:
	I0401 18:21:03.353551   27284 round_trippers.go:473]     Content-Type: application/json
	I0401 18:21:03.353565   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:21:03.353569   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:21:03.357118   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:21:03.357247   27284 main.go:141] libmachine: Making call to close driver server
	I0401 18:21:03.357258   27284 main.go:141] libmachine: (ha-293078) Calling .Close
	I0401 18:21:03.357509   27284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 18:21:03.357528   27284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 18:21:03.359532   27284 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 18:21:03.361453   27284 addons.go:505] duration metric: took 1.193852719s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 18:21:03.361494   27284 start.go:245] waiting for cluster config update ...
	I0401 18:21:03.361508   27284 start.go:254] writing updated cluster config ...
	I0401 18:21:03.363430   27284 out.go:177] 
	I0401 18:21:03.367209   27284 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:21:03.367283   27284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/config.json ...
	I0401 18:21:03.370001   27284 out.go:177] * Starting "ha-293078-m02" control-plane node in "ha-293078" cluster
	I0401 18:21:03.370993   27284 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 18:21:03.371025   27284 cache.go:56] Caching tarball of preloaded images
	I0401 18:21:03.371147   27284 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 18:21:03.371161   27284 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0401 18:21:03.371222   27284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/config.json ...
	I0401 18:21:03.371398   27284 start.go:360] acquireMachinesLock for ha-293078-m02: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 18:21:03.371440   27284 start.go:364] duration metric: took 24.034µs to acquireMachinesLock for "ha-293078-m02"
	I0401 18:21:03.371458   27284 start.go:93] Provisioning new machine with config: &{Name:ha-293078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 18:21:03.371524   27284 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0401 18:21:03.373092   27284 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0401 18:21:03.373175   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:21:03.373208   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:21:03.387531   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33333
	I0401 18:21:03.387968   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:21:03.388474   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:21:03.388500   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:21:03.388861   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:21:03.389050   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetMachineName
	I0401 18:21:03.389231   27284 main.go:141] libmachine: (ha-293078-m02) Calling .DriverName
	I0401 18:21:03.389396   27284 start.go:159] libmachine.API.Create for "ha-293078" (driver="kvm2")
	I0401 18:21:03.389425   27284 client.go:168] LocalClient.Create starting
	I0401 18:21:03.389454   27284 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem
	I0401 18:21:03.389479   27284 main.go:141] libmachine: Decoding PEM data...
	I0401 18:21:03.389493   27284 main.go:141] libmachine: Parsing certificate...
	I0401 18:21:03.389536   27284 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem
	I0401 18:21:03.389553   27284 main.go:141] libmachine: Decoding PEM data...
	I0401 18:21:03.389565   27284 main.go:141] libmachine: Parsing certificate...
	I0401 18:21:03.389582   27284 main.go:141] libmachine: Running pre-create checks...
	I0401 18:21:03.389594   27284 main.go:141] libmachine: (ha-293078-m02) Calling .PreCreateCheck
	I0401 18:21:03.389809   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetConfigRaw
	I0401 18:21:03.390215   27284 main.go:141] libmachine: Creating machine...
	I0401 18:21:03.390230   27284 main.go:141] libmachine: (ha-293078-m02) Calling .Create
	I0401 18:21:03.390373   27284 main.go:141] libmachine: (ha-293078-m02) Creating KVM machine...
	I0401 18:21:03.391699   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found existing default KVM network
	I0401 18:21:03.391854   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found existing private KVM network mk-ha-293078
	I0401 18:21:03.392014   27284 main.go:141] libmachine: (ha-293078-m02) Setting up store path in /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02 ...
	I0401 18:21:03.392040   27284 main.go:141] libmachine: (ha-293078-m02) Building disk image from file:///home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso
	I0401 18:21:03.392083   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:03.391999   27619 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:21:03.392209   27284 main.go:141] libmachine: (ha-293078-m02) Downloading /home/jenkins/minikube-integration/18233-10493/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0401 18:21:03.619622   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:03.619513   27619 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/id_rsa...
	I0401 18:21:03.702083   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:03.701957   27619 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/ha-293078-m02.rawdisk...
	I0401 18:21:03.702115   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Writing magic tar header
	I0401 18:21:03.702130   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Writing SSH key tar header
	I0401 18:21:03.702148   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:03.702084   27619 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02 ...
	I0401 18:21:03.702242   27284 main.go:141] libmachine: (ha-293078-m02) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02 (perms=drwx------)
	I0401 18:21:03.702271   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02
	I0401 18:21:03.702283   27284 main.go:141] libmachine: (ha-293078-m02) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube/machines (perms=drwxr-xr-x)
	I0401 18:21:03.702297   27284 main.go:141] libmachine: (ha-293078-m02) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube (perms=drwxr-xr-x)
	I0401 18:21:03.702306   27284 main.go:141] libmachine: (ha-293078-m02) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493 (perms=drwxrwxr-x)
	I0401 18:21:03.702326   27284 main.go:141] libmachine: (ha-293078-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0401 18:21:03.702341   27284 main.go:141] libmachine: (ha-293078-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0401 18:21:03.702352   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube/machines
	I0401 18:21:03.702386   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:21:03.702410   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493
	I0401 18:21:03.702417   27284 main.go:141] libmachine: (ha-293078-m02) Creating domain...
	I0401 18:21:03.702427   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0401 18:21:03.702439   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Checking permissions on dir: /home/jenkins
	I0401 18:21:03.702460   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Checking permissions on dir: /home
	I0401 18:21:03.702478   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Skipping /home - not owner
	I0401 18:21:03.703271   27284 main.go:141] libmachine: (ha-293078-m02) define libvirt domain using xml: 
	I0401 18:21:03.703294   27284 main.go:141] libmachine: (ha-293078-m02) <domain type='kvm'>
	I0401 18:21:03.703305   27284 main.go:141] libmachine: (ha-293078-m02)   <name>ha-293078-m02</name>
	I0401 18:21:03.703317   27284 main.go:141] libmachine: (ha-293078-m02)   <memory unit='MiB'>2200</memory>
	I0401 18:21:03.703329   27284 main.go:141] libmachine: (ha-293078-m02)   <vcpu>2</vcpu>
	I0401 18:21:03.703337   27284 main.go:141] libmachine: (ha-293078-m02)   <features>
	I0401 18:21:03.703342   27284 main.go:141] libmachine: (ha-293078-m02)     <acpi/>
	I0401 18:21:03.703349   27284 main.go:141] libmachine: (ha-293078-m02)     <apic/>
	I0401 18:21:03.703354   27284 main.go:141] libmachine: (ha-293078-m02)     <pae/>
	I0401 18:21:03.703360   27284 main.go:141] libmachine: (ha-293078-m02)     
	I0401 18:21:03.703365   27284 main.go:141] libmachine: (ha-293078-m02)   </features>
	I0401 18:21:03.703376   27284 main.go:141] libmachine: (ha-293078-m02)   <cpu mode='host-passthrough'>
	I0401 18:21:03.703383   27284 main.go:141] libmachine: (ha-293078-m02)   
	I0401 18:21:03.703389   27284 main.go:141] libmachine: (ha-293078-m02)   </cpu>
	I0401 18:21:03.703397   27284 main.go:141] libmachine: (ha-293078-m02)   <os>
	I0401 18:21:03.703406   27284 main.go:141] libmachine: (ha-293078-m02)     <type>hvm</type>
	I0401 18:21:03.703413   27284 main.go:141] libmachine: (ha-293078-m02)     <boot dev='cdrom'/>
	I0401 18:21:03.703418   27284 main.go:141] libmachine: (ha-293078-m02)     <boot dev='hd'/>
	I0401 18:21:03.703427   27284 main.go:141] libmachine: (ha-293078-m02)     <bootmenu enable='no'/>
	I0401 18:21:03.703434   27284 main.go:141] libmachine: (ha-293078-m02)   </os>
	I0401 18:21:03.703439   27284 main.go:141] libmachine: (ha-293078-m02)   <devices>
	I0401 18:21:03.703446   27284 main.go:141] libmachine: (ha-293078-m02)     <disk type='file' device='cdrom'>
	I0401 18:21:03.703454   27284 main.go:141] libmachine: (ha-293078-m02)       <source file='/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/boot2docker.iso'/>
	I0401 18:21:03.703462   27284 main.go:141] libmachine: (ha-293078-m02)       <target dev='hdc' bus='scsi'/>
	I0401 18:21:03.703467   27284 main.go:141] libmachine: (ha-293078-m02)       <readonly/>
	I0401 18:21:03.703474   27284 main.go:141] libmachine: (ha-293078-m02)     </disk>
	I0401 18:21:03.703480   27284 main.go:141] libmachine: (ha-293078-m02)     <disk type='file' device='disk'>
	I0401 18:21:03.703489   27284 main.go:141] libmachine: (ha-293078-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0401 18:21:03.703499   27284 main.go:141] libmachine: (ha-293078-m02)       <source file='/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/ha-293078-m02.rawdisk'/>
	I0401 18:21:03.703506   27284 main.go:141] libmachine: (ha-293078-m02)       <target dev='hda' bus='virtio'/>
	I0401 18:21:03.703511   27284 main.go:141] libmachine: (ha-293078-m02)     </disk>
	I0401 18:21:03.703519   27284 main.go:141] libmachine: (ha-293078-m02)     <interface type='network'>
	I0401 18:21:03.703525   27284 main.go:141] libmachine: (ha-293078-m02)       <source network='mk-ha-293078'/>
	I0401 18:21:03.703542   27284 main.go:141] libmachine: (ha-293078-m02)       <model type='virtio'/>
	I0401 18:21:03.703550   27284 main.go:141] libmachine: (ha-293078-m02)     </interface>
	I0401 18:21:03.703557   27284 main.go:141] libmachine: (ha-293078-m02)     <interface type='network'>
	I0401 18:21:03.703586   27284 main.go:141] libmachine: (ha-293078-m02)       <source network='default'/>
	I0401 18:21:03.703613   27284 main.go:141] libmachine: (ha-293078-m02)       <model type='virtio'/>
	I0401 18:21:03.703637   27284 main.go:141] libmachine: (ha-293078-m02)     </interface>
	I0401 18:21:03.703655   27284 main.go:141] libmachine: (ha-293078-m02)     <serial type='pty'>
	I0401 18:21:03.703668   27284 main.go:141] libmachine: (ha-293078-m02)       <target port='0'/>
	I0401 18:21:03.703675   27284 main.go:141] libmachine: (ha-293078-m02)     </serial>
	I0401 18:21:03.703687   27284 main.go:141] libmachine: (ha-293078-m02)     <console type='pty'>
	I0401 18:21:03.703699   27284 main.go:141] libmachine: (ha-293078-m02)       <target type='serial' port='0'/>
	I0401 18:21:03.703710   27284 main.go:141] libmachine: (ha-293078-m02)     </console>
	I0401 18:21:03.703721   27284 main.go:141] libmachine: (ha-293078-m02)     <rng model='virtio'>
	I0401 18:21:03.703736   27284 main.go:141] libmachine: (ha-293078-m02)       <backend model='random'>/dev/random</backend>
	I0401 18:21:03.703748   27284 main.go:141] libmachine: (ha-293078-m02)     </rng>
	I0401 18:21:03.703758   27284 main.go:141] libmachine: (ha-293078-m02)     
	I0401 18:21:03.703762   27284 main.go:141] libmachine: (ha-293078-m02)     
	I0401 18:21:03.703767   27284 main.go:141] libmachine: (ha-293078-m02)   </devices>
	I0401 18:21:03.703773   27284 main.go:141] libmachine: (ha-293078-m02) </domain>
	I0401 18:21:03.703780   27284 main.go:141] libmachine: (ha-293078-m02) 
	I0401 18:21:03.710624   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:a9:04:b3 in network default
	I0401 18:21:03.711193   27284 main.go:141] libmachine: (ha-293078-m02) Ensuring networks are active...
	I0401 18:21:03.711240   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:03.711921   27284 main.go:141] libmachine: (ha-293078-m02) Ensuring network default is active
	I0401 18:21:03.712272   27284 main.go:141] libmachine: (ha-293078-m02) Ensuring network mk-ha-293078 is active
	I0401 18:21:03.712652   27284 main.go:141] libmachine: (ha-293078-m02) Getting domain xml...
	I0401 18:21:03.713321   27284 main.go:141] libmachine: (ha-293078-m02) Creating domain...
	I0401 18:21:04.918039   27284 main.go:141] libmachine: (ha-293078-m02) Waiting to get IP...
	I0401 18:21:04.918782   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:04.919154   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:04.919194   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:04.919136   27619 retry.go:31] will retry after 227.797489ms: waiting for machine to come up
	I0401 18:21:05.149672   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:05.149704   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:05.149752   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:05.149267   27619 retry.go:31] will retry after 256.715132ms: waiting for machine to come up
	I0401 18:21:05.407614   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:05.407989   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:05.408017   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:05.407946   27619 retry.go:31] will retry after 318.976551ms: waiting for machine to come up
	I0401 18:21:05.728528   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:05.728967   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:05.729001   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:05.728928   27619 retry.go:31] will retry after 593.684858ms: waiting for machine to come up
	I0401 18:21:06.324677   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:06.325204   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:06.325228   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:06.325156   27619 retry.go:31] will retry after 725.038622ms: waiting for machine to come up
	I0401 18:21:07.051601   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:07.052129   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:07.052178   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:07.052024   27619 retry.go:31] will retry after 794.779612ms: waiting for machine to come up
	I0401 18:21:07.847869   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:07.848306   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:07.848336   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:07.848259   27619 retry.go:31] will retry after 905.868947ms: waiting for machine to come up
	I0401 18:21:08.755840   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:08.756291   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:08.756320   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:08.756244   27619 retry.go:31] will retry after 1.176905759s: waiting for machine to come up
	I0401 18:21:09.934471   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:09.934892   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:09.934917   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:09.934863   27619 retry.go:31] will retry after 1.546450636s: waiting for machine to come up
	I0401 18:21:11.483188   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:11.483679   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:11.483711   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:11.483608   27619 retry.go:31] will retry after 1.88382657s: waiting for machine to come up
	I0401 18:21:13.369758   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:13.370228   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:13.370280   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:13.370209   27619 retry.go:31] will retry after 2.400689416s: waiting for machine to come up
	I0401 18:21:15.774266   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:15.774725   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:15.774760   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:15.774698   27619 retry.go:31] will retry after 2.684241486s: waiting for machine to come up
	I0401 18:21:18.460365   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:18.460851   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:18.460881   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:18.460803   27619 retry.go:31] will retry after 3.608105612s: waiting for machine to come up
	I0401 18:21:22.070078   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:22.070551   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find current IP address of domain ha-293078-m02 in network mk-ha-293078
	I0401 18:21:22.070568   27284 main.go:141] libmachine: (ha-293078-m02) DBG | I0401 18:21:22.070518   27619 retry.go:31] will retry after 4.235958126s: waiting for machine to come up
	I0401 18:21:26.307669   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:26.308161   27284 main.go:141] libmachine: (ha-293078-m02) Found IP for machine: 192.168.39.161
	I0401 18:21:26.308188   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has current primary IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:26.308198   27284 main.go:141] libmachine: (ha-293078-m02) Reserving static IP address...
	I0401 18:21:26.308719   27284 main.go:141] libmachine: (ha-293078-m02) DBG | unable to find host DHCP lease matching {name: "ha-293078-m02", mac: "52:54:00:25:7f:87", ip: "192.168.39.161"} in network mk-ha-293078
	I0401 18:21:26.379934   27284 main.go:141] libmachine: (ha-293078-m02) Reserved static IP address: 192.168.39.161
	I0401 18:21:26.379960   27284 main.go:141] libmachine: (ha-293078-m02) Waiting for SSH to be available...
	I0401 18:21:26.379986   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Getting to WaitForSSH function...
	I0401 18:21:26.382918   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:26.383348   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:minikube Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:26.383380   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:26.383533   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Using SSH client type: external
	I0401 18:21:26.383560   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/id_rsa (-rw-------)
	I0401 18:21:26.383591   27284 main.go:141] libmachine: (ha-293078-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.161 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 18:21:26.383605   27284 main.go:141] libmachine: (ha-293078-m02) DBG | About to run SSH command:
	I0401 18:21:26.383626   27284 main.go:141] libmachine: (ha-293078-m02) DBG | exit 0
	I0401 18:21:26.510059   27284 main.go:141] libmachine: (ha-293078-m02) DBG | SSH cmd err, output: <nil>: 
	I0401 18:21:26.510348   27284 main.go:141] libmachine: (ha-293078-m02) KVM machine creation complete!
	I0401 18:21:26.510788   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetConfigRaw
	I0401 18:21:26.511324   27284 main.go:141] libmachine: (ha-293078-m02) Calling .DriverName
	I0401 18:21:26.511511   27284 main.go:141] libmachine: (ha-293078-m02) Calling .DriverName
	I0401 18:21:26.511702   27284 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0401 18:21:26.511715   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetState
	I0401 18:21:26.512934   27284 main.go:141] libmachine: Detecting operating system of created instance...
	I0401 18:21:26.512950   27284 main.go:141] libmachine: Waiting for SSH to be available...
	I0401 18:21:26.512958   27284 main.go:141] libmachine: Getting to WaitForSSH function...
	I0401 18:21:26.512967   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:21:26.515154   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:26.515518   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:26.515554   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:26.515686   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:21:26.515860   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:26.516022   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:26.516149   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:21:26.516281   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:21:26.516455   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0401 18:21:26.516466   27284 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0401 18:21:26.625274   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 18:21:26.625297   27284 main.go:141] libmachine: Detecting the provisioner...
	I0401 18:21:26.625307   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:21:26.628826   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:26.629252   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:26.629276   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:26.629444   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:21:26.629693   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:26.629965   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:26.630129   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:21:26.630341   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:21:26.630510   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0401 18:21:26.630525   27284 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0401 18:21:26.743612   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0401 18:21:26.743747   27284 main.go:141] libmachine: found compatible host: buildroot
	I0401 18:21:26.743777   27284 main.go:141] libmachine: Provisioning with buildroot...
	I0401 18:21:26.743793   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetMachineName
	I0401 18:21:26.744087   27284 buildroot.go:166] provisioning hostname "ha-293078-m02"
	I0401 18:21:26.744121   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetMachineName
	I0401 18:21:26.744371   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:21:26.747234   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:26.747650   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:26.747674   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:26.747826   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:21:26.747980   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:26.748133   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:26.748296   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:21:26.748480   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:21:26.748678   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0401 18:21:26.748691   27284 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-293078-m02 && echo "ha-293078-m02" | sudo tee /etc/hostname
	I0401 18:21:26.873937   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-293078-m02
	
	I0401 18:21:26.873966   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:21:26.876644   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:26.877003   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:26.877032   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:26.877242   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:21:26.877438   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:26.877682   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:26.877833   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:21:26.878018   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:21:26.878233   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0401 18:21:26.878260   27284 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-293078-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-293078-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-293078-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 18:21:26.996402   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 18:21:26.996428   27284 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 18:21:26.996460   27284 buildroot.go:174] setting up certificates
	I0401 18:21:26.996472   27284 provision.go:84] configureAuth start
	I0401 18:21:26.996482   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetMachineName
	I0401 18:21:26.996761   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetIP
	I0401 18:21:26.999638   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.000033   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:27.000064   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.000191   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:21:27.003607   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.004035   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:27.004057   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.004217   27284 provision.go:143] copyHostCerts
	I0401 18:21:27.004270   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 18:21:27.004311   27284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 18:21:27.004319   27284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 18:21:27.004399   27284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 18:21:27.004497   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 18:21:27.004529   27284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 18:21:27.004536   27284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 18:21:27.004578   27284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 18:21:27.004661   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 18:21:27.004684   27284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 18:21:27.004693   27284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 18:21:27.004743   27284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 18:21:27.004807   27284 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.ha-293078-m02 san=[127.0.0.1 192.168.39.161 ha-293078-m02 localhost minikube]
	I0401 18:21:27.204268   27284 provision.go:177] copyRemoteCerts
	I0401 18:21:27.204319   27284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 18:21:27.204339   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:21:27.206890   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.207315   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:27.207342   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.207549   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:21:27.207738   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:27.207934   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:21:27.208135   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/id_rsa Username:docker}
	I0401 18:21:27.294958   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0401 18:21:27.295023   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 18:21:27.321972   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0401 18:21:27.322025   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0401 18:21:27.348642   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0401 18:21:27.348716   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 18:21:27.379204   27284 provision.go:87] duration metric: took 382.719932ms to configureAuth
	I0401 18:21:27.379229   27284 buildroot.go:189] setting minikube options for container-runtime
	I0401 18:21:27.379439   27284 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:21:27.379528   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:21:27.382418   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.382761   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:27.382780   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.383016   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:21:27.383211   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:27.383423   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:27.383621   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:21:27.383790   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:21:27.383984   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0401 18:21:27.384008   27284 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 18:21:27.676307   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 18:21:27.676358   27284 main.go:141] libmachine: Checking connection to Docker...
	I0401 18:21:27.676371   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetURL
	I0401 18:21:27.677756   27284 main.go:141] libmachine: (ha-293078-m02) DBG | Using libvirt version 6000000
	I0401 18:21:27.679933   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.680321   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:27.680348   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.680486   27284 main.go:141] libmachine: Docker is up and running!
	I0401 18:21:27.680501   27284 main.go:141] libmachine: Reticulating splines...
	I0401 18:21:27.680509   27284 client.go:171] duration metric: took 24.291073713s to LocalClient.Create
	I0401 18:21:27.680531   27284 start.go:167] duration metric: took 24.291136909s to libmachine.API.Create "ha-293078"
	I0401 18:21:27.680541   27284 start.go:293] postStartSetup for "ha-293078-m02" (driver="kvm2")
	I0401 18:21:27.680550   27284 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 18:21:27.680560   27284 main.go:141] libmachine: (ha-293078-m02) Calling .DriverName
	I0401 18:21:27.680816   27284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 18:21:27.680838   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:21:27.682693   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.683017   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:27.683043   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.683188   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:21:27.683350   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:27.683526   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:21:27.683714   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/id_rsa Username:docker}
	I0401 18:21:27.771858   27284 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 18:21:27.776684   27284 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 18:21:27.776703   27284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 18:21:27.776776   27284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 18:21:27.776861   27284 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 18:21:27.776874   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> /etc/ssl/certs/177512.pem
	I0401 18:21:27.776970   27284 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 18:21:27.788156   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 18:21:27.815024   27284 start.go:296] duration metric: took 134.472512ms for postStartSetup
	I0401 18:21:27.815069   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetConfigRaw
	I0401 18:21:27.815610   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetIP
	I0401 18:21:27.818358   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.818716   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:27.818744   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.818962   27284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/config.json ...
	I0401 18:21:27.819126   27284 start.go:128] duration metric: took 24.447591421s to createHost
	I0401 18:21:27.819147   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:21:27.821482   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.821833   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:27.821861   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.822014   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:21:27.822205   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:27.822399   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:27.822542   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:21:27.822720   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:21:27.822910   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0401 18:21:27.822928   27284 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 18:21:27.930959   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711995687.901340486
	
	I0401 18:21:27.930981   27284 fix.go:216] guest clock: 1711995687.901340486
	I0401 18:21:27.930988   27284 fix.go:229] Guest: 2024-04-01 18:21:27.901340486 +0000 UTC Remote: 2024-04-01 18:21:27.819137286 +0000 UTC m=+79.694131970 (delta=82.2032ms)
	I0401 18:21:27.931002   27284 fix.go:200] guest clock delta is within tolerance: 82.2032ms
	I0401 18:21:27.931007   27284 start.go:83] releasing machines lock for "ha-293078-m02", held for 24.559557046s
	I0401 18:21:27.931026   27284 main.go:141] libmachine: (ha-293078-m02) Calling .DriverName
	I0401 18:21:27.931329   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetIP
	I0401 18:21:27.933913   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.934296   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:27.934328   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.937056   27284 out.go:177] * Found network options:
	I0401 18:21:27.938623   27284 out.go:177]   - NO_PROXY=192.168.39.74
	W0401 18:21:27.940013   27284 proxy.go:119] fail to check proxy env: Error ip not in block
	I0401 18:21:27.940053   27284 main.go:141] libmachine: (ha-293078-m02) Calling .DriverName
	I0401 18:21:27.940565   27284 main.go:141] libmachine: (ha-293078-m02) Calling .DriverName
	I0401 18:21:27.940750   27284 main.go:141] libmachine: (ha-293078-m02) Calling .DriverName
	I0401 18:21:27.940881   27284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 18:21:27.940918   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	W0401 18:21:27.940943   27284 proxy.go:119] fail to check proxy env: Error ip not in block
	I0401 18:21:27.941032   27284 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 18:21:27.941053   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:21:27.943773   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.944165   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:27.944197   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.944228   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.944304   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:21:27.944466   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:27.944626   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:21:27.944724   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:27.944746   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:27.944755   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/id_rsa Username:docker}
	I0401 18:21:27.944925   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:21:27.945077   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:21:27.945233   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:21:27.945365   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/id_rsa Username:docker}
	I0401 18:21:28.200403   27284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 18:21:28.207254   27284 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 18:21:28.207305   27284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 18:21:28.225468   27284 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 18:21:28.225495   27284 start.go:494] detecting cgroup driver to use...
	I0401 18:21:28.225560   27284 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 18:21:28.243950   27284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 18:21:28.259036   27284 docker.go:217] disabling cri-docker service (if available) ...
	I0401 18:21:28.259091   27284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 18:21:28.275329   27284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 18:21:28.293101   27284 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 18:21:28.429784   27284 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 18:21:28.565922   27284 docker.go:233] disabling docker service ...
	I0401 18:21:28.565979   27284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 18:21:28.582906   27284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 18:21:28.597090   27284 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 18:21:28.735892   27284 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 18:21:28.857513   27284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 18:21:28.873206   27284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 18:21:28.893313   27284 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 18:21:28.893378   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:21:28.905459   27284 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 18:21:28.905506   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:21:28.917308   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:21:28.928964   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:21:28.940983   27284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 18:21:28.953029   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:21:28.964924   27284 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:21:28.983890   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:21:28.995693   27284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 18:21:29.006566   27284 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 18:21:29.006619   27284 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 18:21:29.021111   27284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 18:21:29.032582   27284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:21:29.155407   27284 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 18:21:29.315079   27284 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 18:21:29.315175   27284 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 18:21:29.320619   27284 start.go:562] Will wait 60s for crictl version
	I0401 18:21:29.320677   27284 ssh_runner.go:195] Run: which crictl
	I0401 18:21:29.325296   27284 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 18:21:29.366380   27284 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 18:21:29.366512   27284 ssh_runner.go:195] Run: crio --version
	I0401 18:21:29.397051   27284 ssh_runner.go:195] Run: crio --version
	I0401 18:21:29.434216   27284 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 18:21:29.435828   27284 out.go:177]   - env NO_PROXY=192.168.39.74
	I0401 18:21:29.437067   27284 main.go:141] libmachine: (ha-293078-m02) Calling .GetIP
	I0401 18:21:29.439778   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:29.440175   27284 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:21:19 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:21:29.440199   27284 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:21:29.440477   27284 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 18:21:29.445003   27284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 18:21:29.459388   27284 mustload.go:65] Loading cluster: ha-293078
	I0401 18:21:29.459600   27284 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:21:29.459883   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:21:29.459917   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:21:29.474595   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42923
	I0401 18:21:29.475076   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:21:29.475614   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:21:29.475641   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:21:29.475959   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:21:29.476112   27284 main.go:141] libmachine: (ha-293078) Calling .GetState
	I0401 18:21:29.477531   27284 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:21:29.477979   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:21:29.478024   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:21:29.492417   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34683
	I0401 18:21:29.492773   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:21:29.493228   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:21:29.493248   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:21:29.493565   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:21:29.493762   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:21:29.493915   27284 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078 for IP: 192.168.39.161
	I0401 18:21:29.493927   27284 certs.go:194] generating shared ca certs ...
	I0401 18:21:29.493945   27284 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:21:29.494073   27284 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 18:21:29.494125   27284 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 18:21:29.494138   27284 certs.go:256] generating profile certs ...
	I0401 18:21:29.494228   27284 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.key
	I0401 18:21:29.494256   27284 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.f03fd53c
	I0401 18:21:29.494273   27284 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.f03fd53c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.74 192.168.39.161 192.168.39.254]
	I0401 18:21:29.870971   27284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.f03fd53c ...
	I0401 18:21:29.871001   27284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.f03fd53c: {Name:mke372832f38ab7a4216acc7c1af71be3e4ec4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:21:29.871164   27284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.f03fd53c ...
	I0401 18:21:29.871182   27284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.f03fd53c: {Name:mk8efe869f9d01338acd73cedfd8d1cae8bb0860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:21:29.871254   27284 certs.go:381] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.f03fd53c -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt
	I0401 18:21:29.871373   27284 certs.go:385] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.f03fd53c -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key
	I0401 18:21:29.871541   27284 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key
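The apiserver.crt.f03fd53c generated above is a serving certificate whose IP SANs cover the in-cluster service IP, loopback, both control-plane node IPs and the HA virtual IP 192.168.39.254, signed by minikubeCA. A minimal Go sketch of that cert-with-IP-SANs step, assuming a CA key pair already on disk (the file names, serial number and PKCS#1 key encoding here are illustrative, not taken from this run):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load an existing CA cert and key (the run above uses .minikube/ca.crt and ca.key).
	caPEM, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	caKeyPEM, err := os.ReadFile("ca.key")
	if err != nil {
		panic(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		panic(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		panic(err)
	}

	// Fresh key for the apiserver leaf certificate.
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs matching the list logged by crypto.go above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.74"), net.ParseIP("192.168.39.161"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	// PEM-encode the signed leaf certificate (the apiserver.crt contents).
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}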
	I0401 18:21:29.871563   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0401 18:21:29.871583   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0401 18:21:29.871602   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0401 18:21:29.871620   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0401 18:21:29.871639   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0401 18:21:29.871658   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0401 18:21:29.871674   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0401 18:21:29.871691   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0401 18:21:29.871765   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 18:21:29.871814   27284 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 18:21:29.871827   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 18:21:29.871870   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 18:21:29.871898   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 18:21:29.871925   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 18:21:29.871967   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 18:21:29.871996   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem -> /usr/share/ca-certificates/17751.pem
	I0401 18:21:29.872006   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> /usr/share/ca-certificates/177512.pem
	I0401 18:21:29.872015   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:21:29.872044   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:21:29.875227   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:21:29.875638   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:21:29.875668   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:21:29.875816   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:21:29.875993   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:21:29.876159   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:21:29.876283   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:21:29.953963   27284 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0401 18:21:29.959680   27284 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0401 18:21:29.976628   27284 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0401 18:21:29.981919   27284 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0401 18:21:29.995078   27284 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0401 18:21:29.999828   27284 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0401 18:21:30.013735   27284 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0401 18:21:30.018722   27284 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0401 18:21:30.039963   27284 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0401 18:21:30.044688   27284 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0401 18:21:30.064396   27284 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0401 18:21:30.070169   27284 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0401 18:21:30.082483   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 18:21:30.109594   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 18:21:30.135600   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 18:21:30.161192   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 18:21:30.187105   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0401 18:21:30.213458   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 18:21:30.239352   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 18:21:30.265197   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 18:21:30.291678   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 18:21:30.318717   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 18:21:30.344296   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 18:21:30.369529   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0401 18:21:30.387557   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0401 18:21:30.405503   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0401 18:21:30.424468   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0401 18:21:30.443647   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0401 18:21:30.465042   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0401 18:21:30.484989   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0401 18:21:30.504775   27284 ssh_runner.go:195] Run: openssl version
	I0401 18:21:30.511640   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 18:21:30.525146   27284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 18:21:30.530456   27284 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 18:21:30.530514   27284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 18:21:30.537052   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 18:21:30.550061   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 18:21:30.563523   27284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 18:21:30.568383   27284 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 18:21:30.568430   27284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 18:21:30.574495   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 18:21:30.586895   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 18:21:30.599226   27284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:21:30.604131   27284 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:21:30.604187   27284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:21:30.610966   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
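The three "openssl x509 -hash -noout" runs above compute the OpenSSL subject-name hash of each certificate, and the matching "ln -fs" commands create /etc/ssl/certs/<hash>.0 symlinks (51391683.0, 3ec20f2e.0, b5213941.0 in this run) so OpenSSL-based clients on the node can find the CAs by hash lookup. A small Go sketch of the same hash-and-symlink step for one certificate, assuming the openssl CLI is available; paths are illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl x509 -hash -noout -in <cert> prints the subject hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of ln -fs: drop any stale link, then point <hash>.0 at the cert.
	os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}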
	I0401 18:21:30.623311   27284 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 18:21:30.627965   27284 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 18:21:30.628011   27284 kubeadm.go:928] updating node {m02 192.168.39.161 8443 v1.29.3 crio true true} ...
	I0401 18:21:30.628083   27284 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-293078-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 18:21:30.628107   27284 kube-vip.go:111] generating kube-vip config ...
	I0401 18:21:30.628139   27284 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0401 18:21:30.647325   27284 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0401 18:21:30.647833   27284 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
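This static-pod manifest is what gets copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below. With cp_enable and vip_leaderelection set, the kube-vip instances on the control-plane nodes elect a leader through the plndr-cp-lock Lease in kube-system, and the leader advertises the virtual IP 192.168.39.254 and load-balances port 8443 across the apiservers. A short Go sketch for checking which node currently holds that Lease, assuming a reachable kubeconfig (the path is illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// vip_leasename above is plndr-cp-lock, held in the kube-system namespace.
	lease, err := cs.CoordinationV1().Leases("kube-system").Get(context.Background(), "plndr-cp-lock", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if lease.Spec.HolderIdentity != nil {
		fmt.Println("kube-vip leader:", *lease.Spec.HolderIdentity)
	}
}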
	I0401 18:21:30.647928   27284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 18:21:30.660414   27284 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0401 18:21:30.660467   27284 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0401 18:21:30.671923   27284 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0401 18:21:30.671943   27284 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubelet
	I0401 18:21:30.671955   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0401 18:21:30.671961   27284 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubeadm
	I0401 18:21:30.672032   27284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0401 18:21:30.677820   27284 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0401 18:21:30.677851   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0401 18:21:31.695308   27284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:21:31.710741   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0401 18:21:31.710854   27284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0401 18:21:31.715776   27284 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0401 18:21:31.715803   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
	I0401 18:21:34.245735   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0401 18:21:34.245806   27284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0401 18:21:34.251520   27284 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0401 18:21:34.251548   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
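The download.go lines above fetch each binary with a "?checksum=file:<url>.sha256" suffix, so the transfer is verified against the published SHA-256 before the file is cached under .minikube/cache and scp'd into /var/lib/minikube/binaries. A plain-HTTP Go sketch of that verify-then-write pattern, using the kubelet URL from this run for illustration (the real code goes through minikube's download package rather than net/http directly):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	// Compare the local SHA-256 against the published checksum file.
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0]
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}
	if err := os.WriteFile("kubelet", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("verified and wrote", len(bin), "bytes")
}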
	I0401 18:21:34.507577   27284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0401 18:21:34.517903   27284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0401 18:21:34.536870   27284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 18:21:34.556070   27284 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0401 18:21:34.574843   27284 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0401 18:21:34.579428   27284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 18:21:34.593517   27284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:21:34.731800   27284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 18:21:34.753012   27284 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:21:34.753348   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:21:34.753395   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:21:34.772501   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44899
	I0401 18:21:34.772965   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:21:34.773483   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:21:34.773510   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:21:34.773865   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:21:34.774108   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:21:34.774255   27284 start.go:316] joinCluster: &{Name:ha-293078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:21:34.774371   27284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0401 18:21:34.774395   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:21:34.777425   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:21:34.777860   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:21:34.777888   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:21:34.778025   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:21:34.778182   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:21:34.778322   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:21:34.778488   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:21:34.965606   27284 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 18:21:34.965663   27284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ofhm0a.qkw6l4ee4v53jhnf --discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-293078-m02 --control-plane --apiserver-advertise-address=192.168.39.161 --apiserver-bind-port=8443"
	I0401 18:21:59.525215   27284 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ofhm0a.qkw6l4ee4v53jhnf --discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-293078-m02 --control-plane --apiserver-advertise-address=192.168.39.161 --apiserver-bind-port=8443": (24.559527937s)
	I0401 18:21:59.525250   27284 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0401 18:22:00.071046   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-293078-m02 minikube.k8s.io/updated_at=2024_04_01T18_22_00_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=ha-293078 minikube.k8s.io/primary=false
	I0401 18:22:00.282010   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-293078-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0401 18:22:00.453543   27284 start.go:318] duration metric: took 25.679284795s to joinCluster
	I0401 18:22:00.453612   27284 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 18:22:00.455031   27284 out.go:177] * Verifying Kubernetes components...
	I0401 18:22:00.453905   27284 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:22:00.456414   27284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:22:00.728746   27284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 18:22:00.777673   27284 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 18:22:00.777997   27284 kapi.go:59] client config for ha-293078: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.crt", KeyFile:"/home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.key", CAFile:"/home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5ca00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0401 18:22:00.778058   27284 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.74:8443
	I0401 18:22:00.778281   27284 node_ready.go:35] waiting up to 6m0s for node "ha-293078-m02" to be "Ready" ...
	I0401 18:22:00.778364   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:00.778374   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:00.778384   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:00.778393   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:00.789042   27284 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0401 18:22:01.279023   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:01.279052   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:01.279065   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:01.279073   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:01.307534   27284 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0401 18:22:01.779315   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:01.779344   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:01.779356   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:01.779363   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:01.783215   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:02.278842   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:02.278862   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:02.278873   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:02.278878   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:02.284787   27284 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 18:22:02.778873   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:02.778897   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:02.778909   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:02.778917   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:02.782128   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:02.783305   27284 node_ready.go:53] node "ha-293078-m02" has status "Ready":"False"
	I0401 18:22:03.279117   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:03.279135   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:03.279143   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:03.279147   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:03.283600   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:03.778553   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:03.778574   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:03.778583   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:03.778587   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:03.884749   27284 round_trippers.go:574] Response Status: 200 OK in 106 milliseconds
	I0401 18:22:04.278810   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:04.278829   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:04.278838   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:04.278843   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:04.282332   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:04.778503   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:04.778524   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:04.778533   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:04.778538   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:04.783222   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:04.783797   27284 node_ready.go:49] node "ha-293078-m02" has status "Ready":"True"
	I0401 18:22:04.783830   27284 node_ready.go:38] duration metric: took 4.005519623s for node "ha-293078-m02" to be "Ready" ...
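The round_trippers GETs above are node_ready.go polling /api/v1/nodes/ha-293078-m02 roughly every 500ms until the node's Ready condition reports True. A client-go equivalent of that wait, assuming the kubeconfig written earlier in the run; the 6-minute timeout matches the "Will wait 6m0s for node" line above:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18233-10493/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms, for up to 6 minutes, until NodeReady is True.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-293078-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node ha-293078-m02 is Ready")
}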
	I0401 18:22:04.783842   27284 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 18:22:04.783912   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods
	I0401 18:22:04.783944   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:04.783954   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:04.783960   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:04.791208   27284 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 18:22:04.798444   27284 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-8v456" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:04.798536   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-8v456
	I0401 18:22:04.798547   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:04.798557   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:04.798566   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:04.802242   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:04.802982   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:22:04.802999   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:04.803005   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:04.803009   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:04.806438   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:04.807430   27284 pod_ready.go:92] pod "coredns-76f75df574-8v456" in "kube-system" namespace has status "Ready":"True"
	I0401 18:22:04.807444   27284 pod_ready.go:81] duration metric: took 8.97802ms for pod "coredns-76f75df574-8v456" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:04.807452   27284 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-sqxnb" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:04.807504   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-sqxnb
	I0401 18:22:04.807513   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:04.807520   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:04.807523   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:04.811176   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:04.812196   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:22:04.812212   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:04.812239   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:04.812252   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:04.816308   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:04.817575   27284 pod_ready.go:92] pod "coredns-76f75df574-sqxnb" in "kube-system" namespace has status "Ready":"True"
	I0401 18:22:04.817588   27284 pod_ready.go:81] duration metric: took 10.130855ms for pod "coredns-76f75df574-sqxnb" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:04.817596   27284 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:04.817632   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078
	I0401 18:22:04.817640   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:04.817669   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:04.817679   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:04.821221   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:04.822718   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:22:04.822740   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:04.822750   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:04.822755   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:04.825701   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:04.826315   27284 pod_ready.go:92] pod "etcd-ha-293078" in "kube-system" namespace has status "Ready":"True"
	I0401 18:22:04.826328   27284 pod_ready.go:81] duration metric: took 8.726774ms for pod "etcd-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:04.826335   27284 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:04.826387   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:04.826399   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:04.826405   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:04.826410   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:04.829258   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:04.829923   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:04.829939   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:04.829949   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:04.829956   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:04.832707   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:05.326706   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:05.326736   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:05.326756   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:05.326766   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:05.334458   27284 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 18:22:05.335602   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:05.335615   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:05.335622   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:05.335626   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:05.338260   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:05.827277   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:05.827298   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:05.827306   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:05.827310   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:05.831742   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:05.832715   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:05.832732   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:05.832744   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:05.832752   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:05.837361   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:06.327168   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:06.327192   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:06.327202   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:06.327207   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:06.336534   27284 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0401 18:22:06.337580   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:06.337593   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:06.337600   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:06.337603   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:06.340686   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:06.826711   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:06.826736   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:06.826749   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:06.826756   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:06.830585   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:06.831349   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:06.831364   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:06.831371   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:06.831374   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:06.834217   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:06.835019   27284 pod_ready.go:102] pod "etcd-ha-293078-m02" in "kube-system" namespace has status "Ready":"False"
	I0401 18:22:07.327487   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:07.327506   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:07.327514   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:07.327517   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:07.331111   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:07.331797   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:07.331813   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:07.331826   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:07.331831   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:07.334667   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:07.826898   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:07.826920   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:07.826932   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:07.826938   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:07.830645   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:07.831546   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:07.831560   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:07.831566   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:07.831570   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:07.834451   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:08.327484   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:08.327507   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:08.327519   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:08.327525   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:08.330783   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:08.331544   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:08.331561   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:08.331571   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:08.331576   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:08.334362   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:08.826689   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:08.826715   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:08.826726   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:08.826731   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:08.830433   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:08.831292   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:08.831305   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:08.831312   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:08.831316   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:08.834778   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:08.835556   27284 pod_ready.go:102] pod "etcd-ha-293078-m02" in "kube-system" namespace has status "Ready":"False"
	I0401 18:22:09.326898   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:09.326920   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:09.326926   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:09.326931   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:09.331521   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:09.332596   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:09.332613   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:09.332624   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:09.332631   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:09.335403   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:09.827031   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:09.827051   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:09.827059   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:09.827063   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:09.830688   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:09.831534   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:09.831550   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:09.831558   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:09.831564   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:09.834158   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:10.327268   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:10.327299   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:10.327313   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:10.327319   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:10.331738   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:10.332555   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:10.332568   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:10.332575   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:10.332582   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:10.335800   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:10.826731   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:10.826756   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:10.826762   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:10.826767   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:10.830932   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:10.832090   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:10.832106   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:10.832116   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:10.832121   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:10.835606   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:10.836626   27284 pod_ready.go:102] pod "etcd-ha-293078-m02" in "kube-system" namespace has status "Ready":"False"
	I0401 18:22:11.327167   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:11.327188   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:11.327196   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:11.327200   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:11.331583   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:11.332547   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:11.332565   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:11.332572   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:11.332576   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:11.336818   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:11.826902   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:11.826922   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:11.826930   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:11.826933   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:11.830821   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:11.831489   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:11.831503   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:11.831511   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:11.831518   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:11.834772   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:12.327209   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:12.327229   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:12.327237   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:12.327242   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:12.330514   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:12.331380   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:12.331401   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:12.331411   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:12.331417   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:12.333986   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:12.826955   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:22:12.826990   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:12.827002   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:12.827009   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:12.831259   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:12.832243   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:12.832258   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:12.832266   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:12.832270   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:12.835430   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:12.836219   27284 pod_ready.go:92] pod "etcd-ha-293078-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 18:22:12.836240   27284 pod_ready.go:81] duration metric: took 8.009896637s for pod "etcd-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:12.836253   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:12.836299   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-293078
	I0401 18:22:12.836307   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:12.836314   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:12.836318   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:12.839517   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:12.840372   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:22:12.840386   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:12.840394   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:12.840398   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:12.843655   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:12.844210   27284 pod_ready.go:92] pod "kube-apiserver-ha-293078" in "kube-system" namespace has status "Ready":"True"
	I0401 18:22:12.844226   27284 pod_ready.go:81] duration metric: took 7.966941ms for pod "kube-apiserver-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:12.844235   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:12.844277   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-293078-m02
	I0401 18:22:12.844285   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:12.844292   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:12.844296   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:12.847446   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:12.848196   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:12.848209   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:12.848214   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:12.848217   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:12.851037   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:12.851650   27284 pod_ready.go:92] pod "kube-apiserver-ha-293078-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 18:22:12.851669   27284 pod_ready.go:81] duration metric: took 7.426737ms for pod "kube-apiserver-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:12.851690   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:12.851748   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078
	I0401 18:22:12.851759   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:12.851769   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:12.851778   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:12.855044   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:12.855667   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:22:12.855682   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:12.855689   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:12.855692   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:12.858465   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:12.858957   27284 pod_ready.go:92] pod "kube-controller-manager-ha-293078" in "kube-system" namespace has status "Ready":"True"
	I0401 18:22:12.858973   27284 pod_ready.go:81] duration metric: took 7.275432ms for pod "kube-controller-manager-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:12.858982   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:12.859022   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078-m02
	I0401 18:22:12.859031   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:12.859038   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:12.859042   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:12.861085   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:12.862016   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:12.862031   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:12.862039   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:12.862043   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:12.864616   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:12.865108   27284 pod_ready.go:92] pod "kube-controller-manager-ha-293078-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 18:22:12.865126   27284 pod_ready.go:81] duration metric: took 6.138033ms for pod "kube-controller-manager-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:12.865134   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8s2xk" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:13.027544   27284 request.go:629] Waited for 162.351985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8s2xk
	I0401 18:22:13.027629   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8s2xk
	I0401 18:22:13.027636   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:13.027643   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:13.027650   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:13.031974   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:13.227010   27284 request.go:629] Waited for 194.019992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:13.227094   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:13.227100   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:13.227107   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:13.227112   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:13.231013   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:13.231701   27284 pod_ready.go:92] pod "kube-proxy-8s2xk" in "kube-system" namespace has status "Ready":"True"
	I0401 18:22:13.231718   27284 pod_ready.go:81] duration metric: took 366.578291ms for pod "kube-proxy-8s2xk" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:13.231727   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l5q2p" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:13.427765   27284 request.go:629] Waited for 195.984154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l5q2p
	I0401 18:22:13.427877   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l5q2p
	I0401 18:22:13.427887   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:13.427895   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:13.427902   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:13.431624   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:13.627544   27284 request.go:629] Waited for 195.356488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:22:13.627593   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:22:13.627598   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:13.627605   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:13.627609   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:13.631300   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:13.631993   27284 pod_ready.go:92] pod "kube-proxy-l5q2p" in "kube-system" namespace has status "Ready":"True"
	I0401 18:22:13.632008   27284 pod_ready.go:81] duration metric: took 400.275723ms for pod "kube-proxy-l5q2p" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:13.632017   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:13.827038   27284 request.go:629] Waited for 194.954892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-293078
	I0401 18:22:13.827120   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-293078
	I0401 18:22:13.827132   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:13.827140   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:13.827143   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:13.830031   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:22:14.026947   27284 request.go:629] Waited for 196.301082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:22:14.027054   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:22:14.027066   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:14.027076   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:14.027085   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:14.030795   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:22:14.031457   27284 pod_ready.go:92] pod "kube-scheduler-ha-293078" in "kube-system" namespace has status "Ready":"True"
	I0401 18:22:14.031475   27284 pod_ready.go:81] duration metric: took 399.452485ms for pod "kube-scheduler-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:14.031486   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:14.227580   27284 request.go:629] Waited for 196.015009ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-293078-m02
	I0401 18:22:14.227630   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-293078-m02
	I0401 18:22:14.227635   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:14.227643   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:14.227647   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:14.231946   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:14.427080   27284 request.go:629] Waited for 194.287738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:14.427191   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:22:14.427203   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:14.427215   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:14.427224   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:14.431562   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:14.432377   27284 pod_ready.go:92] pod "kube-scheduler-ha-293078-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 18:22:14.432395   27284 pod_ready.go:81] duration metric: took 400.902592ms for pod "kube-scheduler-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:22:14.432406   27284 pod_ready.go:38] duration metric: took 9.648548345s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
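The loop above is minikube polling each control-plane pod (and its node) over the apiserver REST API until the pod's Ready condition turns True. As an illustrative aside only, not minikube's actual implementation, a similar readiness poll written against client-go could look like the sketch below; the function name waitPodReady and the 500ms poll interval are assumptions made for the example.

package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a single pod until its Ready condition is True or the
// timeout expires, sleeping briefly between attempts (similar in spirit to
// the GET loop in the log above).
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pod %s/%s not Ready after %v", ns, name, timeout)
		}
		time.Sleep(500 * time.Millisecond) // poll interval chosen arbitrarily for the sketch
	}
}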
	I0401 18:22:14.432418   27284 api_server.go:52] waiting for apiserver process to appear ...
	I0401 18:22:14.432473   27284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:22:14.453096   27284 api_server.go:72] duration metric: took 13.999441922s to wait for apiserver process to appear ...
	I0401 18:22:14.453126   27284 api_server.go:88] waiting for apiserver healthz status ...
	I0401 18:22:14.453147   27284 api_server.go:253] Checking apiserver healthz at https://192.168.39.74:8443/healthz ...
	I0401 18:22:14.458725   27284 api_server.go:279] https://192.168.39.74:8443/healthz returned 200:
	ok
	I0401 18:22:14.458792   27284 round_trippers.go:463] GET https://192.168.39.74:8443/version
	I0401 18:22:14.458803   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:14.458810   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:14.458814   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:14.459980   27284 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0401 18:22:14.460095   27284 api_server.go:141] control plane version: v1.29.3
	I0401 18:22:14.460117   27284 api_server.go:131] duration metric: took 6.983041ms to wait for apiserver health ...
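Once the pods are Ready, the log shows a direct HTTPS GET to /healthz (expecting a 200 "ok" body) followed by /version. A minimal sketch of such a probe in Go follows; probeHealthz and the certificate-skipping transport are illustrative choices for brevity, not minikube's code (a real client would trust the cluster CA instead).

package healthcheck

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz issues one GET against the apiserver /healthz endpoint and
// checks for a 200 response; the body is normally the literal string "ok".
func probeHealthz(hostPort string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://" + hostPort + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}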
	I0401 18:22:14.460124   27284 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 18:22:14.627204   27284 request.go:629] Waited for 167.017662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods
	I0401 18:22:14.627251   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods
	I0401 18:22:14.627256   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:14.627263   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:14.627266   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:14.634703   27284 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 18:22:14.641337   27284 system_pods.go:59] 17 kube-system pods found
	I0401 18:22:14.641363   27284 system_pods.go:61] "coredns-76f75df574-8v456" [28cf6a1d-90df-4802-ad3c-9c0276380a44] Running
	I0401 18:22:14.641368   27284 system_pods.go:61] "coredns-76f75df574-sqxnb" [17868bbd-b0e9-460c-b191-9707f613af0a] Running
	I0401 18:22:14.641371   27284 system_pods.go:61] "etcd-ha-293078" [0cf5a089-d409-4fa2-85de-fcc012d79ff3] Running
	I0401 18:22:14.641375   27284 system_pods.go:61] "etcd-ha-293078-m02" [8acd3424-a11f-4a40-97cf-b7e8b4a0975f] Running
	I0401 18:22:14.641378   27284 system_pods.go:61] "kindnet-f4djp" [5b26be41-434f-4908-95aa-64da9fe7ecb0] Running
	I0401 18:22:14.641381   27284 system_pods.go:61] "kindnet-rjfcj" [63f6ecc3-4bd0-406b-8096-ffd6115a2de3] Running
	I0401 18:22:14.641384   27284 system_pods.go:61] "kube-apiserver-ha-293078" [a0e08a32-b673-46b9-b965-9d321e4db6f1] Running
	I0401 18:22:14.641387   27284 system_pods.go:61] "kube-apiserver-ha-293078-m02" [533b0e64-f078-44f0-be6f-a8a3d880138a] Running
	I0401 18:22:14.641390   27284 system_pods.go:61] "kube-controller-manager-ha-293078" [3e9c2dbe-f437-4619-9b04-f30d9dab7f61] Running
	I0401 18:22:14.641392   27284 system_pods.go:61] "kube-controller-manager-ha-293078-m02" [e8879a89-4775-488b-9229-e86c2c891b5f] Running
	I0401 18:22:14.641395   27284 system_pods.go:61] "kube-proxy-8s2xk" [4fc029ea-1f23-497b-8fe3-38fc0e0a4c38] Running
	I0401 18:22:14.641398   27284 system_pods.go:61] "kube-proxy-l5q2p" [167db687-ac11-4f57-83c1-048c31a7b2cb] Running
	I0401 18:22:14.641400   27284 system_pods.go:61] "kube-scheduler-ha-293078" [87acbf1d-d53b-47d7-816a-492ba644ad0e] Running
	I0401 18:22:14.641403   27284 system_pods.go:61] "kube-scheduler-ha-293078-m02" [17a9003c-fd9f-48e2-b4b7-1ee6606ef480] Running
	I0401 18:22:14.641406   27284 system_pods.go:61] "kube-vip-ha-293078" [543de9ec-6f50-46b9-b6ec-f58964f81f12] Running
	I0401 18:22:14.641408   27284 system_pods.go:61] "kube-vip-ha-293078-m02" [6714926d-3bce-4773-92d6-e3811f532a37] Running
	I0401 18:22:14.641411   27284 system_pods.go:61] "storage-provisioner" [3d7c42eb-192e-4ae0-b5ae-0883ef5e740c] Running
	I0401 18:22:14.641416   27284 system_pods.go:74] duration metric: took 181.287454ms to wait for pod list to return data ...
	I0401 18:22:14.641425   27284 default_sa.go:34] waiting for default service account to be created ...
	I0401 18:22:14.827644   27284 request.go:629] Waited for 186.160719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/default/serviceaccounts
	I0401 18:22:14.827690   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/default/serviceaccounts
	I0401 18:22:14.827695   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:14.827703   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:14.827706   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:14.831978   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:14.832185   27284 default_sa.go:45] found service account: "default"
	I0401 18:22:14.832208   27284 default_sa.go:55] duration metric: took 190.776754ms for default service account to be created ...
	I0401 18:22:14.832216   27284 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 18:22:15.027643   27284 request.go:629] Waited for 195.364671ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods
	I0401 18:22:15.027690   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods
	I0401 18:22:15.027704   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:15.027734   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:15.027746   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:15.033998   27284 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 18:22:15.039475   27284 system_pods.go:86] 17 kube-system pods found
	I0401 18:22:15.039500   27284 system_pods.go:89] "coredns-76f75df574-8v456" [28cf6a1d-90df-4802-ad3c-9c0276380a44] Running
	I0401 18:22:15.039506   27284 system_pods.go:89] "coredns-76f75df574-sqxnb" [17868bbd-b0e9-460c-b191-9707f613af0a] Running
	I0401 18:22:15.039511   27284 system_pods.go:89] "etcd-ha-293078" [0cf5a089-d409-4fa2-85de-fcc012d79ff3] Running
	I0401 18:22:15.039515   27284 system_pods.go:89] "etcd-ha-293078-m02" [8acd3424-a11f-4a40-97cf-b7e8b4a0975f] Running
	I0401 18:22:15.039519   27284 system_pods.go:89] "kindnet-f4djp" [5b26be41-434f-4908-95aa-64da9fe7ecb0] Running
	I0401 18:22:15.039524   27284 system_pods.go:89] "kindnet-rjfcj" [63f6ecc3-4bd0-406b-8096-ffd6115a2de3] Running
	I0401 18:22:15.039528   27284 system_pods.go:89] "kube-apiserver-ha-293078" [a0e08a32-b673-46b9-b965-9d321e4db6f1] Running
	I0401 18:22:15.039532   27284 system_pods.go:89] "kube-apiserver-ha-293078-m02" [533b0e64-f078-44f0-be6f-a8a3d880138a] Running
	I0401 18:22:15.039536   27284 system_pods.go:89] "kube-controller-manager-ha-293078" [3e9c2dbe-f437-4619-9b04-f30d9dab7f61] Running
	I0401 18:22:15.039540   27284 system_pods.go:89] "kube-controller-manager-ha-293078-m02" [e8879a89-4775-488b-9229-e86c2c891b5f] Running
	I0401 18:22:15.039544   27284 system_pods.go:89] "kube-proxy-8s2xk" [4fc029ea-1f23-497b-8fe3-38fc0e0a4c38] Running
	I0401 18:22:15.039548   27284 system_pods.go:89] "kube-proxy-l5q2p" [167db687-ac11-4f57-83c1-048c31a7b2cb] Running
	I0401 18:22:15.039552   27284 system_pods.go:89] "kube-scheduler-ha-293078" [87acbf1d-d53b-47d7-816a-492ba644ad0e] Running
	I0401 18:22:15.039556   27284 system_pods.go:89] "kube-scheduler-ha-293078-m02" [17a9003c-fd9f-48e2-b4b7-1ee6606ef480] Running
	I0401 18:22:15.039560   27284 system_pods.go:89] "kube-vip-ha-293078" [543de9ec-6f50-46b9-b6ec-f58964f81f12] Running
	I0401 18:22:15.039564   27284 system_pods.go:89] "kube-vip-ha-293078-m02" [6714926d-3bce-4773-92d6-e3811f532a37] Running
	I0401 18:22:15.039567   27284 system_pods.go:89] "storage-provisioner" [3d7c42eb-192e-4ae0-b5ae-0883ef5e740c] Running
	I0401 18:22:15.039573   27284 system_pods.go:126] duration metric: took 207.352029ms to wait for k8s-apps to be running ...
	I0401 18:22:15.039583   27284 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 18:22:15.039624   27284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:22:15.056116   27284 system_svc.go:56] duration metric: took 16.524636ms WaitForService to wait for kubelet
	I0401 18:22:15.056148   27284 kubeadm.go:576] duration metric: took 14.602509719s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 18:22:15.056166   27284 node_conditions.go:102] verifying NodePressure condition ...
	I0401 18:22:15.227566   27284 request.go:629] Waited for 171.325356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes
	I0401 18:22:15.227614   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes
	I0401 18:22:15.227620   27284 round_trippers.go:469] Request Headers:
	I0401 18:22:15.227634   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:22:15.227638   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:22:15.231685   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:22:15.232381   27284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 18:22:15.232404   27284 node_conditions.go:123] node cpu capacity is 2
	I0401 18:22:15.232433   27284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 18:22:15.232439   27284 node_conditions.go:123] node cpu capacity is 2
	I0401 18:22:15.232459   27284 node_conditions.go:105] duration metric: took 176.287569ms to run NodePressure ...
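The NodePressure step above reads each node's reported capacity (ephemeral storage and CPU) from the nodes list. Below is a small client-go sketch that prints the same two fields, assuming an already-constructed Clientset cs; the helper name printNodeCapacity is hypothetical.

package nodecheck

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists all nodes and prints the two capacity fields the
// NodePressure check reports: ephemeral storage and CPU.
func printNodeCapacity(cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}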
	I0401 18:22:15.232477   27284 start.go:240] waiting for startup goroutines ...
	I0401 18:22:15.232520   27284 start.go:254] writing updated cluster config ...
	I0401 18:22:15.234875   27284 out.go:177] 
	I0401 18:22:15.236604   27284 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:22:15.236698   27284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/config.json ...
	I0401 18:22:15.238641   27284 out.go:177] * Starting "ha-293078-m03" control-plane node in "ha-293078" cluster
	I0401 18:22:15.240260   27284 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 18:22:15.240290   27284 cache.go:56] Caching tarball of preloaded images
	I0401 18:22:15.240404   27284 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 18:22:15.240418   27284 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0401 18:22:15.240543   27284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/config.json ...
	I0401 18:22:15.240778   27284 start.go:360] acquireMachinesLock for ha-293078-m03: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 18:22:15.240832   27284 start.go:364] duration metric: took 30.303µs to acquireMachinesLock for "ha-293078-m03"
	I0401 18:22:15.240860   27284 start.go:93] Provisioning new machine with config: &{Name:ha-293078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 18:22:15.240993   27284 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0401 18:22:15.242691   27284 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0401 18:22:15.242789   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:22:15.242834   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:22:15.257972   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43817
	I0401 18:22:15.258429   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:22:15.258845   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:22:15.258869   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:22:15.259209   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:22:15.259398   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetMachineName
	I0401 18:22:15.259542   27284 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:22:15.259715   27284 start.go:159] libmachine.API.Create for "ha-293078" (driver="kvm2")
	I0401 18:22:15.259750   27284 client.go:168] LocalClient.Create starting
	I0401 18:22:15.259798   27284 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem
	I0401 18:22:15.259839   27284 main.go:141] libmachine: Decoding PEM data...
	I0401 18:22:15.259859   27284 main.go:141] libmachine: Parsing certificate...
	I0401 18:22:15.259921   27284 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem
	I0401 18:22:15.259946   27284 main.go:141] libmachine: Decoding PEM data...
	I0401 18:22:15.259963   27284 main.go:141] libmachine: Parsing certificate...
	I0401 18:22:15.259987   27284 main.go:141] libmachine: Running pre-create checks...
	I0401 18:22:15.259999   27284 main.go:141] libmachine: (ha-293078-m03) Calling .PreCreateCheck
	I0401 18:22:15.260182   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetConfigRaw
	I0401 18:22:15.260627   27284 main.go:141] libmachine: Creating machine...
	I0401 18:22:15.260646   27284 main.go:141] libmachine: (ha-293078-m03) Calling .Create
	I0401 18:22:15.260790   27284 main.go:141] libmachine: (ha-293078-m03) Creating KVM machine...
	I0401 18:22:15.262008   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found existing default KVM network
	I0401 18:22:15.262207   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found existing private KVM network mk-ha-293078
	I0401 18:22:15.262323   27284 main.go:141] libmachine: (ha-293078-m03) Setting up store path in /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03 ...
	I0401 18:22:15.262357   27284 main.go:141] libmachine: (ha-293078-m03) Building disk image from file:///home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso
	I0401 18:22:15.262397   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:15.262312   27927 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:22:15.262503   27284 main.go:141] libmachine: (ha-293078-m03) Downloading /home/jenkins/minikube-integration/18233-10493/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0401 18:22:15.480658   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:15.480550   27927 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa...
	I0401 18:22:15.697387   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:15.697262   27927 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/ha-293078-m03.rawdisk...
	I0401 18:22:15.697420   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Writing magic tar header
	I0401 18:22:15.697434   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Writing SSH key tar header
	I0401 18:22:15.697447   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:15.697381   27927 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03 ...
	I0401 18:22:15.697533   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03
	I0401 18:22:15.697567   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube/machines
	I0401 18:22:15.697581   27284 main.go:141] libmachine: (ha-293078-m03) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03 (perms=drwx------)
	I0401 18:22:15.697597   27284 main.go:141] libmachine: (ha-293078-m03) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube/machines (perms=drwxr-xr-x)
	I0401 18:22:15.697606   27284 main.go:141] libmachine: (ha-293078-m03) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube (perms=drwxr-xr-x)
	I0401 18:22:15.697616   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:22:15.697635   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493
	I0401 18:22:15.697668   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0401 18:22:15.697682   27284 main.go:141] libmachine: (ha-293078-m03) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493 (perms=drwxrwxr-x)
	I0401 18:22:15.697694   27284 main.go:141] libmachine: (ha-293078-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0401 18:22:15.697702   27284 main.go:141] libmachine: (ha-293078-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0401 18:22:15.697713   27284 main.go:141] libmachine: (ha-293078-m03) Creating domain...
	I0401 18:22:15.697723   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Checking permissions on dir: /home/jenkins
	I0401 18:22:15.697731   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Checking permissions on dir: /home
	I0401 18:22:15.697740   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Skipping /home - not owner
	I0401 18:22:15.698636   27284 main.go:141] libmachine: (ha-293078-m03) define libvirt domain using xml: 
	I0401 18:22:15.698654   27284 main.go:141] libmachine: (ha-293078-m03) <domain type='kvm'>
	I0401 18:22:15.698664   27284 main.go:141] libmachine: (ha-293078-m03)   <name>ha-293078-m03</name>
	I0401 18:22:15.698672   27284 main.go:141] libmachine: (ha-293078-m03)   <memory unit='MiB'>2200</memory>
	I0401 18:22:15.698681   27284 main.go:141] libmachine: (ha-293078-m03)   <vcpu>2</vcpu>
	I0401 18:22:15.698695   27284 main.go:141] libmachine: (ha-293078-m03)   <features>
	I0401 18:22:15.698705   27284 main.go:141] libmachine: (ha-293078-m03)     <acpi/>
	I0401 18:22:15.698712   27284 main.go:141] libmachine: (ha-293078-m03)     <apic/>
	I0401 18:22:15.698722   27284 main.go:141] libmachine: (ha-293078-m03)     <pae/>
	I0401 18:22:15.698738   27284 main.go:141] libmachine: (ha-293078-m03)     
	I0401 18:22:15.698750   27284 main.go:141] libmachine: (ha-293078-m03)   </features>
	I0401 18:22:15.698758   27284 main.go:141] libmachine: (ha-293078-m03)   <cpu mode='host-passthrough'>
	I0401 18:22:15.698784   27284 main.go:141] libmachine: (ha-293078-m03)   
	I0401 18:22:15.698806   27284 main.go:141] libmachine: (ha-293078-m03)   </cpu>
	I0401 18:22:15.698829   27284 main.go:141] libmachine: (ha-293078-m03)   <os>
	I0401 18:22:15.698847   27284 main.go:141] libmachine: (ha-293078-m03)     <type>hvm</type>
	I0401 18:22:15.698857   27284 main.go:141] libmachine: (ha-293078-m03)     <boot dev='cdrom'/>
	I0401 18:22:15.698867   27284 main.go:141] libmachine: (ha-293078-m03)     <boot dev='hd'/>
	I0401 18:22:15.698877   27284 main.go:141] libmachine: (ha-293078-m03)     <bootmenu enable='no'/>
	I0401 18:22:15.698891   27284 main.go:141] libmachine: (ha-293078-m03)   </os>
	I0401 18:22:15.698910   27284 main.go:141] libmachine: (ha-293078-m03)   <devices>
	I0401 18:22:15.698924   27284 main.go:141] libmachine: (ha-293078-m03)     <disk type='file' device='cdrom'>
	I0401 18:22:15.698939   27284 main.go:141] libmachine: (ha-293078-m03)       <source file='/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/boot2docker.iso'/>
	I0401 18:22:15.698952   27284 main.go:141] libmachine: (ha-293078-m03)       <target dev='hdc' bus='scsi'/>
	I0401 18:22:15.698965   27284 main.go:141] libmachine: (ha-293078-m03)       <readonly/>
	I0401 18:22:15.698977   27284 main.go:141] libmachine: (ha-293078-m03)     </disk>
	I0401 18:22:15.698988   27284 main.go:141] libmachine: (ha-293078-m03)     <disk type='file' device='disk'>
	I0401 18:22:15.699001   27284 main.go:141] libmachine: (ha-293078-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0401 18:22:15.699020   27284 main.go:141] libmachine: (ha-293078-m03)       <source file='/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/ha-293078-m03.rawdisk'/>
	I0401 18:22:15.699036   27284 main.go:141] libmachine: (ha-293078-m03)       <target dev='hda' bus='virtio'/>
	I0401 18:22:15.699049   27284 main.go:141] libmachine: (ha-293078-m03)     </disk>
	I0401 18:22:15.699061   27284 main.go:141] libmachine: (ha-293078-m03)     <interface type='network'>
	I0401 18:22:15.699075   27284 main.go:141] libmachine: (ha-293078-m03)       <source network='mk-ha-293078'/>
	I0401 18:22:15.699087   27284 main.go:141] libmachine: (ha-293078-m03)       <model type='virtio'/>
	I0401 18:22:15.699114   27284 main.go:141] libmachine: (ha-293078-m03)     </interface>
	I0401 18:22:15.699128   27284 main.go:141] libmachine: (ha-293078-m03)     <interface type='network'>
	I0401 18:22:15.699136   27284 main.go:141] libmachine: (ha-293078-m03)       <source network='default'/>
	I0401 18:22:15.699143   27284 main.go:141] libmachine: (ha-293078-m03)       <model type='virtio'/>
	I0401 18:22:15.699149   27284 main.go:141] libmachine: (ha-293078-m03)     </interface>
	I0401 18:22:15.699156   27284 main.go:141] libmachine: (ha-293078-m03)     <serial type='pty'>
	I0401 18:22:15.699162   27284 main.go:141] libmachine: (ha-293078-m03)       <target port='0'/>
	I0401 18:22:15.699168   27284 main.go:141] libmachine: (ha-293078-m03)     </serial>
	I0401 18:22:15.699174   27284 main.go:141] libmachine: (ha-293078-m03)     <console type='pty'>
	I0401 18:22:15.699179   27284 main.go:141] libmachine: (ha-293078-m03)       <target type='serial' port='0'/>
	I0401 18:22:15.699184   27284 main.go:141] libmachine: (ha-293078-m03)     </console>
	I0401 18:22:15.699190   27284 main.go:141] libmachine: (ha-293078-m03)     <rng model='virtio'>
	I0401 18:22:15.699197   27284 main.go:141] libmachine: (ha-293078-m03)       <backend model='random'>/dev/random</backend>
	I0401 18:22:15.699207   27284 main.go:141] libmachine: (ha-293078-m03)     </rng>
	I0401 18:22:15.699220   27284 main.go:141] libmachine: (ha-293078-m03)     
	I0401 18:22:15.699229   27284 main.go:141] libmachine: (ha-293078-m03)     
	I0401 18:22:15.699237   27284 main.go:141] libmachine: (ha-293078-m03)   </devices>
	I0401 18:22:15.699249   27284 main.go:141] libmachine: (ha-293078-m03) </domain>
	I0401 18:22:15.699284   27284 main.go:141] libmachine: (ha-293078-m03) 
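The XML logged above is the libvirt domain definition the kvm2 driver builds for the new node. As a hedged sketch, assuming the libvirt.org/go/libvirt bindings and the qemu:///system URI rather than minikube's own driver plumbing, defining and booting such a domain could look like this:

package kvmcreate

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

// defineAndStart registers a persistent domain from an XML definition like
// the one logged above, then boots it.
func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return fmt.Errorf("connect: %w", err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return fmt.Errorf("define domain: %w", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		return fmt.Errorf("start domain: %w", err)
	}
	return nil
}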
	I0401 18:22:15.706407   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:f3:b3:14 in network default
	I0401 18:22:15.706938   27284 main.go:141] libmachine: (ha-293078-m03) Ensuring networks are active...
	I0401 18:22:15.706962   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:15.707640   27284 main.go:141] libmachine: (ha-293078-m03) Ensuring network default is active
	I0401 18:22:15.707975   27284 main.go:141] libmachine: (ha-293078-m03) Ensuring network mk-ha-293078 is active
	I0401 18:22:15.708235   27284 main.go:141] libmachine: (ha-293078-m03) Getting domain xml...
	I0401 18:22:15.708926   27284 main.go:141] libmachine: (ha-293078-m03) Creating domain...
	I0401 18:22:16.934106   27284 main.go:141] libmachine: (ha-293078-m03) Waiting to get IP...
	I0401 18:22:16.934793   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:16.935189   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:16.935223   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:16.935181   27927 retry.go:31] will retry after 274.998784ms: waiting for machine to come up
	I0401 18:22:17.211745   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:17.212222   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:17.212247   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:17.212194   27927 retry.go:31] will retry after 343.27575ms: waiting for machine to come up
	I0401 18:22:17.556896   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:17.557376   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:17.557407   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:17.557329   27927 retry.go:31] will retry after 324.461798ms: waiting for machine to come up
	I0401 18:22:17.883686   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:17.884228   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:17.884252   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:17.884197   27927 retry.go:31] will retry after 570.272916ms: waiting for machine to come up
	I0401 18:22:18.455961   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:18.456493   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:18.456519   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:18.456448   27927 retry.go:31] will retry after 574.872908ms: waiting for machine to come up
	I0401 18:22:19.033116   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:19.033611   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:19.033660   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:19.033584   27927 retry.go:31] will retry after 712.864102ms: waiting for machine to come up
	I0401 18:22:19.747796   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:19.748252   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:19.748284   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:19.748204   27927 retry.go:31] will retry after 802.917773ms: waiting for machine to come up
	I0401 18:22:20.552842   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:20.553261   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:20.553304   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:20.553230   27927 retry.go:31] will retry after 1.335699542s: waiting for machine to come up
	I0401 18:22:21.889998   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:21.890536   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:21.890560   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:21.890491   27927 retry.go:31] will retry after 1.340623586s: waiting for machine to come up
	I0401 18:22:23.232366   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:23.232762   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:23.232784   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:23.232718   27927 retry.go:31] will retry after 1.518373355s: waiting for machine to come up
	I0401 18:22:24.753484   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:24.754025   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:24.754078   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:24.753975   27927 retry.go:31] will retry after 2.792717607s: waiting for machine to come up
	I0401 18:22:27.548044   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:27.548363   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:27.548389   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:27.548324   27927 retry.go:31] will retry after 3.534393293s: waiting for machine to come up
	I0401 18:22:31.084675   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:31.085143   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:31.085168   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:31.085102   27927 retry.go:31] will retry after 3.093541151s: waiting for machine to come up
	I0401 18:22:34.181384   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:34.181872   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find current IP address of domain ha-293078-m03 in network mk-ha-293078
	I0401 18:22:34.181901   27284 main.go:141] libmachine: (ha-293078-m03) DBG | I0401 18:22:34.181831   27927 retry.go:31] will retry after 4.953837373s: waiting for machine to come up
	I0401 18:22:39.138773   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:39.139329   27284 main.go:141] libmachine: (ha-293078-m03) Found IP for machine: 192.168.39.210
	I0401 18:22:39.139349   27284 main.go:141] libmachine: (ha-293078-m03) Reserving static IP address...
	I0401 18:22:39.139359   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has current primary IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:39.139669   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find host DHCP lease matching {name: "ha-293078-m03", mac: "52:54:00:48:33:4d", ip: "192.168.39.210"} in network mk-ha-293078
	I0401 18:22:39.210594   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Getting to WaitForSSH function...
	I0401 18:22:39.210623   27284 main.go:141] libmachine: (ha-293078-m03) Reserved static IP address: 192.168.39.210
	I0401 18:22:39.210641   27284 main.go:141] libmachine: (ha-293078-m03) Waiting for SSH to be available...
	I0401 18:22:39.213525   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:39.213907   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078
	I0401 18:22:39.213934   27284 main.go:141] libmachine: (ha-293078-m03) DBG | unable to find defined IP address of network mk-ha-293078 interface with MAC address 52:54:00:48:33:4d
	I0401 18:22:39.214010   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Using SSH client type: external
	I0401 18:22:39.214033   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa (-rw-------)
	I0401 18:22:39.214063   27284 main.go:141] libmachine: (ha-293078-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 18:22:39.214088   27284 main.go:141] libmachine: (ha-293078-m03) DBG | About to run SSH command:
	I0401 18:22:39.214125   27284 main.go:141] libmachine: (ha-293078-m03) DBG | exit 0
	I0401 18:22:39.217897   27284 main.go:141] libmachine: (ha-293078-m03) DBG | SSH cmd err, output: exit status 255: 
	I0401 18:22:39.217913   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0401 18:22:39.217923   27284 main.go:141] libmachine: (ha-293078-m03) DBG | command : exit 0
	I0401 18:22:39.217931   27284 main.go:141] libmachine: (ha-293078-m03) DBG | err     : exit status 255
	I0401 18:22:39.217942   27284 main.go:141] libmachine: (ha-293078-m03) DBG | output  : 
	I0401 18:22:42.218406   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Getting to WaitForSSH function...
	I0401 18:22:42.220893   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.221309   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:42.221341   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.221481   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Using SSH client type: external
	I0401 18:22:42.221500   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa (-rw-------)
	I0401 18:22:42.221518   27284 main.go:141] libmachine: (ha-293078-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 18:22:42.221528   27284 main.go:141] libmachine: (ha-293078-m03) DBG | About to run SSH command:
	I0401 18:22:42.221540   27284 main.go:141] libmachine: (ha-293078-m03) DBG | exit 0
	I0401 18:22:42.350186   27284 main.go:141] libmachine: (ha-293078-m03) DBG | SSH cmd err, output: <nil>: 
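The WaitForSSH step shells out to the system ssh client with host-key checking disabled and retries "exit 0" until it succeeds. A minimal sketch of that retry pattern, assuming the docker login user and a fixed 3-second retry interval (both illustrative), is below:

package sshwait

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries `ssh ... exit 0` against the new VM until the command
// exits 0 or the deadline passes, mirroring the probe-and-retry loop above.
func waitForSSH(ip, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "BatchMode=yes",
		"-i", keyPath,
		"docker@" + ip,
		"exit 0",
	}
	for {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh to %s not reachable within %v", ip, timeout)
		}
		time.Sleep(3 * time.Second) // retry interval chosen for the sketch
	}
}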
	I0401 18:22:42.350514   27284 main.go:141] libmachine: (ha-293078-m03) KVM machine creation complete!
	I0401 18:22:42.350907   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetConfigRaw
	I0401 18:22:42.351491   27284 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:22:42.351695   27284 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:22:42.351877   27284 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0401 18:22:42.351893   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetState
	I0401 18:22:42.353328   27284 main.go:141] libmachine: Detecting operating system of created instance...
	I0401 18:22:42.353344   27284 main.go:141] libmachine: Waiting for SSH to be available...
	I0401 18:22:42.353353   27284 main.go:141] libmachine: Getting to WaitForSSH function...
	I0401 18:22:42.353361   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:22:42.355867   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.356217   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:42.356244   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.356387   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:22:42.356589   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:42.356761   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:42.356906   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:22:42.357068   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:22:42.357354   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0401 18:22:42.357370   27284 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0401 18:22:42.469324   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
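The two "exit 0" probes above (first via the external /usr/bin/ssh client, then via the native Go client) are how libmachine decides the freshly created VM is reachable. A rough, self-contained Go sketch of that retry loop, reusing the external-ssh flag set logged earlier; the attempt count, interval, and key path in main are illustrative assumptions, not minikube's actual values:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForSSH keeps running "exit 0" on the target until it succeeds, mirroring
// the WaitForSSH probe in the log above. The ssh options mirror the external
// client invocation logged earlier; attempts and interval are illustrative.
func waitForSSH(addr, keyPath string, attempts int, interval time.Duration) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + addr,
		"exit 0",
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		if lastErr = exec.Command("ssh", args...).Run(); lastErr == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("ssh on %s never became ready: %w", addr, lastErr)
}

func main() {
	key := os.ExpandEnv("$HOME/.minikube/machines/ha-293078-m03/id_rsa")
	if err := waitForSSH("192.168.39.210", key, 30, 2*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("ssh is up")
}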
	I0401 18:22:42.469346   27284 main.go:141] libmachine: Detecting the provisioner...
	I0401 18:22:42.469356   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:22:42.472136   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.472608   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:42.472640   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.472927   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:22:42.473124   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:42.473349   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:42.473510   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:22:42.473769   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:22:42.473989   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0401 18:22:42.474007   27284 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0401 18:22:42.587119   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0401 18:22:42.587197   27284 main.go:141] libmachine: found compatible host: buildroot
	I0401 18:22:42.587208   27284 main.go:141] libmachine: Provisioning with buildroot...
	I0401 18:22:42.587215   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetMachineName
	I0401 18:22:42.587512   27284 buildroot.go:166] provisioning hostname "ha-293078-m03"
	I0401 18:22:42.587540   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetMachineName
	I0401 18:22:42.587740   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:22:42.590585   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.590866   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:42.590909   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.591022   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:22:42.591263   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:42.591423   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:42.591528   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:22:42.591685   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:22:42.591832   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0401 18:22:42.591844   27284 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-293078-m03 && echo "ha-293078-m03" | sudo tee /etc/hostname
	I0401 18:22:42.722982   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-293078-m03
	
	I0401 18:22:42.723057   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:22:42.726506   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.726906   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:42.726929   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.727143   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:22:42.727315   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:42.727475   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:42.727608   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:22:42.727796   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:22:42.728012   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0401 18:22:42.728036   27284 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-293078-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-293078-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-293078-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 18:22:42.853808   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 18:22:42.853838   27284 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 18:22:42.853857   27284 buildroot.go:174] setting up certificates
	I0401 18:22:42.853869   27284 provision.go:84] configureAuth start
	I0401 18:22:42.853881   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetMachineName
	I0401 18:22:42.854202   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetIP
	I0401 18:22:42.856795   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.857151   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:42.857177   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.857343   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:22:42.859327   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.859700   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:42.859727   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:42.859864   27284 provision.go:143] copyHostCerts
	I0401 18:22:42.859892   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 18:22:42.859929   27284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 18:22:42.859942   27284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 18:22:42.860016   27284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 18:22:42.860104   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 18:22:42.860132   27284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 18:22:42.860142   27284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 18:22:42.860180   27284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 18:22:42.860275   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 18:22:42.860299   27284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 18:22:42.860318   27284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 18:22:42.860377   27284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 18:22:42.860445   27284 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.ha-293078-m03 san=[127.0.0.1 192.168.39.210 ha-293078-m03 localhost minikube]
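The configureAuth step above issues a fresh server certificate whose SAN list covers the node IP, hostname, and the loopback aliases. A minimal Go sketch of issuing such a CA-signed certificate with crypto/x509; the in-memory throwaway CA and the newServerCert helper are illustrative stand-ins for the ca.pem/ca-key.pem files minikube actually loads, not its provision.go code:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

// newServerCert signs a server certificate for the given SANs with the supplied CA,
// the same shape of artifact the "generating server cert ... san=[...]" step produces.
func newServerCert(caCert *x509.Certificate, caKey *ecdsa.PrivateKey, dnsNames []string, ips []net.IP) ([]byte, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-293078-m03"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames, // ha-293078-m03, localhost, minikube
		IPAddresses:  ips,      // 127.0.0.1, 192.168.39.210
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	// Throwaway in-memory CA standing in for .minikube/certs/ca.pem / ca-key.pem
	// (errors elided for brevity in this setup block).
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "exampleCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	certPEM, err := newServerCert(caCert, caKey,
		[]string{"ha-293078-m03", "localhost", "minikube"},
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.210")})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(string(certPEM))
}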
	I0401 18:22:43.069193   27284 provision.go:177] copyRemoteCerts
	I0401 18:22:43.069245   27284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 18:22:43.069265   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:22:43.072120   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.072524   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:43.072558   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.072758   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:22:43.072958   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:43.073150   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:22:43.073348   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa Username:docker}
	I0401 18:22:43.160885   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0401 18:22:43.161038   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 18:22:43.189775   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0401 18:22:43.189846   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 18:22:43.217958   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0401 18:22:43.218044   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0401 18:22:43.246492   27284 provision.go:87] duration metric: took 392.611337ms to configureAuth
	I0401 18:22:43.246516   27284 buildroot.go:189] setting minikube options for container-runtime
	I0401 18:22:43.246728   27284 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:22:43.246805   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:22:43.250048   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.250413   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:43.250436   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.250629   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:22:43.250848   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:43.251032   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:43.251197   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:22:43.251358   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:22:43.251558   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0401 18:22:43.251577   27284 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 18:22:43.559740   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 18:22:43.559778   27284 main.go:141] libmachine: Checking connection to Docker...
	I0401 18:22:43.559790   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetURL
	I0401 18:22:43.561050   27284 main.go:141] libmachine: (ha-293078-m03) DBG | Using libvirt version 6000000
	I0401 18:22:43.563234   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.563588   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:43.563618   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.563855   27284 main.go:141] libmachine: Docker is up and running!
	I0401 18:22:43.563875   27284 main.go:141] libmachine: Reticulating splines...
	I0401 18:22:43.563886   27284 client.go:171] duration metric: took 28.304121653s to LocalClient.Create
	I0401 18:22:43.563928   27284 start.go:167] duration metric: took 28.304201294s to libmachine.API.Create "ha-293078"
	I0401 18:22:43.563942   27284 start.go:293] postStartSetup for "ha-293078-m03" (driver="kvm2")
	I0401 18:22:43.563957   27284 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 18:22:43.563978   27284 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:22:43.564208   27284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 18:22:43.564231   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:22:43.566382   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.566669   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:43.566696   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.566840   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:22:43.567044   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:43.567216   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:22:43.567360   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa Username:docker}
	I0401 18:22:43.658485   27284 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 18:22:43.663610   27284 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 18:22:43.663634   27284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 18:22:43.663699   27284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 18:22:43.663813   27284 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 18:22:43.663826   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> /etc/ssl/certs/177512.pem
	I0401 18:22:43.663946   27284 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 18:22:43.674306   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 18:22:43.707812   27284 start.go:296] duration metric: took 143.85525ms for postStartSetup
	I0401 18:22:43.707865   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetConfigRaw
	I0401 18:22:43.708531   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetIP
	I0401 18:22:43.711192   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.711524   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:43.711553   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.711756   27284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/config.json ...
	I0401 18:22:43.711950   27284 start.go:128] duration metric: took 28.470946976s to createHost
	I0401 18:22:43.711978   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:22:43.714466   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.714826   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:43.714854   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.715058   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:22:43.715263   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:43.715460   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:43.715657   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:22:43.715928   27284 main.go:141] libmachine: Using SSH client type: native
	I0401 18:22:43.716121   27284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0401 18:22:43.716137   27284 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 18:22:43.831268   27284 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711995763.818865754
	
	I0401 18:22:43.831295   27284 fix.go:216] guest clock: 1711995763.818865754
	I0401 18:22:43.831304   27284 fix.go:229] Guest: 2024-04-01 18:22:43.818865754 +0000 UTC Remote: 2024-04-01 18:22:43.711963464 +0000 UTC m=+155.586958148 (delta=106.90229ms)
	I0401 18:22:43.831323   27284 fix.go:200] guest clock delta is within tolerance: 106.90229ms
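The clock check above parses the guest's "date +%s.%N" output and compares it with the host's wall clock. A small Go sketch of that comparison using the values from the log; the one-second tolerance is an illustrative assumption rather than minikube's configured threshold, and float64 parsing loses a little sub-microsecond precision:

package main

import (
	"fmt"
	"math"
	"time"
)

// clockDelta reports guest-minus-host skew and whether it falls inside the
// tolerance, mirroring the "guest clock delta is within tolerance" line above.
func clockDelta(guestUnixSeconds float64, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	guest := time.Unix(0, int64(guestUnixSeconds*float64(time.Second)))
	delta := guest.Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	// Values copied from the log: guest clock 1711995763.818865754,
	// host-side timestamp 2024-04-01 18:22:43.711963464 UTC.
	host := time.Date(2024, 4, 1, 18, 22, 43, 711963464, time.UTC)
	delta, ok := clockDelta(1711995763.818865754, host, time.Second)
	fmt.Printf("delta=%v, within tolerance=%v\n", delta, ok)
}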
	I0401 18:22:43.831331   27284 start.go:83] releasing machines lock for "ha-293078-m03", held for 28.590480335s
	I0401 18:22:43.831356   27284 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:22:43.831656   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetIP
	I0401 18:22:43.834240   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.834620   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:43.834650   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.837074   27284 out.go:177] * Found network options:
	I0401 18:22:43.838633   27284 out.go:177]   - NO_PROXY=192.168.39.74,192.168.39.161
	W0401 18:22:43.840028   27284 proxy.go:119] fail to check proxy env: Error ip not in block
	W0401 18:22:43.840051   27284 proxy.go:119] fail to check proxy env: Error ip not in block
	I0401 18:22:43.840067   27284 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:22:43.840552   27284 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:22:43.840682   27284 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:22:43.840802   27284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 18:22:43.840846   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	W0401 18:22:43.840962   27284 proxy.go:119] fail to check proxy env: Error ip not in block
	W0401 18:22:43.840990   27284 proxy.go:119] fail to check proxy env: Error ip not in block
	I0401 18:22:43.841048   27284 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 18:22:43.841070   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:22:43.843535   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.843912   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.843951   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:43.843973   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.844132   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:22:43.844301   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:43.844371   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:43.844396   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:43.844458   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:22:43.844612   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:22:43.844630   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa Username:docker}
	I0401 18:22:43.844804   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:22:43.844948   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:22:43.845092   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa Username:docker}
	I0401 18:22:44.088753   27284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 18:22:44.096862   27284 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 18:22:44.096933   27284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 18:22:44.116332   27284 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 18:22:44.116354   27284 start.go:494] detecting cgroup driver to use...
	I0401 18:22:44.116426   27284 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 18:22:44.134504   27284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 18:22:44.150718   27284 docker.go:217] disabling cri-docker service (if available) ...
	I0401 18:22:44.150777   27284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 18:22:44.166834   27284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 18:22:44.182147   27284 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 18:22:44.301129   27284 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 18:22:44.463554   27284 docker.go:233] disabling docker service ...
	I0401 18:22:44.463608   27284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 18:22:44.479887   27284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 18:22:44.495528   27284 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 18:22:44.621231   27284 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 18:22:44.756683   27284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 18:22:44.773052   27284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 18:22:44.795770   27284 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 18:22:44.795842   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:22:44.808660   27284 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 18:22:44.808719   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:22:44.820537   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:22:44.832408   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:22:44.844498   27284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 18:22:44.858051   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:22:44.871522   27284 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:22:44.893438   27284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:22:44.906913   27284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 18:22:44.916966   27284 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 18:22:44.917022   27284 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 18:22:44.931059   27284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 18:22:44.943888   27284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:22:45.065749   27284 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 18:22:45.216685   27284 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 18:22:45.216747   27284 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 18:22:45.222543   27284 start.go:562] Will wait 60s for crictl version
	I0401 18:22:45.222606   27284 ssh_runner.go:195] Run: which crictl
	I0401 18:22:45.226850   27284 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 18:22:45.275028   27284 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 18:22:45.275103   27284 ssh_runner.go:195] Run: crio --version
	I0401 18:22:45.306557   27284 ssh_runner.go:195] Run: crio --version
	I0401 18:22:45.345397   27284 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 18:22:45.346780   27284 out.go:177]   - env NO_PROXY=192.168.39.74
	I0401 18:22:45.348069   27284 out.go:177]   - env NO_PROXY=192.168.39.74,192.168.39.161
	I0401 18:22:45.349221   27284 main.go:141] libmachine: (ha-293078-m03) Calling .GetIP
	I0401 18:22:45.352039   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:45.352397   27284 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:22:45.352420   27284 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:22:45.352637   27284 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 18:22:45.357452   27284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 18:22:45.374262   27284 mustload.go:65] Loading cluster: ha-293078
	I0401 18:22:45.374525   27284 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:22:45.374841   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:22:45.374880   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:22:45.390376   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34985
	I0401 18:22:45.390855   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:22:45.391310   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:22:45.391339   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:22:45.391689   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:22:45.391880   27284 main.go:141] libmachine: (ha-293078) Calling .GetState
	I0401 18:22:45.393476   27284 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:22:45.393795   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:22:45.393835   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:22:45.409540   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42477
	I0401 18:22:45.410102   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:22:45.410576   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:22:45.410598   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:22:45.410902   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:22:45.411103   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:22:45.411351   27284 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078 for IP: 192.168.39.210
	I0401 18:22:45.411363   27284 certs.go:194] generating shared ca certs ...
	I0401 18:22:45.411378   27284 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:22:45.411516   27284 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 18:22:45.411585   27284 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 18:22:45.411601   27284 certs.go:256] generating profile certs ...
	I0401 18:22:45.411689   27284 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.key
	I0401 18:22:45.411722   27284 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.b60f2778
	I0401 18:22:45.411741   27284 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.b60f2778 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.74 192.168.39.161 192.168.39.210 192.168.39.254]
	I0401 18:22:45.477539   27284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.b60f2778 ...
	I0401 18:22:45.477567   27284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.b60f2778: {Name:mk94d9c7e7188961a9f9c22990b934c3aa1a24dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:22:45.477762   27284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.b60f2778 ...
	I0401 18:22:45.477777   27284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.b60f2778: {Name:mk111f6467b10a108cf38d970880495e36f6720e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:22:45.477856   27284 certs.go:381] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.b60f2778 -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt
	I0401 18:22:45.477978   27284 certs.go:385] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.b60f2778 -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key
	I0401 18:22:45.478092   27284 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key
	I0401 18:22:45.478108   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0401 18:22:45.478126   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0401 18:22:45.478139   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0401 18:22:45.478152   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0401 18:22:45.478165   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0401 18:22:45.478180   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0401 18:22:45.478202   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0401 18:22:45.478216   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0401 18:22:45.478272   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 18:22:45.478311   27284 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 18:22:45.478328   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 18:22:45.478361   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 18:22:45.478392   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 18:22:45.478426   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 18:22:45.478477   27284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 18:22:45.478514   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem -> /usr/share/ca-certificates/17751.pem
	I0401 18:22:45.478534   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> /usr/share/ca-certificates/177512.pem
	I0401 18:22:45.478553   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:22:45.478592   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:22:45.481548   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:22:45.482060   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:22:45.482091   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:22:45.482329   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:22:45.482502   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:22:45.482724   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:22:45.482872   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:22:45.566043   27284 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0401 18:22:45.572928   27284 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0401 18:22:45.587934   27284 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0401 18:22:45.593056   27284 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0401 18:22:45.606420   27284 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0401 18:22:45.611489   27284 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0401 18:22:45.623901   27284 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0401 18:22:45.629467   27284 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0401 18:22:45.644835   27284 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0401 18:22:45.650094   27284 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0401 18:22:45.664304   27284 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0401 18:22:45.669609   27284 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0401 18:22:45.686761   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 18:22:45.721908   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 18:22:45.751085   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 18:22:45.780434   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 18:22:45.809788   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0401 18:22:45.841183   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 18:22:45.870562   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 18:22:45.898873   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 18:22:45.925581   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 18:22:45.954742   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 18:22:45.982628   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 18:22:46.009204   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0401 18:22:46.028338   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0401 18:22:46.047940   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0401 18:22:46.067139   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0401 18:22:46.087039   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0401 18:22:46.107123   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0401 18:22:46.127244   27284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0401 18:22:46.147755   27284 ssh_runner.go:195] Run: openssl version
	I0401 18:22:46.154170   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 18:22:46.167851   27284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 18:22:46.172914   27284 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 18:22:46.172960   27284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 18:22:46.179198   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 18:22:46.192958   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 18:22:46.205944   27284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:22:46.210934   27284 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:22:46.210976   27284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:22:46.217079   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 18:22:46.230500   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 18:22:46.245390   27284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 18:22:46.250183   27284 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 18:22:46.250231   27284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 18:22:46.257033   27284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 18:22:46.270296   27284 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 18:22:46.275209   27284 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 18:22:46.275263   27284 kubeadm.go:928] updating node {m03 192.168.39.210 8443 v1.29.3 crio true true} ...
	I0401 18:22:46.275333   27284 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-293078-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
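The [Unit]/[Service] block above is the kubelet systemd drop-in minikube renders per node. A small Go sketch of rendering that drop-in from the node parameters with text/template; the template string is reconstructed from the log output above, not copied from minikube's bootstrapper source:

package main

import (
	"os"
	"text/template"
)

// kubeletDropIn is an illustrative template for the systemd unit shown in the
// log; the real template has more fields and lives in minikube's bootstrapper.
const kubeletDropIn = `[Unit]
Wants={{.ContainerRuntime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	// Values taken from the log for node ha-293078-m03.
	_ = t.Execute(os.Stdout, map[string]string{
		"ContainerRuntime":  "crio",
		"KubernetesVersion": "v1.29.3",
		"NodeName":          "ha-293078-m03",
		"NodeIP":            "192.168.39.210",
	})
}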
	I0401 18:22:46.275355   27284 kube-vip.go:111] generating kube-vip config ...
	I0401 18:22:46.275392   27284 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0401 18:22:46.295926   27284 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0401 18:22:46.295991   27284 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
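kube-vip runs as a static pod, so the rendered manifest above only needs to land in the kubelet's manifest directory (a later log line copies it to /etc/kubernetes/manifests/kube-vip.yaml). A sketch of doing that write atomically so kubelet never observes a half-written manifest; the directory, file mode, and temp-file naming here are conventional assumptions, not minikube's exact mechanism:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writeStaticPod drops a manifest into the kubelet static-pod directory.
// Writing to a temp file in the same directory and renaming keeps kubelet
// from picking up a partially written file.
func writeStaticPod(manifestDir, name string, data []byte) error {
	tmp, err := os.CreateTemp(manifestDir, "."+name+".tmp-*")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // no-op once the rename below succeeds
	if _, err := tmp.Write(data); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Chmod(0o644); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	return os.Rename(tmp.Name(), filepath.Join(manifestDir, name))
}

func main() {
	// Placeholder contents; in practice this would be the kube-vip manifest shown above.
	manifest := []byte("apiVersion: v1\nkind: Pod\n")
	if err := writeStaticPod("/etc/kubernetes/manifests", "kube-vip.yaml", manifest); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}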
	I0401 18:22:46.296045   27284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 18:22:46.316543   27284 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0401 18:22:46.316630   27284 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0401 18:22:46.329508   27284 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0401 18:22:46.329541   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0401 18:22:46.329607   27284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0401 18:22:46.329508   27284 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0401 18:22:46.329513   27284 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0401 18:22:46.329657   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0401 18:22:46.329699   27284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:22:46.329773   27284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0401 18:22:46.348542   27284 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0401 18:22:46.348583   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0401 18:22:46.348597   27284 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0401 18:22:46.348633   27284 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0401 18:22:46.348658   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0401 18:22:46.348703   27284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0401 18:22:46.387206   27284 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0401 18:22:46.387253   27284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
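	The kubeadm/kubectl/kubelet binaries are pulled from dl.k8s.io and validated against the published .sha256 files (the checksum= query in the URLs above). The equivalent manual download-and-verify, pinned to the v1.29.3 used here, is roughly:
	# Sketch: fetch kubeadm v1.29.3 and verify it against the published SHA-256.
	curl -LO "https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm"
	curl -LO "https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256"
	echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check    # prints "kubeadm: OK" on success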
	I0401 18:22:47.461486   27284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0401 18:22:47.472555   27284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0401 18:22:47.493624   27284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 18:22:47.514093   27284 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0401 18:22:47.533490   27284 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0401 18:22:47.538325   27284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 18:22:47.553275   27284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:22:47.678379   27284 ssh_runner.go:195] Run: sudo systemctl start kubelet
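	The /etc/hosts rewrite a few lines above is idempotent: any existing control-plane.minikube.internal entry is filtered out before the VIP line is appended, so repeated runs leave exactly one entry. A quick check on the node (hypothetical, not part of the test):
	# Should resolve to the HA VIP 192.168.39.254 when run on the VM.
	getent hosts control-plane.minikube.internal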
	I0401 18:22:47.700985   27284 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:22:47.701374   27284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:22:47.701417   27284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:22:47.717953   27284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39563
	I0401 18:22:47.718395   27284 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:22:47.718861   27284 main.go:141] libmachine: Using API Version  1
	I0401 18:22:47.718887   27284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:22:47.719230   27284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:22:47.719427   27284 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:22:47.719576   27284 start.go:316] joinCluster: &{Name:ha-293078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cluster
Name:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:22:47.719684   27284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0401 18:22:47.719705   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:22:47.722784   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:22:47.723256   27284 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:22:47.723283   27284 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:22:47.723430   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:22:47.723592   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:22:47.723772   27284 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:22:47.723904   27284 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:22:47.897889   27284 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 18:22:47.897928   27284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zlz8yw.nr6jjfmjltmu3ae7 --discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-293078-m03 --control-plane --apiserver-advertise-address=192.168.39.210 --apiserver-bind-port=8443"
	I0401 18:23:14.465535   27284 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zlz8yw.nr6jjfmjltmu3ae7 --discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-293078-m03 --control-plane --apiserver-advertise-address=192.168.39.210 --apiserver-bind-port=8443": (26.567580255s)
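	The join command above was generated on an existing control plane with `kubeadm token create --print-join-command --ttl=0` and replayed with the extra --control-plane and --apiserver-advertise-address flags for the new member; minikube distributes the shared CA and certificates itself, which is why no --certificate-key appears. With stock kubeadm the control-plane join usually looks more like this sketch (all values are placeholders, not from this run):
	# Stock kubeadm flow for adding another control-plane node (sketch).
	kubeadm token create --print-join-command --ttl 0       # run on an existing control-plane node
	kubeadm init phase upload-certs --upload-certs          # prints a certificate key for cert distribution
	# Then, on the joining node:
	# kubeadm join <endpoint>:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
	#     --control-plane --certificate-key <key> --apiserver-advertise-address <node-ip>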
	I0401 18:23:14.465575   27284 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0401 18:23:15.164461   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-293078-m03 minikube.k8s.io/updated_at=2024_04_01T18_23_15_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=ha-293078 minikube.k8s.io/primary=false
	I0401 18:23:15.331409   27284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-293078-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0401 18:23:15.534561   27284 start.go:318] duration metric: took 27.814978339s to joinCluster
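	Immediately after the join, the new member is labeled with minikube bookkeeping metadata and the control-plane NoSchedule taint is removed so it can also schedule workloads. The same two operations with plain kubectl (a sketch using the node name from the log):
	# Label the new member and drop the control-plane NoSchedule taint (trailing '-' removes the taint).
	kubectl label --overwrite node ha-293078-m03 minikube.k8s.io/primary=false
	kubectl taint node ha-293078-m03 node-role.kubernetes.io/control-plane:NoSchedule-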
	I0401 18:23:15.534645   27284 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 18:23:15.536312   27284 out.go:177] * Verifying Kubernetes components...
	I0401 18:23:15.535095   27284 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:23:15.537738   27284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:23:15.846349   27284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 18:23:15.888554   27284 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 18:23:15.888927   27284 kapi.go:59] client config for ha-293078: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.crt", KeyFile:"/home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.key", CAFile:"/home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5ca00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0401 18:23:15.889004   27284 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.74:8443
	I0401 18:23:15.889291   27284 node_ready.go:35] waiting up to 6m0s for node "ha-293078-m03" to be "Ready" ...
	I0401 18:23:15.889384   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:15.889396   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:15.889406   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:15.889412   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:15.893245   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:16.389548   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:16.389570   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:16.389580   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:16.389585   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:16.393292   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:16.889673   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:16.889709   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:16.889722   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:16.889729   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:16.893549   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:17.389820   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:17.389845   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:17.389857   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:17.389865   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:17.394046   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:17.890405   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:17.890431   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:17.890442   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:17.890448   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:17.893911   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:17.894753   27284 node_ready.go:53] node "ha-293078-m03" has status "Ready":"False"
	I0401 18:23:18.389954   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:18.389979   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:18.389987   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:18.389992   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:18.392851   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:23:18.890430   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:18.890452   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:18.890463   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:18.890473   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:18.901311   27284 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0401 18:23:18.902247   27284 node_ready.go:49] node "ha-293078-m03" has status "Ready":"True"
	I0401 18:23:18.902271   27284 node_ready.go:38] duration metric: took 3.012956296s for node "ha-293078-m03" to be "Ready" ...
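	The polling loop above issues GET /api/v1/nodes/ha-293078-m03 roughly every 500ms until the node's Ready condition turns True. Expressed with kubectl, the same wait would be, as a sketch:
	# Equivalent readiness wait (6m matches the timeout stated in the log).
	kubectl wait --for=condition=Ready node/ha-293078-m03 --timeout=6m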
	I0401 18:23:18.902282   27284 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 18:23:18.902357   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods
	I0401 18:23:18.902371   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:18.902380   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:18.902388   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:18.908821   27284 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 18:23:18.917997   27284 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-8v456" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:18.918068   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-8v456
	I0401 18:23:18.918078   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:18.918086   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:18.918090   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:18.921933   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:18.922667   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:23:18.922680   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:18.922688   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:18.922692   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:18.926297   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:18.927021   27284 pod_ready.go:92] pod "coredns-76f75df574-8v456" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:18.927042   27284 pod_ready.go:81] duration metric: took 9.022032ms for pod "coredns-76f75df574-8v456" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:18.927050   27284 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-sqxnb" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:18.927098   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-sqxnb
	I0401 18:23:18.927126   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:18.927133   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:18.927137   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:18.930084   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:23:18.930729   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:23:18.930746   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:18.930755   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:18.930761   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:18.933368   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:23:18.933987   27284 pod_ready.go:92] pod "coredns-76f75df574-sqxnb" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:18.934003   27284 pod_ready.go:81] duration metric: took 6.947943ms for pod "coredns-76f75df574-sqxnb" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:18.934011   27284 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:18.934050   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078
	I0401 18:23:18.934057   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:18.934063   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:18.934071   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:18.936855   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:23:18.937424   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:23:18.937438   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:18.937445   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:18.937448   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:18.939930   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:23:18.940414   27284 pod_ready.go:92] pod "etcd-ha-293078" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:18.940432   27284 pod_ready.go:81] duration metric: took 6.414484ms for pod "etcd-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:18.940445   27284 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:18.940500   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m02
	I0401 18:23:18.940510   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:18.940520   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:18.940529   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:18.943265   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:23:18.943837   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:23:18.943856   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:18.943865   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:18.943869   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:18.946965   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:18.948091   27284 pod_ready.go:92] pod "etcd-ha-293078-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:18.948105   27284 pod_ready.go:81] duration metric: took 7.654042ms for pod "etcd-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:18.948112   27284 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-293078-m03" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:19.090402   27284 request.go:629] Waited for 142.236463ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:19.090485   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:19.090497   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:19.090504   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:19.090509   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:19.094389   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:19.290798   27284 request.go:629] Waited for 195.392287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:19.290849   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:19.290854   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:19.290868   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:19.290879   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:19.294707   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:19.490696   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:19.490718   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:19.490730   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:19.490736   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:19.494934   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:19.691066   27284 request.go:629] Waited for 195.382723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:19.691114   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:19.691119   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:19.691127   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:19.691133   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:19.694536   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:19.948893   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:19.948914   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:19.948921   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:19.948926   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:19.953796   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:20.090832   27284 request.go:629] Waited for 136.206148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:20.090909   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:20.090917   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:20.090926   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:20.090932   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:20.094356   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:20.448982   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:20.449001   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:20.449009   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:20.449013   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:20.454151   27284 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 18:23:20.491189   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:20.491213   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:20.491225   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:20.491232   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:20.495220   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:20.948349   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:20.948371   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:20.948379   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:20.948383   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:20.951977   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:20.952768   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:20.952793   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:20.952805   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:20.952810   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:20.955572   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:23:20.956401   27284 pod_ready.go:102] pod "etcd-ha-293078-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 18:23:21.448709   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:21.448730   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:21.448741   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:21.448746   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:21.453047   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:21.453827   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:21.453846   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:21.453857   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:21.453862   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:21.459506   27284 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 18:23:21.948488   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:21.948510   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:21.948518   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:21.948522   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:21.952961   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:21.954159   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:21.954177   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:21.954188   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:21.954192   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:21.957281   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:22.448514   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:22.448531   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:22.448539   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:22.448543   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:22.454087   27284 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 18:23:22.455015   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:22.455034   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:22.455044   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:22.455052   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:22.458813   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:22.949284   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:22.949307   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:22.949320   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:22.949327   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:22.952746   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:22.953718   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:22.953736   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:22.953747   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:22.953757   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:22.957590   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:22.958751   27284 pod_ready.go:102] pod "etcd-ha-293078-m03" in "kube-system" namespace has status "Ready":"False"
	I0401 18:23:23.449230   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:23.449255   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:23.449264   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:23.449271   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:23.455030   27284 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 18:23:23.455927   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:23.455949   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:23.455960   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:23.455967   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:23.462037   27284 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 18:23:23.948885   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:23.948907   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:23.948917   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:23.948921   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:23.952749   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:23.953885   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:23.953905   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:23.953915   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:23.953919   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:23.957406   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:24.448874   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:24.448904   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:24.448916   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:24.448922   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:24.456101   27284 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 18:23:24.456945   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:24.456967   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:24.456977   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:24.456982   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:24.460705   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:24.949033   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/etcd-ha-293078-m03
	I0401 18:23:24.949052   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:24.949060   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:24.949064   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:24.952838   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:24.954000   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:24.954014   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:24.954022   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:24.954027   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:24.957518   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:24.958453   27284 pod_ready.go:92] pod "etcd-ha-293078-m03" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:24.958478   27284 pod_ready.go:81] duration metric: took 6.010358402s for pod "etcd-ha-293078-m03" in "kube-system" namespace to be "Ready" ...
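	Each pod_ready wait follows the same pattern: fetch the pod from kube-system, fetch the node it runs on, and accept once the pod reports Ready. For the etcd members a kubectl equivalent would be, as a sketch:
	# Wait for all static etcd pods in kube-system to report Ready (component=etcd is the label kubeadm sets).
	kubectl -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=6m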
	I0401 18:23:24.958500   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:24.958573   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-293078
	I0401 18:23:24.958585   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:24.958595   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:24.958605   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:24.961697   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:24.962630   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:23:24.962648   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:24.962658   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:24.962662   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:24.965776   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:24.966674   27284 pod_ready.go:92] pod "kube-apiserver-ha-293078" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:24.966696   27284 pod_ready.go:81] duration metric: took 8.18025ms for pod "kube-apiserver-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:24.966708   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:24.966772   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-293078-m02
	I0401 18:23:24.966783   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:24.966793   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:24.966804   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:24.969662   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:23:24.970502   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:23:24.970517   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:24.970525   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:24.970531   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:24.973202   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:23:24.973799   27284 pod_ready.go:92] pod "kube-apiserver-ha-293078-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:24.973813   27284 pod_ready.go:81] duration metric: took 7.09873ms for pod "kube-apiserver-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:24.973827   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-293078-m03" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:25.091128   27284 request.go:629] Waited for 117.24775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-293078-m03
	I0401 18:23:25.091215   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-293078-m03
	I0401 18:23:25.091226   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:25.091238   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:25.091248   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:25.097772   27284 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 18:23:25.291420   27284 request.go:629] Waited for 192.202464ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:25.291485   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:25.291490   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:25.291501   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:25.291505   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:25.295096   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:25.490850   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-293078-m03
	I0401 18:23:25.490872   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:25.490880   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:25.490885   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:25.494669   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:25.690833   27284 request.go:629] Waited for 195.192282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:25.690924   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:25.690935   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:25.690943   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:25.690951   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:25.694528   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:25.974711   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-293078-m03
	I0401 18:23:25.974741   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:25.974753   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:25.974777   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:25.978565   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:26.091037   27284 request.go:629] Waited for 111.305411ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:26.091102   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:26.091109   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:26.091121   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:26.091133   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:26.095369   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:26.095925   27284 pod_ready.go:92] pod "kube-apiserver-ha-293078-m03" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:26.095941   27284 pod_ready.go:81] duration metric: took 1.12210596s for pod "kube-apiserver-ha-293078-m03" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:26.095951   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:26.291382   27284 request.go:629] Waited for 195.357086ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078
	I0401 18:23:26.291451   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078
	I0401 18:23:26.291459   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:26.291469   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:26.291483   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:26.294977   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:26.490960   27284 request.go:629] Waited for 195.101001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:23:26.491035   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:23:26.491044   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:26.491052   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:26.491061   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:26.497846   27284 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 18:23:26.498954   27284 pod_ready.go:92] pod "kube-controller-manager-ha-293078" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:26.498971   27284 pod_ready.go:81] duration metric: took 403.014452ms for pod "kube-controller-manager-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:26.498981   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:26.691040   27284 request.go:629] Waited for 192.000125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078-m02
	I0401 18:23:26.691105   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078-m02
	I0401 18:23:26.691113   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:26.691121   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:26.691128   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:26.695475   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:26.890804   27284 request.go:629] Waited for 194.161305ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:23:26.890856   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:23:26.890862   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:26.890869   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:26.890874   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:26.894943   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:26.895672   27284 pod_ready.go:92] pod "kube-controller-manager-ha-293078-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:26.895688   27284 pod_ready.go:81] duration metric: took 396.701752ms for pod "kube-controller-manager-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:26.895700   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-293078-m03" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:27.090831   27284 request.go:629] Waited for 195.062948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078-m03
	I0401 18:23:27.090887   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078-m03
	I0401 18:23:27.090907   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:27.090938   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:27.090946   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:27.094642   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:27.290846   27284 request.go:629] Waited for 195.388812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:27.290925   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:27.290935   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:27.290950   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:27.290958   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:27.294816   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:27.490793   27284 request.go:629] Waited for 94.269817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078-m03
	I0401 18:23:27.490843   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078-m03
	I0401 18:23:27.490849   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:27.490857   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:27.490863   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:27.494415   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:27.690578   27284 request.go:629] Waited for 195.27528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:27.690627   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:27.690632   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:27.690639   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:27.690645   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:27.694778   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:27.896224   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078-m03
	I0401 18:23:27.896247   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:27.896255   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:27.896259   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:27.899642   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:28.091172   27284 request.go:629] Waited for 190.354464ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:28.091255   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:28.091331   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:28.091368   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:28.091382   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:28.095777   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:28.396434   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078-m03
	I0401 18:23:28.396459   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:28.396471   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:28.396476   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:28.401744   27284 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 18:23:28.490810   27284 request.go:629] Waited for 88.074658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:28.490869   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:28.490880   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:28.490891   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:28.490901   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:28.494508   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:28.896766   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-293078-m03
	I0401 18:23:28.896791   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:28.896803   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:28.896808   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:28.901043   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:28.902086   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:28.902104   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:28.902114   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:28.902119   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:28.904997   27284 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0401 18:23:28.905761   27284 pod_ready.go:92] pod "kube-controller-manager-ha-293078-m03" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:28.905778   27284 pod_ready.go:81] duration metric: took 2.010067506s for pod "kube-controller-manager-ha-293078-m03" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:28.905787   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8s2xk" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:29.091199   27284 request.go:629] Waited for 185.33684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8s2xk
	I0401 18:23:29.091265   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8s2xk
	I0401 18:23:29.091272   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:29.091288   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:29.091294   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:29.095118   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:29.291460   27284 request.go:629] Waited for 195.339258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:23:29.291536   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:23:29.291550   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:29.291558   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:29.291565   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:29.297770   27284 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0401 18:23:29.298430   27284 pod_ready.go:92] pod "kube-proxy-8s2xk" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:29.298446   27284 pod_ready.go:81] duration metric: took 392.653814ms for pod "kube-proxy-8s2xk" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:29.298457   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l5q2p" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:29.491448   27284 request.go:629] Waited for 192.915985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l5q2p
	I0401 18:23:29.491496   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l5q2p
	I0401 18:23:29.491502   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:29.491519   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:29.491531   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:29.495189   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:29.690845   27284 request.go:629] Waited for 194.460949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:23:29.690928   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:23:29.690939   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:29.690950   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:29.690960   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:29.695740   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:29.696844   27284 pod_ready.go:92] pod "kube-proxy-l5q2p" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:29.696861   27284 pod_ready.go:81] duration metric: took 398.393999ms for pod "kube-proxy-l5q2p" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:29.696871   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xjx5z" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:29.891428   27284 request.go:629] Waited for 194.489302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xjx5z
	I0401 18:23:29.891499   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xjx5z
	I0401 18:23:29.891511   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:29.891528   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:29.891541   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:29.895446   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:30.090662   27284 request.go:629] Waited for 194.28593ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:30.090737   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:30.090745   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:30.090754   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:30.090767   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:30.094756   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:30.095328   27284 pod_ready.go:92] pod "kube-proxy-xjx5z" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:30.095346   27284 pod_ready.go:81] duration metric: took 398.469601ms for pod "kube-proxy-xjx5z" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:30.095355   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:30.291055   27284 request.go:629] Waited for 195.637359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-293078
	I0401 18:23:30.291123   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-293078
	I0401 18:23:30.291135   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:30.291144   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:30.291188   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:30.295411   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:30.490842   27284 request.go:629] Waited for 194.702568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:23:30.490893   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078
	I0401 18:23:30.490899   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:30.490907   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:30.490917   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:30.494562   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:30.495041   27284 pod_ready.go:92] pod "kube-scheduler-ha-293078" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:30.495059   27284 pod_ready.go:81] duration metric: took 399.697969ms for pod "kube-scheduler-ha-293078" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:30.495069   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:30.691215   27284 request.go:629] Waited for 196.08517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-293078-m02
	I0401 18:23:30.691301   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-293078-m02
	I0401 18:23:30.691309   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:30.691320   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:30.691330   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:30.695045   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:30.891331   27284 request.go:629] Waited for 195.360086ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:23:30.891408   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m02
	I0401 18:23:30.891413   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:30.891424   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:30.891435   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:30.895318   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:30.895896   27284 pod_ready.go:92] pod "kube-scheduler-ha-293078-m02" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:30.895918   27284 pod_ready.go:81] duration metric: took 400.84198ms for pod "kube-scheduler-ha-293078-m02" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:30.895934   27284 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-293078-m03" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:31.090980   27284 request.go:629] Waited for 194.941422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-293078-m03
	I0401 18:23:31.091102   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-293078-m03
	I0401 18:23:31.091117   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:31.091129   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:31.091140   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:31.095751   27284 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0401 18:23:31.290824   27284 request.go:629] Waited for 194.375107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:31.290876   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes/ha-293078-m03
	I0401 18:23:31.290881   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:31.290893   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:31.290911   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:31.294665   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:31.295287   27284 pod_ready.go:92] pod "kube-scheduler-ha-293078-m03" in "kube-system" namespace has status "Ready":"True"
	I0401 18:23:31.295308   27284 pod_ready.go:81] duration metric: took 399.359654ms for pod "kube-scheduler-ha-293078-m03" in "kube-system" namespace to be "Ready" ...
	I0401 18:23:31.295326   27284 pod_ready.go:38] duration metric: took 12.393032861s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 18:23:31.295348   27284 api_server.go:52] waiting for apiserver process to appear ...
	I0401 18:23:31.295409   27284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:23:31.311779   27284 api_server.go:72] duration metric: took 15.777106233s to wait for apiserver process to appear ...
	I0401 18:23:31.311796   27284 api_server.go:88] waiting for apiserver healthz status ...
	I0401 18:23:31.311811   27284 api_server.go:253] Checking apiserver healthz at https://192.168.39.74:8443/healthz ...
	I0401 18:23:31.317726   27284 api_server.go:279] https://192.168.39.74:8443/healthz returned 200:
	ok
	I0401 18:23:31.317790   27284 round_trippers.go:463] GET https://192.168.39.74:8443/version
	I0401 18:23:31.317803   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:31.317814   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:31.317820   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:31.318781   27284 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0401 18:23:31.318832   27284 api_server.go:141] control plane version: v1.29.3
	I0401 18:23:31.318850   27284 api_server.go:131] duration metric: took 7.047195ms to wait for apiserver health ...
	I0401 18:23:31.318858   27284 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 18:23:31.491267   27284 request.go:629] Waited for 172.34838ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods
	I0401 18:23:31.491326   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods
	I0401 18:23:31.491333   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:31.491340   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:31.491345   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:31.499252   27284 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0401 18:23:31.506086   27284 system_pods.go:59] 24 kube-system pods found
	I0401 18:23:31.506121   27284 system_pods.go:61] "coredns-76f75df574-8v456" [28cf6a1d-90df-4802-ad3c-9c0276380a44] Running
	I0401 18:23:31.506129   27284 system_pods.go:61] "coredns-76f75df574-sqxnb" [17868bbd-b0e9-460c-b191-9707f613af0a] Running
	I0401 18:23:31.506136   27284 system_pods.go:61] "etcd-ha-293078" [0cf5a089-d409-4fa2-85de-fcc012d79ff3] Running
	I0401 18:23:31.506143   27284 system_pods.go:61] "etcd-ha-293078-m02" [8acd3424-a11f-4a40-97cf-b7e8b4a0975f] Running
	I0401 18:23:31.506151   27284 system_pods.go:61] "etcd-ha-293078-m03" [473cf563-e7fb-4aee-8faa-eda7611bdff1] Running
	I0401 18:23:31.506157   27284 system_pods.go:61] "kindnet-ccxmv" [d3c6474c-bc4a-43fe-85cf-1f250eaaf7a9] Running
	I0401 18:23:31.506165   27284 system_pods.go:61] "kindnet-f4djp" [5b26be41-434f-4908-95aa-64da9fe7ecb0] Running
	I0401 18:23:31.506170   27284 system_pods.go:61] "kindnet-rjfcj" [63f6ecc3-4bd0-406b-8096-ffd6115a2de3] Running
	I0401 18:23:31.506176   27284 system_pods.go:61] "kube-apiserver-ha-293078" [a0e08a32-b673-46b9-b965-9d321e4db6f1] Running
	I0401 18:23:31.506183   27284 system_pods.go:61] "kube-apiserver-ha-293078-m02" [533b0e64-f078-44f0-be6f-a8a3d880138a] Running
	I0401 18:23:31.506189   27284 system_pods.go:61] "kube-apiserver-ha-293078-m03" [ba831509-c5d3-459b-a79e-fbaead3e632d] Running
	I0401 18:23:31.506196   27284 system_pods.go:61] "kube-controller-manager-ha-293078" [3e9c2dbe-f437-4619-9b04-f30d9dab7f61] Running
	I0401 18:23:31.506203   27284 system_pods.go:61] "kube-controller-manager-ha-293078-m02" [e8879a89-4775-488b-9229-e86c2c891b5f] Running
	I0401 18:23:31.506209   27284 system_pods.go:61] "kube-controller-manager-ha-293078-m03" [d38e0572-a059-44bb-a05a-ddf69667c6f6] Running
	I0401 18:23:31.506217   27284 system_pods.go:61] "kube-proxy-8s2xk" [4fc029ea-1f23-497b-8fe3-38fc0e0a4c38] Running
	I0401 18:23:31.506229   27284 system_pods.go:61] "kube-proxy-l5q2p" [167db687-ac11-4f57-83c1-048c31a7b2cb] Running
	I0401 18:23:31.506237   27284 system_pods.go:61] "kube-proxy-xjx5z" [7278ced7-d2eb-4c92-b78a-3d76ba7ad4c8] Running
	I0401 18:23:31.506241   27284 system_pods.go:61] "kube-scheduler-ha-293078" [87acbf1d-d53b-47d7-816a-492ba644ad0e] Running
	I0401 18:23:31.506244   27284 system_pods.go:61] "kube-scheduler-ha-293078-m02" [17a9003c-fd9f-48e2-b4b7-1ee6606ef480] Running
	I0401 18:23:31.506247   27284 system_pods.go:61] "kube-scheduler-ha-293078-m03" [2a7eb692-9006-42af-9cbf-e8c0101b08ce] Running
	I0401 18:23:31.506250   27284 system_pods.go:61] "kube-vip-ha-293078" [543de9ec-6f50-46b9-b6ec-f58964f81f12] Running
	I0401 18:23:31.506253   27284 system_pods.go:61] "kube-vip-ha-293078-m02" [6714926d-3bce-4773-92d6-e3811f532a37] Running
	I0401 18:23:31.506257   27284 system_pods.go:61] "kube-vip-ha-293078-m03" [36491063-d52a-4b27-bded-7d615c52cb80] Running
	I0401 18:23:31.506260   27284 system_pods.go:61] "storage-provisioner" [3d7c42eb-192e-4ae0-b5ae-0883ef5e740c] Running
	I0401 18:23:31.506266   27284 system_pods.go:74] duration metric: took 187.399526ms to wait for pod list to return data ...
	I0401 18:23:31.506282   27284 default_sa.go:34] waiting for default service account to be created ...
	I0401 18:23:31.690506   27284 request.go:629] Waited for 184.152285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/default/serviceaccounts
	I0401 18:23:31.690562   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/default/serviceaccounts
	I0401 18:23:31.690568   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:31.690576   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:31.690580   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:31.695667   27284 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0401 18:23:31.695775   27284 default_sa.go:45] found service account: "default"
	I0401 18:23:31.695792   27284 default_sa.go:55] duration metric: took 189.503133ms for default service account to be created ...
	I0401 18:23:31.695802   27284 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 18:23:31.891161   27284 request.go:629] Waited for 195.268872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods
	I0401 18:23:31.891217   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/namespaces/kube-system/pods
	I0401 18:23:31.891224   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:31.891235   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:31.891245   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:31.903457   27284 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0401 18:23:31.910479   27284 system_pods.go:86] 24 kube-system pods found
	I0401 18:23:31.910503   27284 system_pods.go:89] "coredns-76f75df574-8v456" [28cf6a1d-90df-4802-ad3c-9c0276380a44] Running
	I0401 18:23:31.910508   27284 system_pods.go:89] "coredns-76f75df574-sqxnb" [17868bbd-b0e9-460c-b191-9707f613af0a] Running
	I0401 18:23:31.910512   27284 system_pods.go:89] "etcd-ha-293078" [0cf5a089-d409-4fa2-85de-fcc012d79ff3] Running
	I0401 18:23:31.910516   27284 system_pods.go:89] "etcd-ha-293078-m02" [8acd3424-a11f-4a40-97cf-b7e8b4a0975f] Running
	I0401 18:23:31.910520   27284 system_pods.go:89] "etcd-ha-293078-m03" [473cf563-e7fb-4aee-8faa-eda7611bdff1] Running
	I0401 18:23:31.910523   27284 system_pods.go:89] "kindnet-ccxmv" [d3c6474c-bc4a-43fe-85cf-1f250eaaf7a9] Running
	I0401 18:23:31.910527   27284 system_pods.go:89] "kindnet-f4djp" [5b26be41-434f-4908-95aa-64da9fe7ecb0] Running
	I0401 18:23:31.910531   27284 system_pods.go:89] "kindnet-rjfcj" [63f6ecc3-4bd0-406b-8096-ffd6115a2de3] Running
	I0401 18:23:31.910535   27284 system_pods.go:89] "kube-apiserver-ha-293078" [a0e08a32-b673-46b9-b965-9d321e4db6f1] Running
	I0401 18:23:31.910539   27284 system_pods.go:89] "kube-apiserver-ha-293078-m02" [533b0e64-f078-44f0-be6f-a8a3d880138a] Running
	I0401 18:23:31.910543   27284 system_pods.go:89] "kube-apiserver-ha-293078-m03" [ba831509-c5d3-459b-a79e-fbaead3e632d] Running
	I0401 18:23:31.910546   27284 system_pods.go:89] "kube-controller-manager-ha-293078" [3e9c2dbe-f437-4619-9b04-f30d9dab7f61] Running
	I0401 18:23:31.910550   27284 system_pods.go:89] "kube-controller-manager-ha-293078-m02" [e8879a89-4775-488b-9229-e86c2c891b5f] Running
	I0401 18:23:31.910554   27284 system_pods.go:89] "kube-controller-manager-ha-293078-m03" [d38e0572-a059-44bb-a05a-ddf69667c6f6] Running
	I0401 18:23:31.910558   27284 system_pods.go:89] "kube-proxy-8s2xk" [4fc029ea-1f23-497b-8fe3-38fc0e0a4c38] Running
	I0401 18:23:31.910561   27284 system_pods.go:89] "kube-proxy-l5q2p" [167db687-ac11-4f57-83c1-048c31a7b2cb] Running
	I0401 18:23:31.910565   27284 system_pods.go:89] "kube-proxy-xjx5z" [7278ced7-d2eb-4c92-b78a-3d76ba7ad4c8] Running
	I0401 18:23:31.910569   27284 system_pods.go:89] "kube-scheduler-ha-293078" [87acbf1d-d53b-47d7-816a-492ba644ad0e] Running
	I0401 18:23:31.910574   27284 system_pods.go:89] "kube-scheduler-ha-293078-m02" [17a9003c-fd9f-48e2-b4b7-1ee6606ef480] Running
	I0401 18:23:31.910582   27284 system_pods.go:89] "kube-scheduler-ha-293078-m03" [2a7eb692-9006-42af-9cbf-e8c0101b08ce] Running
	I0401 18:23:31.910585   27284 system_pods.go:89] "kube-vip-ha-293078" [543de9ec-6f50-46b9-b6ec-f58964f81f12] Running
	I0401 18:23:31.910588   27284 system_pods.go:89] "kube-vip-ha-293078-m02" [6714926d-3bce-4773-92d6-e3811f532a37] Running
	I0401 18:23:31.910591   27284 system_pods.go:89] "kube-vip-ha-293078-m03" [36491063-d52a-4b27-bded-7d615c52cb80] Running
	I0401 18:23:31.910595   27284 system_pods.go:89] "storage-provisioner" [3d7c42eb-192e-4ae0-b5ae-0883ef5e740c] Running
	I0401 18:23:31.910601   27284 system_pods.go:126] duration metric: took 214.793197ms to wait for k8s-apps to be running ...
	I0401 18:23:31.910610   27284 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 18:23:31.910660   27284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:23:31.928486   27284 system_svc.go:56] duration metric: took 17.86774ms WaitForService to wait for kubelet
	I0401 18:23:31.928520   27284 kubeadm.go:576] duration metric: took 16.39384603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 18:23:31.928545   27284 node_conditions.go:102] verifying NodePressure condition ...
	I0401 18:23:32.090928   27284 request.go:629] Waited for 162.316288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.74:8443/api/v1/nodes
	I0401 18:23:32.090980   27284 round_trippers.go:463] GET https://192.168.39.74:8443/api/v1/nodes
	I0401 18:23:32.090985   27284 round_trippers.go:469] Request Headers:
	I0401 18:23:32.090992   27284 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0401 18:23:32.090996   27284 round_trippers.go:473]     Accept: application/json, */*
	I0401 18:23:32.094874   27284 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0401 18:23:32.096205   27284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 18:23:32.096229   27284 node_conditions.go:123] node cpu capacity is 2
	I0401 18:23:32.096242   27284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 18:23:32.096247   27284 node_conditions.go:123] node cpu capacity is 2
	I0401 18:23:32.096253   27284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 18:23:32.096258   27284 node_conditions.go:123] node cpu capacity is 2
	I0401 18:23:32.096267   27284 node_conditions.go:105] duration metric: took 167.715883ms to run NodePressure ...
	I0401 18:23:32.096281   27284 start.go:240] waiting for startup goroutines ...
	I0401 18:23:32.096309   27284 start.go:254] writing updated cluster config ...
	I0401 18:23:32.096594   27284 ssh_runner.go:195] Run: rm -f paused
	I0401 18:23:32.148580   27284 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0401 18:23:32.150915   27284 out.go:177] * Done! kubectl is now configured to use "ha-293078" cluster and "default" namespace by default
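
The wait loop above (pod_ready.go driving the round_trippers GETs) polls the apiserver for each system-critical pod until its Ready condition reports True, then moves on to the apiserver healthz/version checks, the kube-system pod list, the default service account, the kubelet service, and the NodePressure conditions. A minimal client-go sketch of that readiness-polling pattern, assuming a kubeconfig at the default location (the namespace and pod name are taken from the log above purely for illustration):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the apiserver until the named pod reports Ready=True,
// the same check pod_ready.go performs for each system-critical pod above.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 400*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Assumption: a kubeconfig at the default location; minikube writes one per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Optional: relax client-go's default client-side rate limiter (5 QPS, burst 10),
	// which is what produced the "Waited ... due to client-side throttling" lines above.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name taken from the log above, purely for illustration.
	if err := waitPodReady(cs, "kube-system", "kube-proxy-8s2xk", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}

The repeated request.go:629 "Waited ... due to client-side throttling" entries come from that default rate limiter, not from apiserver priority-and-fairness; raising QPS/Burst thins them out at the cost of more load on the apiserver.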
	
	
	==> CRI-O <==
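
The entries below are CRI gRPC round-trips handled by crio: Version, ImageFsInfo, and unfiltered ListContainers requests arriving every few tens of milliseconds, the cadence the kubelet (or a crictl invocation) normally generates. A minimal Go sketch of issuing the same Version and ListContainers calls against the runtime; the socket path and timeout are assumptions for a conventional CRI-O install:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: CRI-O listening on its conventional socket path.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// The same RPCs that dominate the debug log: Version and an unfiltered ListContainers.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}

The RuntimeName and RuntimeVersion it prints correspond to the cri-o / 1.29.1 values visible in the VersionResponse entries below, and the container list matches the ListContainersResponse dumps (busybox, storage-provisioner, coredns, kindnet-cni, kube-proxy, kube-vip, and the control-plane static pods).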
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.047758246Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711996084047734381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87fea23a-0b26-42e6-8472-ac5e465dbc94 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.048318737Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ae82a9f-1504-47bf-bc46-3a053cea8b34 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.048472129Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ae82a9f-1504-47bf-bc46-3a053cea8b34 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.048774141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61d746cfabdcf1e527c0a0136c923d19be52285d3c766da6faaba4eb3b3c013d,PodSandboxId:d2ac86b05a9f4d146abfc431861426b75aa121e86155e33f6885c2287d35c2d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711995814759224430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,},Annotations:map[string]string{io.kubernetes.container.hash: 94944394,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4afd34fc1a474daf3c2e777ef35aa4ae136ec34f86760a743d050e2e52749213,PodSandboxId:55c5a220e09f3ccc632cd8580e6c21d3fd866632a80c3f27ffa1c7eba62a598b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711995665052098243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{io.kubernetes.container.hash: 245032af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce906a6132be484cf993679eea95d6637b9e3b3e9884820e95723b2b2c33e7e6,PodSandboxId:184b6f8a0b09d310e6167558bc2e043f793ec8069ada3f99f07f8c4bf5bbe2a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711995665008742384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,},Annotations:map[string]string{io.kubernetes.container.hash: 286c3144,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be43b3abd52fcb26f579806533a081948a895cdd479befbbc9bd5446fdc060e9,PodSandboxId:f885d7f062d4925a0c12a93de7fab4a08ad786e7dc47a543daf4c046acd992d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711995665020678327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b
0e9-460c-b191-9707f613af0a,},Annotations:map[string]string{io.kubernetes.container.hash: 48f6bb3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39a744dcbdcbe85e94e3ddfb1c32297919a24a7d666cb56091bb090ab4f1b169,PodSandboxId:478784c20d5b4ddab5f45c2a97205bec4962f4b790bbc0e5366d0feba71d6a56,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711995
663098635767,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c24bf0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7ab06dacb1f801ea9714513d3f23a0bad938d609fb9f291d0ec0c4903d8d6a,PodSandboxId:849ffff6ee9e4b1fed8bc9e2950a7f2d227adf1318502c7d46a0e03e73165ca2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711995662809497703,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,},Annotations:map[string]string{io.kubernetes.container.hash: a09407a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1af36287bacaf83243c8481c963e2cf6f3ec89e4ffb87b80a135b18652a2c9d,PodSandboxId:ac02e9b682f1fb8db19ffd11802dd48a07afe084c904748e3e5127b031338d62,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711995644476713228,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cee692bcccd6b0feab0f0ba7206df66e,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd1ccbceec8c5056f450169f49c17acf202e064825e6c51a55ca89e591e25b5,PodSandboxId:91aa9ea508a082ce745f620d0c3c5161f596f6efef8dca30ddfad2fdc5376338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711995642771196752,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9284db03ef8c515d8a7475c032ebbaa4d501954b6e1f5c383cdcdb3ebf6afb,PodSandboxId:141c3ab4ae279ab738ee7ad84077cefbc2db4a8489f0ea7b3526708562786979,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711995642820315476,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,},Annotations:map[string]string{io.kubernetes.container.hash: 886f76f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8471f59f3de235b71fe57e79412f27884ceb62d668027d7fe3730009d2fbb1fa,PodSandboxId:34af251b6243e69ca34eeeb959254863f3933b8142c33d2027be0d4f7647ea8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711995642748010111,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-293078,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 5bcf3746,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e36af39fdf13dd3cf98d2d4a8e7666aea913228d31de663d19c302848663d798,PodSandboxId:4706bec6244a3acd46c920d54796080f4432348e280610cc7f24ee816e251423,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711995642730928046,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ae82a9f-1504-47bf-bc46-3a053cea8b34 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.091158138Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27f88726-98fe-4b72-9ac0-f548be096a96 name=/runtime.v1.RuntimeService/Version
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.091264902Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27f88726-98fe-4b72-9ac0-f548be096a96 name=/runtime.v1.RuntimeService/Version
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.098242837Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e97fe5e2-e11c-43a7-b4db-d5a7e6754066 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.098792802Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711996084098768560,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e97fe5e2-e11c-43a7-b4db-d5a7e6754066 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.099335699Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f62ae2f7-4c6e-46dc-93ef-8d439282363c name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.099472415Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f62ae2f7-4c6e-46dc-93ef-8d439282363c name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.100154501Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61d746cfabdcf1e527c0a0136c923d19be52285d3c766da6faaba4eb3b3c013d,PodSandboxId:d2ac86b05a9f4d146abfc431861426b75aa121e86155e33f6885c2287d35c2d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711995814759224430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,},Annotations:map[string]string{io.kubernetes.container.hash: 94944394,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4afd34fc1a474daf3c2e777ef35aa4ae136ec34f86760a743d050e2e52749213,PodSandboxId:55c5a220e09f3ccc632cd8580e6c21d3fd866632a80c3f27ffa1c7eba62a598b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711995665052098243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{io.kubernetes.container.hash: 245032af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce906a6132be484cf993679eea95d6637b9e3b3e9884820e95723b2b2c33e7e6,PodSandboxId:184b6f8a0b09d310e6167558bc2e043f793ec8069ada3f99f07f8c4bf5bbe2a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711995665008742384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,},Annotations:map[string]string{io.kubernetes.container.hash: 286c3144,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be43b3abd52fcb26f579806533a081948a895cdd479befbbc9bd5446fdc060e9,PodSandboxId:f885d7f062d4925a0c12a93de7fab4a08ad786e7dc47a543daf4c046acd992d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711995665020678327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b
0e9-460c-b191-9707f613af0a,},Annotations:map[string]string{io.kubernetes.container.hash: 48f6bb3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39a744dcbdcbe85e94e3ddfb1c32297919a24a7d666cb56091bb090ab4f1b169,PodSandboxId:478784c20d5b4ddab5f45c2a97205bec4962f4b790bbc0e5366d0feba71d6a56,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711995
663098635767,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c24bf0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7ab06dacb1f801ea9714513d3f23a0bad938d609fb9f291d0ec0c4903d8d6a,PodSandboxId:849ffff6ee9e4b1fed8bc9e2950a7f2d227adf1318502c7d46a0e03e73165ca2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711995662809497703,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,},Annotations:map[string]string{io.kubernetes.container.hash: a09407a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1af36287bacaf83243c8481c963e2cf6f3ec89e4ffb87b80a135b18652a2c9d,PodSandboxId:ac02e9b682f1fb8db19ffd11802dd48a07afe084c904748e3e5127b031338d62,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711995644476713228,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cee692bcccd6b0feab0f0ba7206df66e,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd1ccbceec8c5056f450169f49c17acf202e064825e6c51a55ca89e591e25b5,PodSandboxId:91aa9ea508a082ce745f620d0c3c5161f596f6efef8dca30ddfad2fdc5376338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711995642771196752,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9284db03ef8c515d8a7475c032ebbaa4d501954b6e1f5c383cdcdb3ebf6afb,PodSandboxId:141c3ab4ae279ab738ee7ad84077cefbc2db4a8489f0ea7b3526708562786979,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711995642820315476,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,},Annotations:map[string]string{io.kubernetes.container.hash: 886f76f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8471f59f3de235b71fe57e79412f27884ceb62d668027d7fe3730009d2fbb1fa,PodSandboxId:34af251b6243e69ca34eeeb959254863f3933b8142c33d2027be0d4f7647ea8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711995642748010111,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-293078,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 5bcf3746,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e36af39fdf13dd3cf98d2d4a8e7666aea913228d31de663d19c302848663d798,PodSandboxId:4706bec6244a3acd46c920d54796080f4432348e280610cc7f24ee816e251423,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711995642730928046,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f62ae2f7-4c6e-46dc-93ef-8d439282363c name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.146064537Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=66527676-3f7d-498f-aff9-afdfa57f383e name=/runtime.v1.RuntimeService/Version
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.146211871Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=66527676-3f7d-498f-aff9-afdfa57f383e name=/runtime.v1.RuntimeService/Version
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.148523670Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f881168b-beaf-4186-85d4-a2987e75b4cb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.149002451Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711996084148939588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f881168b-beaf-4186-85d4-a2987e75b4cb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.149702243Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f31ee06-7b2c-4981-85a6-bb7449e13d15 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.149780983Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f31ee06-7b2c-4981-85a6-bb7449e13d15 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.150081167Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61d746cfabdcf1e527c0a0136c923d19be52285d3c766da6faaba4eb3b3c013d,PodSandboxId:d2ac86b05a9f4d146abfc431861426b75aa121e86155e33f6885c2287d35c2d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711995814759224430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,},Annotations:map[string]string{io.kubernetes.container.hash: 94944394,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4afd34fc1a474daf3c2e777ef35aa4ae136ec34f86760a743d050e2e52749213,PodSandboxId:55c5a220e09f3ccc632cd8580e6c21d3fd866632a80c3f27ffa1c7eba62a598b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711995665052098243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{io.kubernetes.container.hash: 245032af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce906a6132be484cf993679eea95d6637b9e3b3e9884820e95723b2b2c33e7e6,PodSandboxId:184b6f8a0b09d310e6167558bc2e043f793ec8069ada3f99f07f8c4bf5bbe2a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711995665008742384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,},Annotations:map[string]string{io.kubernetes.container.hash: 286c3144,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be43b3abd52fcb26f579806533a081948a895cdd479befbbc9bd5446fdc060e9,PodSandboxId:f885d7f062d4925a0c12a93de7fab4a08ad786e7dc47a543daf4c046acd992d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711995665020678327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b
0e9-460c-b191-9707f613af0a,},Annotations:map[string]string{io.kubernetes.container.hash: 48f6bb3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39a744dcbdcbe85e94e3ddfb1c32297919a24a7d666cb56091bb090ab4f1b169,PodSandboxId:478784c20d5b4ddab5f45c2a97205bec4962f4b790bbc0e5366d0feba71d6a56,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711995
663098635767,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c24bf0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7ab06dacb1f801ea9714513d3f23a0bad938d609fb9f291d0ec0c4903d8d6a,PodSandboxId:849ffff6ee9e4b1fed8bc9e2950a7f2d227adf1318502c7d46a0e03e73165ca2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711995662809497703,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,},Annotations:map[string]string{io.kubernetes.container.hash: a09407a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1af36287bacaf83243c8481c963e2cf6f3ec89e4ffb87b80a135b18652a2c9d,PodSandboxId:ac02e9b682f1fb8db19ffd11802dd48a07afe084c904748e3e5127b031338d62,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711995644476713228,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cee692bcccd6b0feab0f0ba7206df66e,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd1ccbceec8c5056f450169f49c17acf202e064825e6c51a55ca89e591e25b5,PodSandboxId:91aa9ea508a082ce745f620d0c3c5161f596f6efef8dca30ddfad2fdc5376338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711995642771196752,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9284db03ef8c515d8a7475c032ebbaa4d501954b6e1f5c383cdcdb3ebf6afb,PodSandboxId:141c3ab4ae279ab738ee7ad84077cefbc2db4a8489f0ea7b3526708562786979,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711995642820315476,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,},Annotations:map[string]string{io.kubernetes.container.hash: 886f76f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8471f59f3de235b71fe57e79412f27884ceb62d668027d7fe3730009d2fbb1fa,PodSandboxId:34af251b6243e69ca34eeeb959254863f3933b8142c33d2027be0d4f7647ea8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711995642748010111,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-293078,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 5bcf3746,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e36af39fdf13dd3cf98d2d4a8e7666aea913228d31de663d19c302848663d798,PodSandboxId:4706bec6244a3acd46c920d54796080f4432348e280610cc7f24ee816e251423,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711995642730928046,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f31ee06-7b2c-4981-85a6-bb7449e13d15 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.199651679Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=149cf7b9-4b88-4932-a565-5fbc1d85757a name=/runtime.v1.RuntimeService/Version
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.199720776Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=149cf7b9-4b88-4932-a565-5fbc1d85757a name=/runtime.v1.RuntimeService/Version
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.201373128Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ba26820-1613-488f-842f-ad67689d0430 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.202138691Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711996084202114598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ba26820-1613-488f-842f-ad67689d0430 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.203191873Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=512fc64e-2990-45fb-96ed-a16059a26c72 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.203271205Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=512fc64e-2990-45fb-96ed-a16059a26c72 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:28:04 ha-293078 crio[679]: time="2024-04-01 18:28:04.203661546Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61d746cfabdcf1e527c0a0136c923d19be52285d3c766da6faaba4eb3b3c013d,PodSandboxId:d2ac86b05a9f4d146abfc431861426b75aa121e86155e33f6885c2287d35c2d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711995814759224430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,},Annotations:map[string]string{io.kubernetes.container.hash: 94944394,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4afd34fc1a474daf3c2e777ef35aa4ae136ec34f86760a743d050e2e52749213,PodSandboxId:55c5a220e09f3ccc632cd8580e6c21d3fd866632a80c3f27ffa1c7eba62a598b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711995665052098243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{io.kubernetes.container.hash: 245032af,io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce906a6132be484cf993679eea95d6637b9e3b3e9884820e95723b2b2c33e7e6,PodSandboxId:184b6f8a0b09d310e6167558bc2e043f793ec8069ada3f99f07f8c4bf5bbe2a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711995665008742384,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,},Annotations:map[string]string{io.kubernetes.container.hash: 286c3144,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be43b3abd52fcb26f579806533a081948a895cdd479befbbc9bd5446fdc060e9,PodSandboxId:f885d7f062d4925a0c12a93de7fab4a08ad786e7dc47a543daf4c046acd992d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711995665020678327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b
0e9-460c-b191-9707f613af0a,},Annotations:map[string]string{io.kubernetes.container.hash: 48f6bb3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39a744dcbdcbe85e94e3ddfb1c32297919a24a7d666cb56091bb090ab4f1b169,PodSandboxId:478784c20d5b4ddab5f45c2a97205bec4962f4b790bbc0e5366d0feba71d6a56,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711995
663098635767,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c24bf0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7ab06dacb1f801ea9714513d3f23a0bad938d609fb9f291d0ec0c4903d8d6a,PodSandboxId:849ffff6ee9e4b1fed8bc9e2950a7f2d227adf1318502c7d46a0e03e73165ca2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711995662809497703,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,},Annotations:map[string]string{io.kubernetes.container.hash: a09407a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1af36287bacaf83243c8481c963e2cf6f3ec89e4ffb87b80a135b18652a2c9d,PodSandboxId:ac02e9b682f1fb8db19ffd11802dd48a07afe084c904748e3e5127b031338d62,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711995644476713228,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cee692bcccd6b0feab0f0ba7206df66e,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd1ccbceec8c5056f450169f49c17acf202e064825e6c51a55ca89e591e25b5,PodSandboxId:91aa9ea508a082ce745f620d0c3c5161f596f6efef8dca30ddfad2fdc5376338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711995642771196752,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d9284db03ef8c515d8a7475c032ebbaa4d501954b6e1f5c383cdcdb3ebf6afb,PodSandboxId:141c3ab4ae279ab738ee7ad84077cefbc2db4a8489f0ea7b3526708562786979,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711995642820315476,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,},Annotations:map[string]string{io.kubernetes.container.hash: 886f76f4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8471f59f3de235b71fe57e79412f27884ceb62d668027d7fe3730009d2fbb1fa,PodSandboxId:34af251b6243e69ca34eeeb959254863f3933b8142c33d2027be0d4f7647ea8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711995642748010111,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-293078,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 5bcf3746,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e36af39fdf13dd3cf98d2d4a8e7666aea913228d31de663d19c302848663d798,PodSandboxId:4706bec6244a3acd46c920d54796080f4432348e280610cc7f24ee816e251423,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711995642730928046,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=512fc64e-2990-45fb-96ed-a16059a26c72 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	61d746cfabdcf       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   d2ac86b05a9f4       busybox-7fdf7869d9-7tn8z
	4afd34fc1a474       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   55c5a220e09f3       storage-provisioner
	be43b3abd52fc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   f885d7f062d49       coredns-76f75df574-sqxnb
	ce906a6132be4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   184b6f8a0b09d       coredns-76f75df574-8v456
	39a744dcbdcbe       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago       Running             kindnet-cni               0                   478784c20d5b4       kindnet-rjfcj
	8d7ab06dacb1f       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      7 minutes ago       Running             kube-proxy                0                   849ffff6ee9e4       kube-proxy-l5q2p
	c1af36287baca       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     7 minutes ago       Running             kube-vip                  0                   ac02e9b682f1f       kube-vip-ha-293078
	9d9284db03ef8       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      7 minutes ago       Running             kube-apiserver            0                   141c3ab4ae279       kube-apiserver-ha-293078
	6bd1ccbceec8c       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      7 minutes ago       Running             kube-scheduler            0                   91aa9ea508a08       kube-scheduler-ha-293078
	8471f59f3de23       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   34af251b6243e       etcd-ha-293078
	e36af39fdf13d       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      7 minutes ago       Running             kube-controller-manager   0                   4706bec6244a3       kube-controller-manager-ha-293078
	
	
	==> coredns [be43b3abd52fcb26f579806533a081948a895cdd479befbbc9bd5446fdc060e9] <==
	[INFO] 127.0.0.1:60623 - 22139 "HINFO IN 659470979403797556.9141881756457822511. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009678045s
	[INFO] 10.244.0.4:33543 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.003949293s
	[INFO] 10.244.1.2:36542 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000304415s
	[INFO] 10.244.1.2:60003 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000141661s
	[INFO] 10.244.1.2:49897 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002016415s
	[INFO] 10.244.0.4:48954 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004445287s
	[INFO] 10.244.0.4:41430 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00325614s
	[INFO] 10.244.0.4:43938 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000214694s
	[INFO] 10.244.0.4:55272 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150031s
	[INFO] 10.244.1.2:53484 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00036286s
	[INFO] 10.244.1.2:40882 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000191317s
	[INFO] 10.244.1.2:44362 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000231809s
	[INFO] 10.244.2.2:38878 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130983s
	[INFO] 10.244.2.2:55123 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000140829s
	[INFO] 10.244.2.2:60293 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000207687s
	[INFO] 10.244.2.2:42748 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000162463s
	[INFO] 10.244.0.4:51962 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171832s
	[INFO] 10.244.1.2:34522 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169219s
	[INFO] 10.244.1.2:45853 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000149138s
	[INFO] 10.244.0.4:34814 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000154553s
	[INFO] 10.244.1.2:51449 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125618s
	[INFO] 10.244.1.2:53188 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000205396s
	[INFO] 10.244.2.2:55517 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011978s
	[INFO] 10.244.2.2:58847 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014087s
	[INFO] 10.244.2.2:55721 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148617s
	
	
	==> coredns [ce906a6132be484cf993679eea95d6637b9e3b3e9884820e95723b2b2c33e7e6] <==
	[INFO] 10.244.0.4:39293 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000139049s
	[INFO] 10.244.1.2:34347 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153693s
	[INFO] 10.244.1.2:53017 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002482407s
	[INFO] 10.244.1.2:42256 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00177498s
	[INFO] 10.244.1.2:45121 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00042512s
	[INFO] 10.244.1.2:46630 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135925s
	[INFO] 10.244.2.2:37886 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147427s
	[INFO] 10.244.2.2:47974 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002026718s
	[INFO] 10.244.2.2:36742 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132507s
	[INFO] 10.244.2.2:60458 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001236853s
	[INFO] 10.244.0.4:36514 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079136s
	[INFO] 10.244.0.4:54146 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061884s
	[INFO] 10.244.0.4:48422 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049796s
	[INFO] 10.244.1.2:53602 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174827s
	[INFO] 10.244.1.2:52752 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123202s
	[INFO] 10.244.2.2:42824 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122778s
	[INFO] 10.244.2.2:39412 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138599s
	[INFO] 10.244.2.2:46213 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134624s
	[INFO] 10.244.2.2:41423 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104186s
	[INFO] 10.244.0.4:56317 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189039s
	[INFO] 10.244.0.4:49692 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000121271s
	[INFO] 10.244.0.4:55372 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000369332s
	[INFO] 10.244.1.2:44134 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000161425s
	[INFO] 10.244.1.2:45595 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000086429s
	[INFO] 10.244.2.2:52399 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233085s
	
	
	==> describe nodes <==
	Name:               ha-293078
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-293078
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=ha-293078
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_01T18_20_50_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 18:20:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-293078
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 18:27:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 18:23:53 +0000   Mon, 01 Apr 2024 18:20:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 18:23:53 +0000   Mon, 01 Apr 2024 18:20:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 18:23:53 +0000   Mon, 01 Apr 2024 18:20:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 18:23:53 +0000   Mon, 01 Apr 2024 18:21:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.74
	  Hostname:    ha-293078
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3e3b54c701944ac9af1db6484a71e599
	  System UUID:                3e3b54c7-0194-4ac9-af1d-b6484a71e599
	  Boot ID:                    7f2e19c7-2c6d-417a-9d2d-1c4d117eee25
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-7tn8z             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 coredns-76f75df574-8v456             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m2s
	  kube-system                 coredns-76f75df574-sqxnb             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m2s
	  kube-system                 etcd-ha-293078                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m14s
	  kube-system                 kindnet-rjfcj                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m2s
	  kube-system                 kube-apiserver-ha-293078             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 kube-controller-manager-ha-293078    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 kube-proxy-l5q2p                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 kube-scheduler-ha-293078             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 kube-vip-ha-293078                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m1s   kube-proxy       
	  Normal  Starting                 7m15s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m15s  kubelet          Node ha-293078 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m15s  kubelet          Node ha-293078 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m15s  kubelet          Node ha-293078 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m3s   node-controller  Node ha-293078 event: Registered Node ha-293078 in Controller
	  Normal  NodeReady                7m     kubelet          Node ha-293078 status is now: NodeReady
	  Normal  RegisteredNode           5m49s  node-controller  Node ha-293078 event: Registered Node ha-293078 in Controller
	  Normal  RegisteredNode           4m36s  node-controller  Node ha-293078 event: Registered Node ha-293078 in Controller
	
	
	Name:               ha-293078-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-293078-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=ha-293078
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_01T18_22_00_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 18:21:55 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-293078-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 18:24:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 01 Apr 2024 18:23:58 +0000   Mon, 01 Apr 2024 18:25:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 01 Apr 2024 18:23:58 +0000   Mon, 01 Apr 2024 18:25:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 01 Apr 2024 18:23:58 +0000   Mon, 01 Apr 2024 18:25:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 01 Apr 2024 18:23:58 +0000   Mon, 01 Apr 2024 18:25:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    ha-293078-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca6adfb154a0459d8158168bf9a31bb6
	  System UUID:                ca6adfb1-54a0-459d-8158-168bf9a31bb6
	  Boot ID:                    f909c6ea-f445-457c-a1c2-304f35f07b9d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-ntbk4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 etcd-ha-293078-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m6s
	  kube-system                 kindnet-f4djp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m9s
	  kube-system                 kube-apiserver-ha-293078-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-controller-manager-ha-293078-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-proxy-8s2xk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-scheduler-ha-293078-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-vip-ha-293078-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m4s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  6m9s (x8 over 6m9s)  kubelet          Node ha-293078-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m9s (x8 over 6m9s)  kubelet          Node ha-293078-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m9s (x7 over 6m9s)  kubelet          Node ha-293078-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m8s                 node-controller  Node ha-293078-m02 event: Registered Node ha-293078-m02 in Controller
	  Normal  RegisteredNode           5m49s                node-controller  Node ha-293078-m02 event: Registered Node ha-293078-m02 in Controller
	  Normal  RegisteredNode           4m36s                node-controller  Node ha-293078-m02 event: Registered Node ha-293078-m02 in Controller
	  Normal  NodeNotReady             2m43s                node-controller  Node ha-293078-m02 status is now: NodeNotReady
	
	
	Name:               ha-293078-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-293078-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=ha-293078
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_01T18_23_15_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 18:23:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-293078-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 18:27:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 18:23:40 +0000   Mon, 01 Apr 2024 18:23:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 18:23:40 +0000   Mon, 01 Apr 2024 18:23:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 18:23:40 +0000   Mon, 01 Apr 2024 18:23:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 18:23:40 +0000   Mon, 01 Apr 2024 18:23:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    ha-293078-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c0e3d05a853946ce973ab987568f85f7
	  System UUID:                c0e3d05a-8539-46ce-973a-b987568f85f7
	  Boot ID:                    4961ebe8-8ffa-4300-aa70-cb90bb457245
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-z89qx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 etcd-ha-293078-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m52s
	  kube-system                 kindnet-ccxmv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m55s
	  kube-system                 kube-apiserver-ha-293078-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-controller-manager-ha-293078-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-proxy-xjx5z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-scheduler-ha-293078-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-vip-ha-293078-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m55s (x8 over 4m55s)  kubelet          Node ha-293078-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m55s (x8 over 4m55s)  kubelet          Node ha-293078-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m55s (x7 over 4m55s)  kubelet          Node ha-293078-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m54s                  node-controller  Node ha-293078-m03 event: Registered Node ha-293078-m03 in Controller
	  Normal  RegisteredNode           4m53s                  node-controller  Node ha-293078-m03 event: Registered Node ha-293078-m03 in Controller
	  Normal  RegisteredNode           4m36s                  node-controller  Node ha-293078-m03 event: Registered Node ha-293078-m03 in Controller
	
	
	Name:               ha-293078-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-293078-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=ha-293078
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_01T18_24_11_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 18:24:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-293078-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 18:28:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 18:24:41 +0000   Mon, 01 Apr 2024 18:24:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 18:24:41 +0000   Mon, 01 Apr 2024 18:24:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 18:24:41 +0000   Mon, 01 Apr 2024 18:24:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 18:24:41 +0000   Mon, 01 Apr 2024 18:24:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    ha-293078-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 071d9c818e6d4564a98e9da52a34ff25
	  System UUID:                071d9c81-8e6d-4564-a98e-9da52a34ff25
	  Boot ID:                    5d2c1342-0a3a-4951-b2be-ba9d3591daef
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-qhwr4       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m54s
	  kube-system                 kube-proxy-49cqh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m54s (x2 over 3m54s)  kubelet          Node ha-293078-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x2 over 3m54s)  kubelet          Node ha-293078-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x2 over 3m54s)  kubelet          Node ha-293078-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-293078-m04 event: Registered Node ha-293078-m04 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-293078-m04 event: Registered Node ha-293078-m04 in Controller
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-293078-m04 event: Registered Node ha-293078-m04 in Controller
	  Normal  NodeReady                3m45s                  kubelet          Node ha-293078-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr 1 18:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051964] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042596] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.580851] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.458007] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.704034] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.937253] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.062108] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066440] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.214972] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.138486] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.294622] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.757712] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +0.062342] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.163879] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.840426] kauditd_printk_skb: 57 callbacks suppressed
	[  +7.059574] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.076658] kauditd_printk_skb: 40 callbacks suppressed
	[Apr 1 18:21] kauditd_printk_skb: 21 callbacks suppressed
	[Apr 1 18:22] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [8471f59f3de235b71fe57e79412f27884ceb62d668027d7fe3730009d2fbb1fa] <==
	{"level":"warn","ts":"2024-04-01T18:28:04.448699Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:28:04.461302Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:28:04.516922Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:28:04.524989Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:28:04.533139Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:28:04.545356Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:28:04.554924Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:28:04.560523Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:28:04.564895Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:28:04.57001Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:28:04.574602Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:28:04.590574Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:28:04.599687Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:28:04.607935Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:28:04.611893Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:28:04.61554Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:28:04.623239Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:28:04.631088Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:28:04.637657Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:28:04.641318Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:28:04.645503Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:28:04.651266Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:28:04.658106Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:28:04.664893Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-01T18:28:04.677152Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2c3239b60c033d0c","from":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:28:04 up 7 min,  0 users,  load average: 0.23, 0.27, 0.14
	Linux ha-293078 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [39a744dcbdcbe85e94e3ddfb1c32297919a24a7d666cb56091bb090ab4f1b169] <==
	I0401 18:27:24.847585       1 main.go:250] Node ha-293078-m04 has CIDR [10.244.3.0/24] 
	I0401 18:27:34.862728       1 main.go:223] Handling node with IPs: map[192.168.39.74:{}]
	I0401 18:27:34.862781       1 main.go:227] handling current node
	I0401 18:27:34.862793       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:27:34.862800       1 main.go:250] Node ha-293078-m02 has CIDR [10.244.1.0/24] 
	I0401 18:27:34.862924       1 main.go:223] Handling node with IPs: map[192.168.39.210:{}]
	I0401 18:27:34.862954       1 main.go:250] Node ha-293078-m03 has CIDR [10.244.2.0/24] 
	I0401 18:27:34.863013       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0401 18:27:34.863048       1 main.go:250] Node ha-293078-m04 has CIDR [10.244.3.0/24] 
	I0401 18:27:44.870846       1 main.go:223] Handling node with IPs: map[192.168.39.74:{}]
	I0401 18:27:44.870896       1 main.go:227] handling current node
	I0401 18:27:44.870907       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:27:44.870913       1 main.go:250] Node ha-293078-m02 has CIDR [10.244.1.0/24] 
	I0401 18:27:44.871022       1 main.go:223] Handling node with IPs: map[192.168.39.210:{}]
	I0401 18:27:44.871028       1 main.go:250] Node ha-293078-m03 has CIDR [10.244.2.0/24] 
	I0401 18:27:44.871067       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0401 18:27:44.871100       1 main.go:250] Node ha-293078-m04 has CIDR [10.244.3.0/24] 
	I0401 18:27:54.884777       1 main.go:223] Handling node with IPs: map[192.168.39.74:{}]
	I0401 18:27:54.884826       1 main.go:227] handling current node
	I0401 18:27:54.884840       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:27:54.884849       1 main.go:250] Node ha-293078-m02 has CIDR [10.244.1.0/24] 
	I0401 18:27:54.884996       1 main.go:223] Handling node with IPs: map[192.168.39.210:{}]
	I0401 18:27:54.885042       1 main.go:250] Node ha-293078-m03 has CIDR [10.244.2.0/24] 
	I0401 18:27:54.885104       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0401 18:27:54.885110       1 main.go:250] Node ha-293078-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [9d9284db03ef8c515d8a7475c032ebbaa4d501954b6e1f5c383cdcdb3ebf6afb] <==
	I0401 18:20:46.159100       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0401 18:20:46.159212       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0401 18:20:46.159704       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0401 18:20:46.171632       1 shared_informer.go:318] Caches are synced for configmaps
	I0401 18:20:46.172467       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0401 18:20:46.172860       1 aggregator.go:165] initial CRD sync complete...
	I0401 18:20:46.172902       1 autoregister_controller.go:141] Starting autoregister controller
	I0401 18:20:46.172908       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0401 18:20:46.172913       1 cache.go:39] Caches are synced for autoregister controller
	I0401 18:20:46.176701       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 18:20:47.054317       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0401 18:20:47.064448       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0401 18:20:47.064486       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 18:20:47.779371       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 18:20:47.827954       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 18:20:47.967905       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0401 18:20:47.978196       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.74]
	I0401 18:20:47.979345       1 controller.go:624] quota admission added evaluator for: endpoints
	I0401 18:20:47.984024       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 18:20:48.073905       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0401 18:20:49.730759       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0401 18:20:49.749055       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0401 18:20:49.769372       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0401 18:21:02.078805       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0401 18:21:02.136162       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e36af39fdf13dd3cf98d2d4a8e7666aea913228d31de663d19c302848663d798] <==
	I0401 18:24:10.655230       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-293078-m04\" does not exist"
	I0401 18:24:10.682506       1 range_allocator.go:380] "Set node PodCIDR" node="ha-293078-m04" podCIDRs=["10.244.3.0/24"]
	I0401 18:24:10.706735       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ddr9q"
	I0401 18:24:10.717842       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rccf9"
	E0401 18:24:10.879138       1 daemon_controller.go:326] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"dca24ad1-79a6-4941-bc47-fa9b316afdf5", ResourceVersion:"903", Generation:1, CreationTimestamp:time.Date(2024, time.April, 1, 18, 20, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000865000), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1
, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVol
umeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0017e49c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00173c090), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVo
lumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:
v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00173c0a8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPers
istentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"registry.k8s.io/kube-proxy:v1.29.3", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000865100)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil), Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"ku
be-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001732ae0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000cb1ff8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", No
deSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000420540), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil
), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001aed680)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001b3e050)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0401 18:24:10.924598       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-v2shv"
	I0401 18:24:10.938742       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-ddr9q"
	I0401 18:24:10.973599       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-9bvjh"
	I0401 18:24:11.022969       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-rccf9"
	I0401 18:24:11.218240       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-293078-m04"
	I0401 18:24:11.218299       1 event.go:376] "Event occurred" object="ha-293078-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-293078-m04 event: Registered Node ha-293078-m04 in Controller"
	I0401 18:24:19.794324       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-293078-m04"
	I0401 18:25:21.244443       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-293078-m04"
	I0401 18:25:21.244995       1 event.go:376] "Event occurred" object="ha-293078-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-293078-m02 status is now: NodeNotReady"
	I0401 18:25:21.267091       1 event.go:376] "Event occurred" object="kube-system/kube-controller-manager-ha-293078-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:25:21.286658       1 event.go:376] "Event occurred" object="kube-system/etcd-ha-293078-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:25:21.302649       1 event.go:376] "Event occurred" object="kube-system/kube-vip-ha-293078-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:25:21.323243       1 event.go:376] "Event occurred" object="kube-system/kube-scheduler-ha-293078-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:25:21.337646       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-ntbk4" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:25:21.397897       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-8s2xk" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:25:21.434309       1 event.go:376] "Event occurred" object="kube-system/kindnet-f4djp" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:25:21.450785       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="112.834631ms"
	I0401 18:25:21.451005       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="132.438µs"
	I0401 18:25:21.460778       1 event.go:376] "Event occurred" object="kube-system/kube-apiserver-ha-293078-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-proxy [8d7ab06dacb1f801ea9714513d3f23a0bad938d609fb9f291d0ec0c4903d8d6a] <==
	I0401 18:21:03.148505       1 server_others.go:72] "Using iptables proxy"
	I0401 18:21:03.171602       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.74"]
	I0401 18:21:03.256037       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0401 18:21:03.256088       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 18:21:03.256101       1 server_others.go:168] "Using iptables Proxier"
	I0401 18:21:03.259948       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 18:21:03.261131       1 server.go:865] "Version info" version="v1.29.3"
	I0401 18:21:03.261181       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 18:21:03.269053       1 config.go:188] "Starting service config controller"
	I0401 18:21:03.269330       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0401 18:21:03.269457       1 config.go:97] "Starting endpoint slice config controller"
	I0401 18:21:03.269465       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0401 18:21:03.272511       1 config.go:315] "Starting node config controller"
	I0401 18:21:03.272548       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0401 18:21:03.369489       1 shared_informer.go:318] Caches are synced for service config
	I0401 18:21:03.369568       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0401 18:21:03.372784       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6bd1ccbceec8c5056f450169f49c17acf202e064825e6c51a55ca89e591e25b5] <==
	E0401 18:20:47.193265       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 18:20:47.212493       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0401 18:20:47.212543       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0401 18:20:47.287659       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 18:20:47.287712       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0401 18:20:47.311190       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 18:20:47.311337       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0401 18:20:47.440870       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 18:20:47.440930       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0401 18:20:47.476851       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 18:20:47.476927       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0401 18:20:47.525753       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 18:20:47.525793       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0401 18:20:49.041641       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0401 18:24:10.774118       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rccf9\": pod kindnet-rccf9 is already assigned to node \"ha-293078-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-rccf9" node="ha-293078-m04"
	E0401 18:24:10.774856       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rccf9\": pod kindnet-rccf9 is already assigned to node \"ha-293078-m04\"" pod="kube-system/kindnet-rccf9"
	I0401 18:24:10.778681       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rccf9" node="ha-293078-m04"
	E0401 18:24:10.807167       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-v2shv\": pod kindnet-v2shv is already assigned to node \"ha-293078-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-v2shv" node="ha-293078-m04"
	E0401 18:24:10.807305       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 7740b1db-8105-47c4-a822-717e4c12c0cd(kube-system/kindnet-v2shv) wasn't assumed so cannot be forgotten"
	E0401 18:24:10.807446       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-v2shv\": pod kindnet-v2shv is already assigned to node \"ha-293078-m04\"" pod="kube-system/kindnet-v2shv"
	I0401 18:24:10.807501       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-v2shv" node="ha-293078-m04"
	E0401 18:24:10.819254       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-9bvjh\": pod kube-proxy-9bvjh is already assigned to node \"ha-293078-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-9bvjh" node="ha-293078-m04"
	E0401 18:24:10.819794       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod a7691fe3-ae08-4a93-abc1-86bab696bf9f(kube-system/kube-proxy-9bvjh) wasn't assumed so cannot be forgotten"
	E0401 18:24:10.819870       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-9bvjh\": pod kube-proxy-9bvjh is already assigned to node \"ha-293078-m04\"" pod="kube-system/kube-proxy-9bvjh"
	I0401 18:24:10.819909       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-9bvjh" node="ha-293078-m04"
	
	
	==> kubelet <==
	Apr 01 18:23:49 ha-293078 kubelet[1369]: E0401 18:23:49.983120    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 18:23:49 ha-293078 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 18:23:49 ha-293078 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 18:23:49 ha-293078 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 18:23:49 ha-293078 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 18:24:49 ha-293078 kubelet[1369]: E0401 18:24:49.981916    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 18:24:49 ha-293078 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 18:24:49 ha-293078 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 18:24:49 ha-293078 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 18:24:49 ha-293078 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 18:25:49 ha-293078 kubelet[1369]: E0401 18:25:49.983465    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 18:25:49 ha-293078 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 18:25:49 ha-293078 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 18:25:49 ha-293078 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 18:25:49 ha-293078 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 18:26:49 ha-293078 kubelet[1369]: E0401 18:26:49.981767    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 18:26:49 ha-293078 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 18:26:49 ha-293078 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 18:26:49 ha-293078 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 18:26:49 ha-293078 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 18:27:49 ha-293078 kubelet[1369]: E0401 18:27:49.986506    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 18:27:49 ha-293078 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 18:27:49 ha-293078 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 18:27:49 ha-293078 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 18:27:49 ha-293078 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-293078 -n ha-293078
helpers_test.go:261: (dbg) Run:  kubectl --context ha-293078 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (58.98s)
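
The etcd log quoted above repeats the same warning every few milliseconds while remote peer 7d555fa605d0a4f8 (the restarted secondary control plane) is unreachable. A minimal sketch for tallying those warnings per peer from a saved copy of the post-mortem output; the file name "postmortem.log" and the program itself are assumptions for illustration, not part of the test harness, and only the Go standard library is used.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	// Hypothetical path: wherever the post-mortem log above was saved.
	f, err := os.Open("postmortem.log")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Pull the remote-peer-id field out of etcd's JSON log lines.
	peerRe := regexp.MustCompile(`"remote-peer-id":"([0-9a-f]+)"`)
	counts := map[string]int{}

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some log lines are very long
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, "dropped internal Raft message") {
			continue
		}
		if m := peerRe.FindStringSubmatch(line); m != nil {
			counts[m[1]]++
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for peer, n := range counts {
		fmt.Printf("peer %s: %d dropped messages\n", peer, n)
	}
}

A large count concentrated on a single peer ID, as in the log above, points at the stopped or restarting member rather than a cluster-wide network problem.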

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (373.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-293078 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-293078 -v=7 --alsologtostderr
E0401 18:28:52.854555   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 18:29:16.856816   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
E0401 18:29:44.542579   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-293078 -v=7 --alsologtostderr: exit status 82 (2m2.057622523s)
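
In the stderr below, the stop of "ha-293078-m03" never completes: the driver is polled once per second and the wait gives up after 120 attempts, which lines up with the command's roughly two-minute runtime before it returns a non-zero exit status. A rough sketch of such a bounded stop-wait poll, using illustrative names (waitForStop, isStopped) rather than minikube's real API:

package main

import (
	"fmt"
	"time"
)

const maxAttempts = 120 // matches the n/120 counter in the stderr below

// isStopped stands in for the real driver call that asks the hypervisor
// whether the VM has powered off; here it always reports "still running".
func isStopped(name string) bool {
	return false
}

// waitForStop polls once per second and gives up after maxAttempts,
// mirroring the "Waiting for machine to stop n/120" lines below.
func waitForStop(name string) error {
	for i := 0; i < maxAttempts; i++ {
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		if isStopped(name) {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("machine %q did not stop within %d attempts", name, maxAttempts)
}

func main() {
	if err := waitForStop("ha-293078-m03"); err != nil {
		fmt.Println(err)
	}
}

When the loop is exhausted the stop is reported as failed, which is why the command above exits non-zero after about two minutes even though the first node ("ha-293078-m04") stopped almost immediately.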

                                                
                                                
-- stdout --
	* Stopping node "ha-293078-m04"  ...
	* Stopping node "ha-293078-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 18:28:06.271758   32594 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:28:06.272307   32594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:28:06.272325   32594 out.go:304] Setting ErrFile to fd 2...
	I0401 18:28:06.272332   32594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:28:06.272730   32594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:28:06.273157   32594 out.go:298] Setting JSON to false
	I0401 18:28:06.273272   32594 mustload.go:65] Loading cluster: ha-293078
	I0401 18:28:06.274241   32594 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:28:06.274330   32594 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/config.json ...
	I0401 18:28:06.274583   32594 mustload.go:65] Loading cluster: ha-293078
	I0401 18:28:06.274783   32594 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:28:06.274821   32594 stop.go:39] StopHost: ha-293078-m04
	I0401 18:28:06.275352   32594 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:28:06.275410   32594 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:28:06.290594   32594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38111
	I0401 18:28:06.291171   32594 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:28:06.291777   32594 main.go:141] libmachine: Using API Version  1
	I0401 18:28:06.291798   32594 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:28:06.292093   32594 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:28:06.294673   32594 out.go:177] * Stopping node "ha-293078-m04"  ...
	I0401 18:28:06.296348   32594 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0401 18:28:06.296387   32594 main.go:141] libmachine: (ha-293078-m04) Calling .DriverName
	I0401 18:28:06.296594   32594 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0401 18:28:06.296619   32594 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHHostname
	I0401 18:28:06.299393   32594 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:28:06.299774   32594 main.go:141] libmachine: (ha-293078-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:ec:c5", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:23:56 +0000 UTC Type:0 Mac:52:54:00:b5:ec:c5 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-293078-m04 Clientid:01:52:54:00:b5:ec:c5}
	I0401 18:28:06.299796   32594 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:28:06.299946   32594 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHPort
	I0401 18:28:06.300103   32594 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHKeyPath
	I0401 18:28:06.300282   32594 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHUsername
	I0401 18:28:06.300418   32594 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m04/id_rsa Username:docker}
	I0401 18:28:06.390279   32594 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0401 18:28:06.446606   32594 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0401 18:28:06.502599   32594 main.go:141] libmachine: Stopping "ha-293078-m04"...
	I0401 18:28:06.502629   32594 main.go:141] libmachine: (ha-293078-m04) Calling .GetState
	I0401 18:28:06.504280   32594 main.go:141] libmachine: (ha-293078-m04) Calling .Stop
	I0401 18:28:06.507698   32594 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 0/120
	I0401 18:28:07.851472   32594 main.go:141] libmachine: (ha-293078-m04) Calling .GetState
	I0401 18:28:07.852715   32594 main.go:141] libmachine: Machine "ha-293078-m04" was stopped.
	I0401 18:28:07.852730   32594 stop.go:75] duration metric: took 1.556385161s to stop
	I0401 18:28:07.852746   32594 stop.go:39] StopHost: ha-293078-m03
	I0401 18:28:07.853025   32594 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:28:07.853061   32594 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:28:07.867588   32594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43001
	I0401 18:28:07.868023   32594 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:28:07.868514   32594 main.go:141] libmachine: Using API Version  1
	I0401 18:28:07.868540   32594 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:28:07.868857   32594 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:28:07.870811   32594 out.go:177] * Stopping node "ha-293078-m03"  ...
	I0401 18:28:07.872169   32594 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0401 18:28:07.872195   32594 main.go:141] libmachine: (ha-293078-m03) Calling .DriverName
	I0401 18:28:07.872457   32594 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0401 18:28:07.872479   32594 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHHostname
	I0401 18:28:07.875230   32594 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:28:07.875677   32594 main.go:141] libmachine: (ha-293078-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:33:4d", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:22:31 +0000 UTC Type:0 Mac:52:54:00:48:33:4d Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-293078-m03 Clientid:01:52:54:00:48:33:4d}
	I0401 18:28:07.875716   32594 main.go:141] libmachine: (ha-293078-m03) DBG | domain ha-293078-m03 has defined IP address 192.168.39.210 and MAC address 52:54:00:48:33:4d in network mk-ha-293078
	I0401 18:28:07.875809   32594 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHPort
	I0401 18:28:07.875982   32594 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHKeyPath
	I0401 18:28:07.876124   32594 main.go:141] libmachine: (ha-293078-m03) Calling .GetSSHUsername
	I0401 18:28:07.876231   32594 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m03/id_rsa Username:docker}
	I0401 18:28:07.966295   32594 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0401 18:28:08.021274   32594 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0401 18:28:08.082508   32594 main.go:141] libmachine: Stopping "ha-293078-m03"...
	I0401 18:28:08.082534   32594 main.go:141] libmachine: (ha-293078-m03) Calling .GetState
	I0401 18:28:08.084167   32594 main.go:141] libmachine: (ha-293078-m03) Calling .Stop
	I0401 18:28:08.087556   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 0/120
	I0401 18:28:09.088983   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 1/120
	I0401 18:28:10.090410   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 2/120
	I0401 18:28:11.091689   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 3/120
	I0401 18:28:12.092878   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 4/120
	I0401 18:28:13.094632   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 5/120
	I0401 18:28:14.096077   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 6/120
	I0401 18:28:15.097419   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 7/120
	I0401 18:28:16.098732   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 8/120
	I0401 18:28:17.100074   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 9/120
	I0401 18:28:18.102033   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 10/120
	I0401 18:28:19.103444   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 11/120
	I0401 18:28:20.104938   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 12/120
	I0401 18:28:21.106244   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 13/120
	I0401 18:28:22.107595   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 14/120
	I0401 18:28:23.109733   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 15/120
	I0401 18:28:24.110869   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 16/120
	I0401 18:28:25.112150   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 17/120
	I0401 18:28:26.113343   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 18/120
	I0401 18:28:27.114745   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 19/120
	I0401 18:28:28.116535   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 20/120
	I0401 18:28:29.118081   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 21/120
	I0401 18:28:30.119535   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 22/120
	I0401 18:28:31.120942   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 23/120
	I0401 18:28:32.122461   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 24/120
	I0401 18:28:33.124337   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 25/120
	I0401 18:28:34.125844   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 26/120
	I0401 18:28:35.127207   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 27/120
	I0401 18:28:36.128542   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 28/120
	I0401 18:28:37.130078   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 29/120
	I0401 18:28:38.131700   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 30/120
	I0401 18:28:39.133254   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 31/120
	I0401 18:28:40.134685   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 32/120
	I0401 18:28:41.136569   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 33/120
	I0401 18:28:42.138223   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 34/120
	I0401 18:28:43.139564   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 35/120
	I0401 18:28:44.140911   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 36/120
	I0401 18:28:45.142258   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 37/120
	I0401 18:28:46.143487   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 38/120
	I0401 18:28:47.144710   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 39/120
	I0401 18:28:48.146317   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 40/120
	I0401 18:28:49.147766   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 41/120
	I0401 18:28:50.148932   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 42/120
	I0401 18:28:51.150290   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 43/120
	I0401 18:28:52.151580   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 44/120
	I0401 18:28:53.153219   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 45/120
	I0401 18:28:54.154758   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 46/120
	I0401 18:28:55.156199   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 47/120
	I0401 18:28:56.157506   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 48/120
	I0401 18:28:57.158841   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 49/120
	I0401 18:28:58.160190   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 50/120
	I0401 18:28:59.161532   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 51/120
	I0401 18:29:00.162776   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 52/120
	I0401 18:29:01.164245   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 53/120
	I0401 18:29:02.165943   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 54/120
	I0401 18:29:03.167319   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 55/120
	I0401 18:29:04.168523   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 56/120
	I0401 18:29:05.169785   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 57/120
	I0401 18:29:06.172064   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 58/120
	I0401 18:29:07.174164   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 59/120
	I0401 18:29:08.175809   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 60/120
	I0401 18:29:09.177837   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 61/120
	I0401 18:29:10.179168   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 62/120
	I0401 18:29:11.180613   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 63/120
	I0401 18:29:12.182027   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 64/120
	I0401 18:29:13.183905   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 65/120
	I0401 18:29:14.185164   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 66/120
	I0401 18:29:15.186500   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 67/120
	I0401 18:29:16.187804   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 68/120
	I0401 18:29:17.189098   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 69/120
	I0401 18:29:18.190675   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 70/120
	I0401 18:29:19.192002   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 71/120
	I0401 18:29:20.193330   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 72/120
	I0401 18:29:21.194431   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 73/120
	I0401 18:29:22.195786   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 74/120
	I0401 18:29:23.197494   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 75/120
	I0401 18:29:24.198769   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 76/120
	I0401 18:29:25.200223   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 77/120
	I0401 18:29:26.201409   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 78/120
	I0401 18:29:27.202701   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 79/120
	I0401 18:29:28.204645   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 80/120
	I0401 18:29:29.206132   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 81/120
	I0401 18:29:30.207447   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 82/120
	I0401 18:29:31.208760   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 83/120
	I0401 18:29:32.210100   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 84/120
	I0401 18:29:33.212001   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 85/120
	I0401 18:29:34.213534   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 86/120
	I0401 18:29:35.214885   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 87/120
	I0401 18:29:36.216112   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 88/120
	I0401 18:29:37.217494   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 89/120
	I0401 18:29:38.219410   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 90/120
	I0401 18:29:39.220939   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 91/120
	I0401 18:29:40.222221   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 92/120
	I0401 18:29:41.223394   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 93/120
	I0401 18:29:42.224812   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 94/120
	I0401 18:29:43.226717   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 95/120
	I0401 18:29:44.228020   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 96/120
	I0401 18:29:45.229486   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 97/120
	I0401 18:29:46.230758   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 98/120
	I0401 18:29:47.232262   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 99/120
	I0401 18:29:48.233944   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 100/120
	I0401 18:29:49.235298   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 101/120
	I0401 18:29:50.236738   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 102/120
	I0401 18:29:51.238064   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 103/120
	I0401 18:29:52.239455   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 104/120
	I0401 18:29:53.241293   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 105/120
	I0401 18:29:54.242835   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 106/120
	I0401 18:29:55.244588   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 107/120
	I0401 18:29:56.245885   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 108/120
	I0401 18:29:57.248114   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 109/120
	I0401 18:29:58.249999   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 110/120
	I0401 18:29:59.251382   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 111/120
	I0401 18:30:00.252567   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 112/120
	I0401 18:30:01.253948   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 113/120
	I0401 18:30:02.255334   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 114/120
	I0401 18:30:03.257102   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 115/120
	I0401 18:30:04.258465   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 116/120
	I0401 18:30:05.259789   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 117/120
	I0401 18:30:06.261549   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 118/120
	I0401 18:30:07.262852   32594 main.go:141] libmachine: (ha-293078-m03) Waiting for machine to stop 119/120
	I0401 18:30:08.263610   32594 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0401 18:30:08.263657   32594 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0401 18:30:08.265453   32594 out.go:177] 
	W0401 18:30:08.266871   32594 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0401 18:30:08.266888   32594 out.go:239] * 
	W0401 18:30:08.269431   32594 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 18:30:08.271115   32594 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-293078 -v=7 --alsologtostderr" : exit status 82
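Note on the failure mode captured above: the "Waiting for machine to stop N/120" lines show a bounded poll (roughly one check per second, 120 attempts) before the driver gives up with GUEST_STOP_TIMEOUT while the VM is still "Running". The Go sketch below is a minimal illustration of that polling pattern only; it is not minikube's actual driver code, and `waitForStop`/`isStopped` are hypothetical names used for the example.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls a stop check once per second, mirroring the
// "Waiting for machine to stop N/120" lines in the log above.
// It returns an error if the machine never reports as stopped.
func waitForStop(isStopped func() bool, attempts int) error {
	for i := 0; i < attempts; i++ {
		if isStopped() {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Simulate a VM that never stops (as in the failure above);
	// the real run used 120 attempts, 5 here keeps the example short.
	err := waitForStop(func() bool { return false }, 5)
	fmt.Println("stop err:", err)
}
```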
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-293078 --wait=true -v=7 --alsologtostderr
E0401 18:33:52.855248   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-293078 --wait=true -v=7 --alsologtostderr: (4m8.388713909s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-293078
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-293078 -n ha-293078
E0401 18:34:16.857059   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-293078 logs -n 25: (2.1373028s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| cp      | ha-293078 cp ha-293078-m03:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m02:/home/docker/cp-test_ha-293078-m03_ha-293078-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n ha-293078-m02 sudo cat                                          | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /home/docker/cp-test_ha-293078-m03_ha-293078-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-293078 cp ha-293078-m03:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04:/home/docker/cp-test_ha-293078-m03_ha-293078-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n ha-293078-m04 sudo cat                                          | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /home/docker/cp-test_ha-293078-m03_ha-293078-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-293078 cp testdata/cp-test.txt                                                | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-293078 cp ha-293078-m04:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3967030531/001/cp-test_ha-293078-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-293078 cp ha-293078-m04:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078:/home/docker/cp-test_ha-293078-m04_ha-293078.txt                       |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n ha-293078 sudo cat                                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /home/docker/cp-test_ha-293078-m04_ha-293078.txt                                 |           |         |                |                     |                     |
	| cp      | ha-293078 cp ha-293078-m04:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m02:/home/docker/cp-test_ha-293078-m04_ha-293078-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n ha-293078-m02 sudo cat                                          | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /home/docker/cp-test_ha-293078-m04_ha-293078-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-293078 cp ha-293078-m04:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m03:/home/docker/cp-test_ha-293078-m04_ha-293078-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n ha-293078-m03 sudo cat                                          | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /home/docker/cp-test_ha-293078-m04_ha-293078-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-293078 node stop m02 -v=7                                                     | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-293078 node start m02 -v=7                                                    | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:27 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-293078 -v=7                                                           | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | -p ha-293078 -v=7                                                                | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| start   | -p ha-293078 --wait=true -v=7                                                    | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:30 UTC | 01 Apr 24 18:34 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-293078                                                                | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:34 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 18:30:08
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 18:30:08.328548   32936 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:30:08.328682   32936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:30:08.328691   32936 out.go:304] Setting ErrFile to fd 2...
	I0401 18:30:08.328695   32936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:30:08.328888   32936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:30:08.329471   32936 out.go:298] Setting JSON to false
	I0401 18:30:08.330452   32936 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4360,"bootTime":1711991848,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 18:30:08.330508   32936 start.go:139] virtualization: kvm guest
	I0401 18:30:08.333069   32936 out.go:177] * [ha-293078] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 18:30:08.334723   32936 notify.go:220] Checking for updates...
	I0401 18:30:08.334742   32936 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 18:30:08.335962   32936 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 18:30:08.337333   32936 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 18:30:08.338593   32936 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:30:08.339835   32936 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 18:30:08.341116   32936 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 18:30:08.342730   32936 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:30:08.342815   32936 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 18:30:08.343193   32936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:30:08.343230   32936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:30:08.358182   32936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41541
	I0401 18:30:08.358571   32936 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:30:08.359183   32936 main.go:141] libmachine: Using API Version  1
	I0401 18:30:08.359207   32936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:30:08.359538   32936 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:30:08.359706   32936 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:30:08.394285   32936 out.go:177] * Using the kvm2 driver based on existing profile
	I0401 18:30:08.395478   32936 start.go:297] selected driver: kvm2
	I0401 18:30:08.395489   32936 start.go:901] validating driver "kvm2" against &{Name:ha-293078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.29.3 ClusterName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.14 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:30:08.395609   32936 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 18:30:08.395909   32936 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 18:30:08.395970   32936 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 18:30:08.410464   32936 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 18:30:08.411165   32936 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 18:30:08.411225   32936 cni.go:84] Creating CNI manager for ""
	I0401 18:30:08.411240   32936 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0401 18:30:08.411317   32936 start.go:340] cluster config:
	{Name:ha-293078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-293078 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.14 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:30:08.411883   32936 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 18:30:08.413744   32936 out.go:177] * Starting "ha-293078" primary control-plane node in "ha-293078" cluster
	I0401 18:30:08.415082   32936 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 18:30:08.415118   32936 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0401 18:30:08.415134   32936 cache.go:56] Caching tarball of preloaded images
	I0401 18:30:08.415223   32936 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 18:30:08.415238   32936 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0401 18:30:08.415384   32936 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/config.json ...
	I0401 18:30:08.415614   32936 start.go:360] acquireMachinesLock for ha-293078: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 18:30:08.415664   32936 start.go:364] duration metric: took 27.893µs to acquireMachinesLock for "ha-293078"
	I0401 18:30:08.415682   32936 start.go:96] Skipping create...Using existing machine configuration
	I0401 18:30:08.415692   32936 fix.go:54] fixHost starting: 
	I0401 18:30:08.416103   32936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:30:08.416140   32936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:30:08.430119   32936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36443
	I0401 18:30:08.430545   32936 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:30:08.430971   32936 main.go:141] libmachine: Using API Version  1
	I0401 18:30:08.430995   32936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:30:08.431350   32936 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:30:08.431553   32936 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:30:08.431730   32936 main.go:141] libmachine: (ha-293078) Calling .GetState
	I0401 18:30:08.433398   32936 fix.go:112] recreateIfNeeded on ha-293078: state=Running err=<nil>
	W0401 18:30:08.433417   32936 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 18:30:08.435323   32936 out.go:177] * Updating the running kvm2 "ha-293078" VM ...
	I0401 18:30:08.436666   32936 machine.go:94] provisionDockerMachine start ...
	I0401 18:30:08.436683   32936 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:30:08.436839   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:30:08.439389   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:08.439886   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:30:08.439916   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:08.440007   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:30:08.440182   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:30:08.440324   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:30:08.440466   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:30:08.440598   32936 main.go:141] libmachine: Using SSH client type: native
	I0401 18:30:08.440769   32936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0401 18:30:08.440789   32936 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 18:30:08.551483   32936 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-293078
	
	I0401 18:30:08.551506   32936 main.go:141] libmachine: (ha-293078) Calling .GetMachineName
	I0401 18:30:08.551767   32936 buildroot.go:166] provisioning hostname "ha-293078"
	I0401 18:30:08.551788   32936 main.go:141] libmachine: (ha-293078) Calling .GetMachineName
	I0401 18:30:08.551955   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:30:08.554622   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:08.555073   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:30:08.555098   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:08.555239   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:30:08.555425   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:30:08.555582   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:30:08.555719   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:30:08.555888   32936 main.go:141] libmachine: Using SSH client type: native
	I0401 18:30:08.556038   32936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0401 18:30:08.556051   32936 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-293078 && echo "ha-293078" | sudo tee /etc/hostname
	I0401 18:30:08.682841   32936 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-293078
	
	I0401 18:30:08.682876   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:30:08.685357   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:08.685815   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:30:08.685848   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:08.686084   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:30:08.686282   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:30:08.686435   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:30:08.686558   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:30:08.686706   32936 main.go:141] libmachine: Using SSH client type: native
	I0401 18:30:08.686898   32936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0401 18:30:08.686920   32936 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-293078' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-293078/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-293078' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 18:30:08.800236   32936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 18:30:08.800262   32936 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 18:30:08.800286   32936 buildroot.go:174] setting up certificates
	I0401 18:30:08.800297   32936 provision.go:84] configureAuth start
	I0401 18:30:08.800308   32936 main.go:141] libmachine: (ha-293078) Calling .GetMachineName
	I0401 18:30:08.800595   32936 main.go:141] libmachine: (ha-293078) Calling .GetIP
	I0401 18:30:08.803046   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:08.803455   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:30:08.803483   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:08.803606   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:30:08.805731   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:08.806103   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:30:08.806125   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:08.806298   32936 provision.go:143] copyHostCerts
	I0401 18:30:08.806325   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 18:30:08.806357   32936 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 18:30:08.806369   32936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 18:30:08.806433   32936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 18:30:08.806506   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 18:30:08.806525   32936 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 18:30:08.806530   32936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 18:30:08.806555   32936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 18:30:08.806596   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 18:30:08.806613   32936 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 18:30:08.806616   32936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 18:30:08.806635   32936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 18:30:08.806680   32936 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.ha-293078 san=[127.0.0.1 192.168.39.74 ha-293078 localhost minikube]
	I0401 18:30:09.001766   32936 provision.go:177] copyRemoteCerts
	I0401 18:30:09.001818   32936 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 18:30:09.001838   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:30:09.004390   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:09.004720   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:30:09.004743   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:09.004911   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:30:09.005120   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:30:09.005264   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:30:09.005402   32936 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:30:09.089494   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0401 18:30:09.089551   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 18:30:09.119137   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0401 18:30:09.119231   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0401 18:30:09.147243   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0401 18:30:09.147320   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 18:30:09.177439   32936 provision.go:87] duration metric: took 377.129311ms to configureAuth
	I0401 18:30:09.177467   32936 buildroot.go:189] setting minikube options for container-runtime
	I0401 18:30:09.177751   32936 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:30:09.177836   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:30:09.180340   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:09.180683   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:30:09.180709   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:09.180848   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:30:09.181039   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:30:09.181187   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:30:09.181372   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:30:09.181514   32936 main.go:141] libmachine: Using SSH client type: native
	I0401 18:30:09.181689   32936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0401 18:30:09.181705   32936 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 18:31:40.023679   32936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 18:31:40.023702   32936 machine.go:97] duration metric: took 1m31.587022348s to provisionDockerMachine
	I0401 18:31:40.023717   32936 start.go:293] postStartSetup for "ha-293078" (driver="kvm2")
	I0401 18:31:40.023731   32936 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 18:31:40.023750   32936 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:31:40.024117   32936 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 18:31:40.024163   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:31:40.027265   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:40.027757   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:31:40.027780   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:40.028000   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:31:40.028193   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:31:40.028346   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:31:40.028474   32936 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:31:40.116781   32936 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 18:31:40.121600   32936 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 18:31:40.121619   32936 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 18:31:40.121687   32936 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 18:31:40.121772   32936 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 18:31:40.121786   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> /etc/ssl/certs/177512.pem
	I0401 18:31:40.121893   32936 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 18:31:40.132702   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 18:31:40.159615   32936 start.go:296] duration metric: took 135.88365ms for postStartSetup
	I0401 18:31:40.159662   32936 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:31:40.159941   32936 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0401 18:31:40.159969   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:31:40.162351   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:40.162828   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:31:40.162853   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:40.163017   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:31:40.163203   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:31:40.163350   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:31:40.163504   32936 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	W0401 18:31:40.249692   32936 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0401 18:31:40.249713   32936 fix.go:56] duration metric: took 1m31.834022077s for fixHost
	I0401 18:31:40.249731   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:31:40.252348   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:40.252737   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:31:40.252766   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:40.252909   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:31:40.253094   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:31:40.253236   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:31:40.253372   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:31:40.253510   32936 main.go:141] libmachine: Using SSH client type: native
	I0401 18:31:40.253717   32936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0401 18:31:40.253731   32936 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 18:31:40.363092   32936 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711996300.320265387
	
	I0401 18:31:40.363111   32936 fix.go:216] guest clock: 1711996300.320265387
	I0401 18:31:40.363119   32936 fix.go:229] Guest: 2024-04-01 18:31:40.320265387 +0000 UTC Remote: 2024-04-01 18:31:40.249719126 +0000 UTC m=+91.968465902 (delta=70.546261ms)
	I0401 18:31:40.363141   32936 fix.go:200] guest clock delta is within tolerance: 70.546261ms
	I0401 18:31:40.363146   32936 start.go:83] releasing machines lock for "ha-293078", held for 1m31.947470406s
	I0401 18:31:40.363162   32936 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:31:40.363430   32936 main.go:141] libmachine: (ha-293078) Calling .GetIP
	I0401 18:31:40.365970   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:40.366352   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:31:40.366372   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:40.366503   32936 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:31:40.366971   32936 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:31:40.367138   32936 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:31:40.367247   32936 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 18:31:40.367292   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:31:40.367403   32936 ssh_runner.go:195] Run: cat /version.json
	I0401 18:31:40.367424   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:31:40.369986   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:40.370070   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:40.370377   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:31:40.370399   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:40.370491   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:31:40.370519   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:40.370572   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:31:40.370749   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:31:40.370757   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:31:40.370916   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:31:40.370923   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:31:40.371069   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:31:40.371068   32936 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:31:40.371192   32936 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:31:40.476222   32936 ssh_runner.go:195] Run: systemctl --version
	I0401 18:31:40.483827   32936 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 18:31:40.652200   32936 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 18:31:40.665267   32936 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 18:31:40.665339   32936 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 18:31:40.675602   32936 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
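	(Editor's note: the find command above moves any bridge or podman CNI config aside by appending .mk_disabled; here nothing matched, so there was nothing to disable. A hedged Go sketch of the same rename pass is below; the helper name is illustrative, and minikube actually runs find over SSH rather than doing this in-process.)

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames any bridge/podman CNI config in dir to
// <name>.mk_disabled, mirroring the `find ... -exec mv {} {}.mk_disabled`
// command in the log. Illustrative sketch only.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, src)
		}
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	fmt.Println(moved, err)
}
```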
	I0401 18:31:40.675626   32936 start.go:494] detecting cgroup driver to use...
	I0401 18:31:40.675679   32936 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 18:31:40.693067   32936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 18:31:40.708406   32936 docker.go:217] disabling cri-docker service (if available) ...
	I0401 18:31:40.708453   32936 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 18:31:40.723032   32936 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 18:31:40.737652   32936 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 18:31:40.889245   32936 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 18:31:41.051711   32936 docker.go:233] disabling docker service ...
	I0401 18:31:41.051782   32936 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 18:31:41.070684   32936 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 18:31:41.085234   32936 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 18:31:41.262674   32936 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 18:31:41.437549   32936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 18:31:41.454173   32936 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 18:31:41.474597   32936 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 18:31:41.474653   32936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:31:41.487699   32936 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 18:31:41.487746   32936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:31:41.499784   32936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:31:41.512416   32936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:31:41.524485   32936 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 18:31:41.536802   32936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:31:41.548770   32936 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:31:41.560455   32936 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:31:41.572609   32936 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 18:31:41.583854   32936 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 18:31:41.595643   32936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:31:41.753486   32936 ssh_runner.go:195] Run: sudo systemctl restart crio
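	(Editor's note: the sed commands above pin the pause image to registry.k8s.io/pause:3.9 and force cgroup_manager = "cgroupfs" in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. A small Go sketch of the equivalent in-memory rewrite follows; the function is illustrative only, since minikube performs these edits with sed over SSH.)

```go
package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf mirrors the sed edits in the log: pin the pause image and
// force cgroupfs as the cgroup manager. Purely illustrative.
func rewriteCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in))
}
```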
	I0401 18:31:42.068409   32936 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 18:31:42.068477   32936 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 18:31:42.074821   32936 start.go:562] Will wait 60s for crictl version
	I0401 18:31:42.074861   32936 ssh_runner.go:195] Run: which crictl
	I0401 18:31:42.079345   32936 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 18:31:42.121401   32936 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 18:31:42.121461   32936 ssh_runner.go:195] Run: crio --version
	I0401 18:31:42.156264   32936 ssh_runner.go:195] Run: crio --version
	I0401 18:31:42.192329   32936 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 18:31:42.193931   32936 main.go:141] libmachine: (ha-293078) Calling .GetIP
	I0401 18:31:42.196617   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:42.197010   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:31:42.197031   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:42.197288   32936 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 18:31:42.202923   32936 kubeadm.go:877] updating cluster {Name:ha-293078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.14 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 18:31:42.203061   32936 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 18:31:42.203126   32936 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 18:31:42.251001   32936 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 18:31:42.251025   32936 crio.go:433] Images already preloaded, skipping extraction
	I0401 18:31:42.251076   32936 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 18:31:42.289972   32936 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 18:31:42.289991   32936 cache_images.go:84] Images are preloaded, skipping loading
	I0401 18:31:42.290000   32936 kubeadm.go:928] updating node { 192.168.39.74 8443 v1.29.3 crio true true} ...
	I0401 18:31:42.290110   32936 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-293078 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.74
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 18:31:42.290186   32936 ssh_runner.go:195] Run: crio config
	I0401 18:31:42.345883   32936 cni.go:84] Creating CNI manager for ""
	I0401 18:31:42.345912   32936 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0401 18:31:42.345924   32936 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 18:31:42.345952   32936 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.74 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-293078 NodeName:ha-293078 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.74"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.74 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 18:31:42.346074   32936 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.74
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-293078"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.74
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.74"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 18:31:42.346094   32936 kube-vip.go:111] generating kube-vip config ...
	I0401 18:31:42.346131   32936 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0401 18:31:42.360102   32936 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0401 18:31:42.360226   32936 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0401 18:31:42.360298   32936 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 18:31:42.372276   32936 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 18:31:42.372349   32936 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0401 18:31:42.383818   32936 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0401 18:31:42.402739   32936 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 18:31:42.422117   32936 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0401 18:31:42.440824   32936 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0401 18:31:42.460166   32936 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0401 18:31:42.464722   32936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:31:42.627389   32936 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 18:31:42.676728   32936 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078 for IP: 192.168.39.74
	I0401 18:31:42.676758   32936 certs.go:194] generating shared ca certs ...
	I0401 18:31:42.676784   32936 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:31:42.676969   32936 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 18:31:42.677035   32936 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 18:31:42.677055   32936 certs.go:256] generating profile certs ...
	I0401 18:31:42.677160   32936 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.key
	I0401 18:31:42.677197   32936 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.ac3d735a
	I0401 18:31:42.677226   32936 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.ac3d735a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.74 192.168.39.161 192.168.39.210 192.168.39.254]
	I0401 18:31:42.855238   32936 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.ac3d735a ...
	I0401 18:31:42.855270   32936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.ac3d735a: {Name:mk2d663e7ba26a85f02cfee3721bf6eaa4fa35b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:31:42.855463   32936 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.ac3d735a ...
	I0401 18:31:42.855478   32936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.ac3d735a: {Name:mk92ff2516a96f808774f5c18d46850ca95c319a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:31:42.855572   32936 certs.go:381] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.ac3d735a -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt
	I0401 18:31:42.855706   32936 certs.go:385] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.ac3d735a -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key
	I0401 18:31:42.855832   32936 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key
	I0401 18:31:42.855862   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0401 18:31:42.855881   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0401 18:31:42.855901   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0401 18:31:42.855914   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0401 18:31:42.855927   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0401 18:31:42.855939   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0401 18:31:42.855954   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0401 18:31:42.855965   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0401 18:31:42.856006   32936 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 18:31:42.856036   32936 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 18:31:42.856045   32936 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 18:31:42.856064   32936 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 18:31:42.856084   32936 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 18:31:42.856106   32936 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 18:31:42.856176   32936 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 18:31:42.856200   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> /usr/share/ca-certificates/177512.pem
	I0401 18:31:42.856212   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:31:42.856224   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem -> /usr/share/ca-certificates/17751.pem
	I0401 18:31:42.856755   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 18:31:42.886462   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 18:31:42.913811   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 18:31:42.949567   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 18:31:42.986901   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 18:31:43.013452   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 18:31:43.041543   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 18:31:43.068355   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 18:31:43.096848   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 18:31:43.135434   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 18:31:43.165791   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 18:31:43.192800   32936 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 18:31:43.211337   32936 ssh_runner.go:195] Run: openssl version
	I0401 18:31:43.217742   32936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 18:31:43.241809   32936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:31:43.250572   32936 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:31:43.250630   32936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:31:43.264937   32936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 18:31:43.279786   32936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 18:31:43.292270   32936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 18:31:43.297135   32936 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 18:31:43.297186   32936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 18:31:43.303987   32936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 18:31:43.314119   32936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 18:31:43.326277   32936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 18:31:43.332134   32936 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 18:31:43.332202   32936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 18:31:43.338753   32936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 18:31:43.349540   32936 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 18:31:43.354635   32936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 18:31:43.362891   32936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 18:31:43.369675   32936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 18:31:43.376388   32936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 18:31:43.382934   32936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 18:31:43.389539   32936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
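	(Editor's note: the openssl -checkend 86400 invocations above verify that each control-plane certificate remains valid for at least one more day. A hedged Go equivalent using crypto/x509 is below; the path used in main is one of the certs from the log, and the helper name is illustrative rather than minikube's own code.)

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certValidFor reports whether the first certificate in a PEM file is still
// valid d from now, the same question `openssl x509 -checkend 86400` answers.
func certValidFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Valid if expiry lies beyond now + d.
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
```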
	I0401 18:31:43.396196   32936 kubeadm.go:391] StartCluster: {Name:ha-293078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.14 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:31:43.396308   32936 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 18:31:43.396363   32936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 18:31:43.459299   32936 cri.go:89] found id: "a675dbcdc9748f6386a2b82398770ad55d46e03815ede9d9d26e8a7b1ccbdc69"
	I0401 18:31:43.459322   32936 cri.go:89] found id: "83089bbda84e0857d3a6f634701946e11cd3a0e7facd446e8bd19a918aa3e3af"
	I0401 18:31:43.459328   32936 cri.go:89] found id: "b35085c638277df2d3d037d2003d5907adbfeca00f8d8e1cee4f59230a44e8aa"
	I0401 18:31:43.459332   32936 cri.go:89] found id: "d27910db04ffdc2a492a9a09511fc0ab6d4c80f4a897ccf7e48b017c277e9522"
	I0401 18:31:43.459335   32936 cri.go:89] found id: "53f1a82893f662e018743729a3b3bcb80f4eef69f6214b4ec74bc248829cbbc2"
	I0401 18:31:43.459338   32936 cri.go:89] found id: "28e71802f2d239a48bd313b15717cbd9276395c88536fea7e1d98fca1d21a38c"
	I0401 18:31:43.459340   32936 cri.go:89] found id: "be43b3abd52fcb26f579806533a081948a895cdd479befbbc9bd5446fdc060e9"
	I0401 18:31:43.459343   32936 cri.go:89] found id: "ce906a6132be484cf993679eea95d6637b9e3b3e9884820e95723b2b2c33e7e6"
	I0401 18:31:43.459345   32936 cri.go:89] found id: "8d7ab06dacb1f801ea9714513d3f23a0bad938d609fb9f291d0ec0c4903d8d6a"
	I0401 18:31:43.459350   32936 cri.go:89] found id: "c1af36287bacaf83243c8481c963e2cf6f3ec89e4ffb87b80a135b18652a2c9d"
	I0401 18:31:43.459353   32936 cri.go:89] found id: "9d9284db03ef8c515d8a7475c032ebbaa4d501954b6e1f5c383cdcdb3ebf6afb"
	I0401 18:31:43.459355   32936 cri.go:89] found id: "6bd1ccbceec8c5056f450169f49c17acf202e064825e6c51a55ca89e591e25b5"
	I0401 18:31:43.459358   32936 cri.go:89] found id: "8471f59f3de235b71fe57e79412f27884ceb62d668027d7fe3730009d2fbb1fa"
	I0401 18:31:43.459360   32936 cri.go:89] found id: "e36af39fdf13dd3cf98d2d4a8e7666aea913228d31de663d19c302848663d798"
	I0401 18:31:43.459365   32936 cri.go:89] found id: ""
	I0401 18:31:43.459409   32936 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.431160929Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:fa2a91a3428e03ab7ef8014cb6b310ec8a127070255d1a44a2fbcf7339a44b19,Metadata:&PodSandboxMetadata{Name:busybox-7fdf7869d9-7tn8z,Uid:0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711996341109560100,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,pod-template-hash: 7fdf7869d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-01T18:23:33.111482875Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:925bf7ded7bbba806d1c4fb45d3bf0520d952ec80b99694f072306922e9b934f,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-293078,Uid:897e54c6374ab0d6298432af511254b4,Namespace:kube-system,Attempt:0,},State:SANDBOX
_READY,CreatedAt:1711996320540118660,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897e54c6374ab0d6298432af511254b4,},Annotations:map[string]string{kubernetes.io/config.hash: 897e54c6374ab0d6298432af511254b4,kubernetes.io/config.seen: 2024-04-01T18:31:42.417701122Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:433aada64602b49b6c6947765acf3602ebfaf6913ad2d55c12045a6b7810caa7,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-8v456,Uid:28cf6a1d-90df-4802-ad3c-9c0276380a44,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711996307468375674,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024
-04-01T18:21:04.458943684Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:07143304915bd30122d8826c98b4d101e0d042a6cd06e78c5acd637ff860f4e4,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-sqxnb,Uid:17868bbd-b0e9-460c-b191-9707f613af0a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711996307421894451,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b0e9-460c-b191-9707f613af0a,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-01T18:21:04.445579090Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:da260fce1557d9db21f3100d3c6b5a6dd0189371c51d0d9faa0659ecc29f5eca,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711996307413671755,Labels:map[string]str
ing{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/co
nfig.seen: 2024-04-01T18:21:04.457438180Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:09c3e4083c6da6744238462638563448d4c26d9611404139e6b94d0929544c7e,Metadata:&PodSandboxMetadata{Name:kube-proxy-l5q2p,Uid:167db687-ac11-4f57-83c1-048c31a7b2cb,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711996307387290691,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-01T18:21:02.139775812Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7f6f6195913012dfa4bc213f4a58a4a72cc3c7f67aaab83cfc595d9222b1d890,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-293078,Uid:111b7388841713ed3598aaf599c56758,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:17119
96307378711144,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.74:8443,kubernetes.io/config.hash: 111b7388841713ed3598aaf599c56758,kubernetes.io/config.seen: 2024-04-01T18:20:49.837331794Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9cb8873813e799abb80d9670bc16ce65e7c1b4aa4a41ae7da2eaedfe22ce9818,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-293078,Uid:14a552ff6182f687744d2f77e0ce85cc,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711996307360368537,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a552ff6182
f687744d2f77e0ce85cc,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 14a552ff6182f687744d2f77e0ce85cc,kubernetes.io/config.seen: 2024-04-01T18:20:49.837333772Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f2022a163b51a03502db09ec40831846d3a7a7a044ce8967cb9611a92263c393,Metadata:&PodSandboxMetadata{Name:etcd-ha-293078,Uid:ed3d89e46aa7fdf04d31b28a37841ad5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711996307326172856,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.74:2379,kubernetes.io/config.hash: ed3d89e46aa7fdf04d31b28a37841ad5,kubernetes.io/config.seen: 2024-04-01T18:20:49.837328151Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:33
b0fc1f4bd7a36e0c8ae46c40a486bf79c0a94ec11325afccc90cbe8f9f2254,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-293078,Uid:431f977c37ad2da28fe70e24f8f4cfb5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711996307308238277,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 431f977c37ad2da28fe70e24f8f4cfb5,kubernetes.io/config.seen: 2024-04-01T18:20:49.837332872Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:88f19d546e8fac2c3ea8437bf72e612a2b907c5cea31ee8c7deb54e84bc3f710,Metadata:&PodSandboxMetadata{Name:kindnet-rjfcj,Uid:63f6ecc3-4bd0-406b-8096-ffd6115a2de3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711996302619600355,Labels:map[string]string{app: kindnet,controll
er-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-01T18:21:02.164551892Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d2ac86b05a9f4d146abfc431861426b75aa121e86155e33f6885c2287d35c2d9,Metadata:&PodSandboxMetadata{Name:busybox-7fdf7869d9-7tn8z,Uid:0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1711995813445010514,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,pod-template-hash: 7fdf7869d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-01T18:23:33.111482875Z,kubernetes.io/config.sou
rce: api,},RuntimeHandler:,},&PodSandbox{Id:184b6f8a0b09d310e6167558bc2e043f793ec8069ada3f99f07f8c4bf5bbe2a3,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-8v456,Uid:28cf6a1d-90df-4802-ad3c-9c0276380a44,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1711995664780792673,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-01T18:21:04.458943684Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f885d7f062d4925a0c12a93de7fab4a08ad786e7dc47a543daf4c046acd992d8,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-sqxnb,Uid:17868bbd-b0e9-460c-b191-9707f613af0a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1711995664752813870,Labels:map[string]string{io.kubernetes.container.name: POD,io
.kubernetes.pod.name: coredns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b0e9-460c-b191-9707f613af0a,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-01T18:21:04.445579090Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:849ffff6ee9e4b1fed8bc9e2950a7f2d227adf1318502c7d46a0e03e73165ca2,Metadata:&PodSandboxMetadata{Name:kube-proxy-l5q2p,Uid:167db687-ac11-4f57-83c1-048c31a7b2cb,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1711995662464835312,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-01T18:21:02.139775812Z,kubernetes.io/config.source: api,},RuntimeHandler:,
},&PodSandbox{Id:34af251b6243e69ca34eeeb959254863f3933b8142c33d2027be0d4f7647ea8b,Metadata:&PodSandboxMetadata{Name:etcd-ha-293078,Uid:ed3d89e46aa7fdf04d31b28a37841ad5,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1711995642479178891,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.74:2379,kubernetes.io/config.hash: ed3d89e46aa7fdf04d31b28a37841ad5,kubernetes.io/config.seen: 2024-04-01T18:20:41.977515320Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:91aa9ea508a082ce745f620d0c3c5161f596f6efef8dca30ddfad2fdc5376338,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-293078,Uid:14a552ff6182f687744d2f77e0ce85cc,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:17119956424633
07147,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 14a552ff6182f687744d2f77e0ce85cc,kubernetes.io/config.seen: 2024-04-01T18:20:41.977518552Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=9775a039-446d-4a6d-8dfe-1813cd4fd148 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.432312333Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00d9bd3b-50d7-4b5a-b071-0a02325b3bf8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.432601743Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00d9bd3b-50d7-4b5a-b071-0a02325b3bf8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.433156046Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f880f1f32f064d6f4d5fbba6a7e0fa85b4736d0a77363334299d84695997fc3d,PodSandboxId:da260fce1557d9db21f3100d3c6b5a6dd0189371c51d0d9faa0659ecc29f5eca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711996363950766867,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{io.kubernetes.container.hash: 245032af,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e7606a1741f035c4106a889012cf5db5431ac4a2e1390cf5fa25faf62a34ea9,PodSandboxId:88f19d546e8fac2c3ea8437bf72e612a2b907c5cea31ee8c7deb54e84bc3f710,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711996360950374053,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c24bf0f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cbd0e1fa74f9a0bf6ac1fcafa74e7cc52ea84d7f7d3216ffa34610961bb64b,PodSandboxId:33b0fc1f4bd7a36e0c8ae46c40a486bf79c0a94ec11325afccc90cbe8f9f2254,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711996350949798176,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f922d350a52c3b48d57f86e85d8225b11fcc916d1dd95577c4f5fe5d3757c986,PodSandboxId:7f6f6195913012dfa4bc213f4a58a4a72cc3c7f67aaab83cfc595d9222b1d890,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711996349948219848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,},Annotations:map[string]string{io.kubernetes.container.hash: 886f76f4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9a11dda6690123c36d59e2b56a84bd3e52ed833757b6fd4c6d8120bb7e46ba,PodSandboxId:fa2a91a3428e03ab7ef8014cb6b310ec8a127070255d1a44a2fbcf7339a44b19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711996341246662361,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,},Annotations:map[string]string{io.kubernetes.container.hash: 94944394,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a49a7917650045a9a22b204d79808b7124ca401e2d74faabc9b57e255fbd3c,PodSandboxId:925bf7ded7bbba806d1c4fb45d3bf0520d952ec80b99694f072306922e9b934f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711996320647022910,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897e54c6374ab0d6298432af511254b4,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:c748af70e7154a879fb419d898bb0eaa511a6797afb99199f8231d834dca19c4,PodSandboxId:da260fce1557d9db21f3100d3c6b5a6dd0189371c51d0d9faa0659ecc29f5eca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711996308896675087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{io.kubernetes.container.hash: 245032af,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:d0ba4303bba7609a3982e28cc53c7c80afb21aadb86d498d9d4b5e6340e2d039,PodSandboxId:09c3e4083c6da6744238462638563448d4c26d9611404139e6b94d0929544c7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711996308000913140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,},Annotations:map[string]string{io.kubernetes.container.hash: a09407a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6cbca
43e514d079299396ef1d62ccb2d276f802ead726a35dc01e00e35e334,PodSandboxId:433aada64602b49b6c6947765acf3602ebfaf6913ad2d55c12045a6b7810caa7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711996308265312358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,},Annotations:map[string]string{io.kubernetes.container.hash: 286c3144,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5142b30b613168527e4d6ffa1c4e84c977d97a5c7e7f2cd9e331db31875309a,PodSandboxId:07143304915bd30122d8826c98b4d101e0d042a6cd06e78c5acd637ff860f4e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711996308081027181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b0e9-460c-b191-9707f613af0a,},Annotations:map[string]string{io.kubernetes.container.hash: 48f6bb3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a272a73055d5fd196829e20e75ef8aafb0df5ae5f665312afc9e839c52f7766,PodSandboxId:7f6f6195913012dfa4bc213f4a58a4a72cc3c7f67aaab83cfc595d9222b1d890,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711996308080918210,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,},Annotations:map[string]string{io.kubernetes.container.hash: 886f76f4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1def46fae9a1e3494c6e79f3f6224d4b4ff1e4a487370fa491a92924c0622b6,PodSandboxId:33b0fc1f4bd7a36e0c8ae46c40a486bf79c0a94ec11325afccc90cbe8f9f2254,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711996307737973043,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:760c286bbb6db472837a632164ec1f41295aab88d45f26ad6be70fd606b5d770,PodSandboxId:9cb8873813e799abb80d9670bc16ce65e7c1b4aa4a41ae7da2eaedfe22ce9818,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711996307878499651,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4bb7bf172a9ff370e2374952c73ee9f7a9407d8fbe484fef1014a4f770ea75,PodSandboxId:f2022a163b51a03502db09ec40831846d3a7a7a044ce8967cb9611a92263c393,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711996307819829498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,},An
notations:map[string]string{io.kubernetes.container.hash: 5bcf3746,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a675dbcdc9748f6386a2b82398770ad55d46e03815ede9d9d26e8a7b1ccbdc69,PodSandboxId:88f19d546e8fac2c3ea8437bf72e612a2b907c5cea31ee8c7deb54e84bc3f710,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711996302919574959,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1c24bf0f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61d746cfabdcf1e527c0a0136c923d19be52285d3c766da6faaba4eb3b3c013d,PodSandboxId:d2ac86b05a9f4d146abfc431861426b75aa121e86155e33f6885c2287d35c2d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711995814759324620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,},Annotations:map[string]string{io.kuber
netes.container.hash: 94944394,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce906a6132be484cf993679eea95d6637b9e3b3e9884820e95723b2b2c33e7e6,PodSandboxId:184b6f8a0b09d310e6167558bc2e043f793ec8069ada3f99f07f8c4bf5bbe2a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711995665008792137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,},Annotations:map[string]string{io.kubernetes.container.hash: 286c3144,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be43b3abd52fcb26f579806533a081948a895cdd479befbbc9bd5446fdc060e9,PodSandboxId:f885d7f062d4925a0c12a93de7fab4a08ad786e7dc47a543daf4c046acd992d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711995665021082613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b0e9-460c-b191-9707f613af0a,},Annotations:map[string]string{io.kubernetes.container.hash: 48f6bb3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7ab06dacb1f801ea9714513d3f23a0bad938d609fb9f291d0ec0c4903d8d6a,PodSandboxId:849ffff6ee9e4b1fed8bc9e2950a7f2d227adf1318502c7d46a0e03e73165ca2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711995662809506933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,},Annotations:map[string]string{io.kubernetes.container.hash: a09407a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd1ccbceec8c5056f450169f49c17acf202e064825e6c51a55ca89e591e25b5,PodSandboxId:91aa9ea508a082ce745f620d0c3c5161f596f6efef8dca30ddfad2fdc5376338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16
b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711995642771289176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8471f59f3de235b71fe57e79412f27884ceb62d668027d7fe3730009d2fbb1fa,PodSandboxId:34af251b6243e69ca34eeeb959254863f3933b8142c33d2027be0d4f7647ea8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CON
TAINER_EXITED,CreatedAt:1711995642748101156,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 5bcf3746,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00d9bd3b-50d7-4b5a-b071-0a02325b3bf8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.486795237Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db6598fa-04e4-446a-8c09-da4563228406 name=/runtime.v1.RuntimeService/Version
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.486868646Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db6598fa-04e4-446a-8c09-da4563228406 name=/runtime.v1.RuntimeService/Version
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.488355353Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1c3c387f-b736-45b1-9e95-94f27cc1cbe5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.488942488Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711996457488912989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c3c387f-b736-45b1-9e95-94f27cc1cbe5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.489851256Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5049fb9-8f50-4395-86c0-056c37dfb253 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.489933925Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5049fb9-8f50-4395-86c0-056c37dfb253 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.490541176Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f880f1f32f064d6f4d5fbba6a7e0fa85b4736d0a77363334299d84695997fc3d,PodSandboxId:da260fce1557d9db21f3100d3c6b5a6dd0189371c51d0d9faa0659ecc29f5eca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711996363950766867,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{io.kubernetes.container.hash: 245032af,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e7606a1741f035c4106a889012cf5db5431ac4a2e1390cf5fa25faf62a34ea9,PodSandboxId:88f19d546e8fac2c3ea8437bf72e612a2b907c5cea31ee8c7deb54e84bc3f710,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711996360950374053,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c24bf0f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cbd0e1fa74f9a0bf6ac1fcafa74e7cc52ea84d7f7d3216ffa34610961bb64b,PodSandboxId:33b0fc1f4bd7a36e0c8ae46c40a486bf79c0a94ec11325afccc90cbe8f9f2254,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711996350949798176,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f922d350a52c3b48d57f86e85d8225b11fcc916d1dd95577c4f5fe5d3757c986,PodSandboxId:7f6f6195913012dfa4bc213f4a58a4a72cc3c7f67aaab83cfc595d9222b1d890,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711996349948219848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,},Annotations:map[string]string{io.kubernetes.container.hash: 886f76f4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9a11dda6690123c36d59e2b56a84bd3e52ed833757b6fd4c6d8120bb7e46ba,PodSandboxId:fa2a91a3428e03ab7ef8014cb6b310ec8a127070255d1a44a2fbcf7339a44b19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711996341246662361,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,},Annotations:map[string]string{io.kubernetes.container.hash: 94944394,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a49a7917650045a9a22b204d79808b7124ca401e2d74faabc9b57e255fbd3c,PodSandboxId:925bf7ded7bbba806d1c4fb45d3bf0520d952ec80b99694f072306922e9b934f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711996320647022910,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897e54c6374ab0d6298432af511254b4,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:c748af70e7154a879fb419d898bb0eaa511a6797afb99199f8231d834dca19c4,PodSandboxId:da260fce1557d9db21f3100d3c6b5a6dd0189371c51d0d9faa0659ecc29f5eca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711996308896675087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{io.kubernetes.container.hash: 245032af,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:d0ba4303bba7609a3982e28cc53c7c80afb21aadb86d498d9d4b5e6340e2d039,PodSandboxId:09c3e4083c6da6744238462638563448d4c26d9611404139e6b94d0929544c7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711996308000913140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,},Annotations:map[string]string{io.kubernetes.container.hash: a09407a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6cbca
43e514d079299396ef1d62ccb2d276f802ead726a35dc01e00e35e334,PodSandboxId:433aada64602b49b6c6947765acf3602ebfaf6913ad2d55c12045a6b7810caa7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711996308265312358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,},Annotations:map[string]string{io.kubernetes.container.hash: 286c3144,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5142b30b613168527e4d6ffa1c4e84c977d97a5c7e7f2cd9e331db31875309a,PodSandboxId:07143304915bd30122d8826c98b4d101e0d042a6cd06e78c5acd637ff860f4e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711996308081027181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b0e9-460c-b191-9707f613af0a,},Annotations:map[string]string{io.kubernetes.container.hash: 48f6bb3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a272a73055d5fd196829e20e75ef8aafb0df5ae5f665312afc9e839c52f7766,PodSandboxId:7f6f6195913012dfa4bc213f4a58a4a72cc3c7f67aaab83cfc595d9222b1d890,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711996308080918210,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,},Annotations:map[string]string{io.kubernetes.container.hash: 886f76f4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1def46fae9a1e3494c6e79f3f6224d4b4ff1e4a487370fa491a92924c0622b6,PodSandboxId:33b0fc1f4bd7a36e0c8ae46c40a486bf79c0a94ec11325afccc90cbe8f9f2254,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711996307737973043,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:760c286bbb6db472837a632164ec1f41295aab88d45f26ad6be70fd606b5d770,PodSandboxId:9cb8873813e799abb80d9670bc16ce65e7c1b4aa4a41ae7da2eaedfe22ce9818,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711996307878499651,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4bb7bf172a9ff370e2374952c73ee9f7a9407d8fbe484fef1014a4f770ea75,PodSandboxId:f2022a163b51a03502db09ec40831846d3a7a7a044ce8967cb9611a92263c393,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711996307819829498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,},An
notations:map[string]string{io.kubernetes.container.hash: 5bcf3746,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a675dbcdc9748f6386a2b82398770ad55d46e03815ede9d9d26e8a7b1ccbdc69,PodSandboxId:88f19d546e8fac2c3ea8437bf72e612a2b907c5cea31ee8c7deb54e84bc3f710,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711996302919574959,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1c24bf0f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61d746cfabdcf1e527c0a0136c923d19be52285d3c766da6faaba4eb3b3c013d,PodSandboxId:d2ac86b05a9f4d146abfc431861426b75aa121e86155e33f6885c2287d35c2d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711995814759324620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,},Annotations:map[string]string{io.kuber
netes.container.hash: 94944394,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce906a6132be484cf993679eea95d6637b9e3b3e9884820e95723b2b2c33e7e6,PodSandboxId:184b6f8a0b09d310e6167558bc2e043f793ec8069ada3f99f07f8c4bf5bbe2a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711995665008792137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,},Annotations:map[string]string{io.kubernetes.container.hash: 286c3144,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be43b3abd52fcb26f579806533a081948a895cdd479befbbc9bd5446fdc060e9,PodSandboxId:f885d7f062d4925a0c12a93de7fab4a08ad786e7dc47a543daf4c046acd992d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711995665021082613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b0e9-460c-b191-9707f613af0a,},Annotations:map[string]string{io.kubernetes.container.hash: 48f6bb3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7ab06dacb1f801ea9714513d3f23a0bad938d609fb9f291d0ec0c4903d8d6a,PodSandboxId:849ffff6ee9e4b1fed8bc9e2950a7f2d227adf1318502c7d46a0e03e73165ca2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711995662809506933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,},Annotations:map[string]string{io.kubernetes.container.hash: a09407a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd1ccbceec8c5056f450169f49c17acf202e064825e6c51a55ca89e591e25b5,PodSandboxId:91aa9ea508a082ce745f620d0c3c5161f596f6efef8dca30ddfad2fdc5376338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16
b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711995642771289176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8471f59f3de235b71fe57e79412f27884ceb62d668027d7fe3730009d2fbb1fa,PodSandboxId:34af251b6243e69ca34eeeb959254863f3933b8142c33d2027be0d4f7647ea8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CON
TAINER_EXITED,CreatedAt:1711995642748101156,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 5bcf3746,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5049fb9-8f50-4395-86c0-056c37dfb253 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.549252610Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31044e8b-d2f6-480e-a789-620a6f7ab50c name=/runtime.v1.RuntimeService/Version
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.549351542Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31044e8b-d2f6-480e-a789-620a6f7ab50c name=/runtime.v1.RuntimeService/Version
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.551588822Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9fb4e0a-23ff-4ba8-bd77-93f8282606a4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.552114391Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711996457552091064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9fb4e0a-23ff-4ba8-bd77-93f8282606a4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.552882363Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0879a34c-15b8-4e2b-a550-7fd41f9e8e90 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.552963546Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0879a34c-15b8-4e2b-a550-7fd41f9e8e90 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.553550612Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f880f1f32f064d6f4d5fbba6a7e0fa85b4736d0a77363334299d84695997fc3d,PodSandboxId:da260fce1557d9db21f3100d3c6b5a6dd0189371c51d0d9faa0659ecc29f5eca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711996363950766867,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{io.kubernetes.container.hash: 245032af,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e7606a1741f035c4106a889012cf5db5431ac4a2e1390cf5fa25faf62a34ea9,PodSandboxId:88f19d546e8fac2c3ea8437bf72e612a2b907c5cea31ee8c7deb54e84bc3f710,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711996360950374053,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c24bf0f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cbd0e1fa74f9a0bf6ac1fcafa74e7cc52ea84d7f7d3216ffa34610961bb64b,PodSandboxId:33b0fc1f4bd7a36e0c8ae46c40a486bf79c0a94ec11325afccc90cbe8f9f2254,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711996350949798176,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f922d350a52c3b48d57f86e85d8225b11fcc916d1dd95577c4f5fe5d3757c986,PodSandboxId:7f6f6195913012dfa4bc213f4a58a4a72cc3c7f67aaab83cfc595d9222b1d890,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711996349948219848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,},Annotations:map[string]string{io.kubernetes.container.hash: 886f76f4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9a11dda6690123c36d59e2b56a84bd3e52ed833757b6fd4c6d8120bb7e46ba,PodSandboxId:fa2a91a3428e03ab7ef8014cb6b310ec8a127070255d1a44a2fbcf7339a44b19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711996341246662361,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,},Annotations:map[string]string{io.kubernetes.container.hash: 94944394,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a49a7917650045a9a22b204d79808b7124ca401e2d74faabc9b57e255fbd3c,PodSandboxId:925bf7ded7bbba806d1c4fb45d3bf0520d952ec80b99694f072306922e9b934f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711996320647022910,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897e54c6374ab0d6298432af511254b4,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:c748af70e7154a879fb419d898bb0eaa511a6797afb99199f8231d834dca19c4,PodSandboxId:da260fce1557d9db21f3100d3c6b5a6dd0189371c51d0d9faa0659ecc29f5eca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711996308896675087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{io.kubernetes.container.hash: 245032af,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:d0ba4303bba7609a3982e28cc53c7c80afb21aadb86d498d9d4b5e6340e2d039,PodSandboxId:09c3e4083c6da6744238462638563448d4c26d9611404139e6b94d0929544c7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711996308000913140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,},Annotations:map[string]string{io.kubernetes.container.hash: a09407a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6cbca
43e514d079299396ef1d62ccb2d276f802ead726a35dc01e00e35e334,PodSandboxId:433aada64602b49b6c6947765acf3602ebfaf6913ad2d55c12045a6b7810caa7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711996308265312358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,},Annotations:map[string]string{io.kubernetes.container.hash: 286c3144,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5142b30b613168527e4d6ffa1c4e84c977d97a5c7e7f2cd9e331db31875309a,PodSandboxId:07143304915bd30122d8826c98b4d101e0d042a6cd06e78c5acd637ff860f4e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711996308081027181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b0e9-460c-b191-9707f613af0a,},Annotations:map[string]string{io.kubernetes.container.hash: 48f6bb3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a272a73055d5fd196829e20e75ef8aafb0df5ae5f665312afc9e839c52f7766,PodSandboxId:7f6f6195913012dfa4bc213f4a58a4a72cc3c7f67aaab83cfc595d9222b1d890,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711996308080918210,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,},Annotations:map[string]string{io.kubernetes.container.hash: 886f76f4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1def46fae9a1e3494c6e79f3f6224d4b4ff1e4a487370fa491a92924c0622b6,PodSandboxId:33b0fc1f4bd7a36e0c8ae46c40a486bf79c0a94ec11325afccc90cbe8f9f2254,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711996307737973043,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:760c286bbb6db472837a632164ec1f41295aab88d45f26ad6be70fd606b5d770,PodSandboxId:9cb8873813e799abb80d9670bc16ce65e7c1b4aa4a41ae7da2eaedfe22ce9818,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711996307878499651,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4bb7bf172a9ff370e2374952c73ee9f7a9407d8fbe484fef1014a4f770ea75,PodSandboxId:f2022a163b51a03502db09ec40831846d3a7a7a044ce8967cb9611a92263c393,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711996307819829498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,},An
notations:map[string]string{io.kubernetes.container.hash: 5bcf3746,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a675dbcdc9748f6386a2b82398770ad55d46e03815ede9d9d26e8a7b1ccbdc69,PodSandboxId:88f19d546e8fac2c3ea8437bf72e612a2b907c5cea31ee8c7deb54e84bc3f710,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711996302919574959,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1c24bf0f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61d746cfabdcf1e527c0a0136c923d19be52285d3c766da6faaba4eb3b3c013d,PodSandboxId:d2ac86b05a9f4d146abfc431861426b75aa121e86155e33f6885c2287d35c2d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711995814759324620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,},Annotations:map[string]string{io.kuber
netes.container.hash: 94944394,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce906a6132be484cf993679eea95d6637b9e3b3e9884820e95723b2b2c33e7e6,PodSandboxId:184b6f8a0b09d310e6167558bc2e043f793ec8069ada3f99f07f8c4bf5bbe2a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711995665008792137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,},Annotations:map[string]string{io.kubernetes.container.hash: 286c3144,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be43b3abd52fcb26f579806533a081948a895cdd479befbbc9bd5446fdc060e9,PodSandboxId:f885d7f062d4925a0c12a93de7fab4a08ad786e7dc47a543daf4c046acd992d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711995665021082613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b0e9-460c-b191-9707f613af0a,},Annotations:map[string]string{io.kubernetes.container.hash: 48f6bb3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7ab06dacb1f801ea9714513d3f23a0bad938d609fb9f291d0ec0c4903d8d6a,PodSandboxId:849ffff6ee9e4b1fed8bc9e2950a7f2d227adf1318502c7d46a0e03e73165ca2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711995662809506933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,},Annotations:map[string]string{io.kubernetes.container.hash: a09407a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd1ccbceec8c5056f450169f49c17acf202e064825e6c51a55ca89e591e25b5,PodSandboxId:91aa9ea508a082ce745f620d0c3c5161f596f6efef8dca30ddfad2fdc5376338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16
b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711995642771289176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8471f59f3de235b71fe57e79412f27884ceb62d668027d7fe3730009d2fbb1fa,PodSandboxId:34af251b6243e69ca34eeeb959254863f3933b8142c33d2027be0d4f7647ea8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CON
TAINER_EXITED,CreatedAt:1711995642748101156,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 5bcf3746,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0879a34c-15b8-4e2b-a550-7fd41f9e8e90 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.609453824Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4707aa17-38fa-48f4-b22a-95f75f0a812c name=/runtime.v1.RuntimeService/Version
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.609719432Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4707aa17-38fa-48f4-b22a-95f75f0a812c name=/runtime.v1.RuntimeService/Version
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.612116364Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e52e6edf-535f-4d90-b294-a82a75799854 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.612870279Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711996457612844130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e52e6edf-535f-4d90-b294-a82a75799854 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.613552824Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2d07301-7d5a-4059-9d05-1174f08cb5d9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.613632662Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2d07301-7d5a-4059-9d05-1174f08cb5d9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:34:17 ha-293078 crio[3842]: time="2024-04-01 18:34:17.614294276Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f880f1f32f064d6f4d5fbba6a7e0fa85b4736d0a77363334299d84695997fc3d,PodSandboxId:da260fce1557d9db21f3100d3c6b5a6dd0189371c51d0d9faa0659ecc29f5eca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711996363950766867,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{io.kubernetes.container.hash: 245032af,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e7606a1741f035c4106a889012cf5db5431ac4a2e1390cf5fa25faf62a34ea9,PodSandboxId:88f19d546e8fac2c3ea8437bf72e612a2b907c5cea31ee8c7deb54e84bc3f710,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711996360950374053,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c24bf0f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cbd0e1fa74f9a0bf6ac1fcafa74e7cc52ea84d7f7d3216ffa34610961bb64b,PodSandboxId:33b0fc1f4bd7a36e0c8ae46c40a486bf79c0a94ec11325afccc90cbe8f9f2254,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711996350949798176,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f922d350a52c3b48d57f86e85d8225b11fcc916d1dd95577c4f5fe5d3757c986,PodSandboxId:7f6f6195913012dfa4bc213f4a58a4a72cc3c7f67aaab83cfc595d9222b1d890,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711996349948219848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,},Annotations:map[string]string{io.kubernetes.container.hash: 886f76f4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9a11dda6690123c36d59e2b56a84bd3e52ed833757b6fd4c6d8120bb7e46ba,PodSandboxId:fa2a91a3428e03ab7ef8014cb6b310ec8a127070255d1a44a2fbcf7339a44b19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711996341246662361,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,},Annotations:map[string]string{io.kubernetes.container.hash: 94944394,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a49a7917650045a9a22b204d79808b7124ca401e2d74faabc9b57e255fbd3c,PodSandboxId:925bf7ded7bbba806d1c4fb45d3bf0520d952ec80b99694f072306922e9b934f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711996320647022910,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897e54c6374ab0d6298432af511254b4,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:c748af70e7154a879fb419d898bb0eaa511a6797afb99199f8231d834dca19c4,PodSandboxId:da260fce1557d9db21f3100d3c6b5a6dd0189371c51d0d9faa0659ecc29f5eca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711996308896675087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{io.kubernetes.container.hash: 245032af,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:d0ba4303bba7609a3982e28cc53c7c80afb21aadb86d498d9d4b5e6340e2d039,PodSandboxId:09c3e4083c6da6744238462638563448d4c26d9611404139e6b94d0929544c7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711996308000913140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,},Annotations:map[string]string{io.kubernetes.container.hash: a09407a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6cbca
43e514d079299396ef1d62ccb2d276f802ead726a35dc01e00e35e334,PodSandboxId:433aada64602b49b6c6947765acf3602ebfaf6913ad2d55c12045a6b7810caa7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711996308265312358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,},Annotations:map[string]string{io.kubernetes.container.hash: 286c3144,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5142b30b613168527e4d6ffa1c4e84c977d97a5c7e7f2cd9e331db31875309a,PodSandboxId:07143304915bd30122d8826c98b4d101e0d042a6cd06e78c5acd637ff860f4e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711996308081027181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b0e9-460c-b191-9707f613af0a,},Annotations:map[string]string{io.kubernetes.container.hash: 48f6bb3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a272a73055d5fd196829e20e75ef8aafb0df5ae5f665312afc9e839c52f7766,PodSandboxId:7f6f6195913012dfa4bc213f4a58a4a72cc3c7f67aaab83cfc595d9222b1d890,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711996308080918210,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,},Annotations:map[string]string{io.kubernetes.container.hash: 886f76f4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1def46fae9a1e3494c6e79f3f6224d4b4ff1e4a487370fa491a92924c0622b6,PodSandboxId:33b0fc1f4bd7a36e0c8ae46c40a486bf79c0a94ec11325afccc90cbe8f9f2254,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711996307737973043,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:760c286bbb6db472837a632164ec1f41295aab88d45f26ad6be70fd606b5d770,PodSandboxId:9cb8873813e799abb80d9670bc16ce65e7c1b4aa4a41ae7da2eaedfe22ce9818,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711996307878499651,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4bb7bf172a9ff370e2374952c73ee9f7a9407d8fbe484fef1014a4f770ea75,PodSandboxId:f2022a163b51a03502db09ec40831846d3a7a7a044ce8967cb9611a92263c393,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711996307819829498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,},An
notations:map[string]string{io.kubernetes.container.hash: 5bcf3746,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a675dbcdc9748f6386a2b82398770ad55d46e03815ede9d9d26e8a7b1ccbdc69,PodSandboxId:88f19d546e8fac2c3ea8437bf72e612a2b907c5cea31ee8c7deb54e84bc3f710,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711996302919574959,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1c24bf0f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61d746cfabdcf1e527c0a0136c923d19be52285d3c766da6faaba4eb3b3c013d,PodSandboxId:d2ac86b05a9f4d146abfc431861426b75aa121e86155e33f6885c2287d35c2d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711995814759324620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,},Annotations:map[string]string{io.kuber
netes.container.hash: 94944394,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce906a6132be484cf993679eea95d6637b9e3b3e9884820e95723b2b2c33e7e6,PodSandboxId:184b6f8a0b09d310e6167558bc2e043f793ec8069ada3f99f07f8c4bf5bbe2a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711995665008792137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,},Annotations:map[string]string{io.kubernetes.container.hash: 286c3144,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be43b3abd52fcb26f579806533a081948a895cdd479befbbc9bd5446fdc060e9,PodSandboxId:f885d7f062d4925a0c12a93de7fab4a08ad786e7dc47a543daf4c046acd992d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711995665021082613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b0e9-460c-b191-9707f613af0a,},Annotations:map[string]string{io.kubernetes.container.hash: 48f6bb3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7ab06dacb1f801ea9714513d3f23a0bad938d609fb9f291d0ec0c4903d8d6a,PodSandboxId:849ffff6ee9e4b1fed8bc9e2950a7f2d227adf1318502c7d46a0e03e73165ca2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711995662809506933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,},Annotations:map[string]string{io.kubernetes.container.hash: a09407a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd1ccbceec8c5056f450169f49c17acf202e064825e6c51a55ca89e591e25b5,PodSandboxId:91aa9ea508a082ce745f620d0c3c5161f596f6efef8dca30ddfad2fdc5376338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16
b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711995642771289176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8471f59f3de235b71fe57e79412f27884ceb62d668027d7fe3730009d2fbb1fa,PodSandboxId:34af251b6243e69ca34eeeb959254863f3933b8142c33d2027be0d4f7647ea8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CON
TAINER_EXITED,CreatedAt:1711995642748101156,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 5bcf3746,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2d07301-7d5a-4059-9d05-1174f08cb5d9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f880f1f32f064       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   da260fce1557d       storage-provisioner
	4e7606a1741f0       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               3                   88f19d546e8fa       kindnet-rjfcj
	e4cbd0e1fa74f       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      About a minute ago   Running             kube-controller-manager   2                   33b0fc1f4bd7a       kube-controller-manager-ha-293078
	f922d350a52c3       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      About a minute ago   Running             kube-apiserver            3                   7f6f619591301       kube-apiserver-ha-293078
	7c9a11dda6690       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   fa2a91a3428e0       busybox-7fdf7869d9-7tn8z
	c6a49a7917650       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  0                   925bf7ded7bbb       kube-vip-ha-293078
	c748af70e7154       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   da260fce1557d       storage-provisioner
	ab6cbca43e514       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   433aada64602b       coredns-76f75df574-8v456
	f5142b30b6131       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   07143304915bd       coredns-76f75df574-sqxnb
	8a272a73055d5       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      2 minutes ago        Exited              kube-apiserver            2                   7f6f619591301       kube-apiserver-ha-293078
	d0ba4303bba76       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      2 minutes ago        Running             kube-proxy                1                   09c3e4083c6da       kube-proxy-l5q2p
	760c286bbb6db       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      2 minutes ago        Running             kube-scheduler            1                   9cb8873813e79       kube-scheduler-ha-293078
	2a4bb7bf172a9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   f2022a163b51a       etcd-ha-293078
	d1def46fae9a1       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      2 minutes ago        Exited              kube-controller-manager   1                   33b0fc1f4bd7a       kube-controller-manager-ha-293078
	a675dbcdc9748       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               2                   88f19d546e8fa       kindnet-rjfcj
	61d746cfabdcf       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   d2ac86b05a9f4       busybox-7fdf7869d9-7tn8z
	be43b3abd52fc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   f885d7f062d49       coredns-76f75df574-sqxnb
	ce906a6132be4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   184b6f8a0b09d       coredns-76f75df574-8v456
	8d7ab06dacb1f       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      13 minutes ago       Exited              kube-proxy                0                   849ffff6ee9e4       kube-proxy-l5q2p
	6bd1ccbceec8c       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      13 minutes ago       Exited              kube-scheduler            0                   91aa9ea508a08       kube-scheduler-ha-293078
	8471f59f3de23       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   34af251b6243e       etcd-ha-293078
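	
	A hedged note on where this listing comes from: the table above and the ListContainers responses in the crio debug log are served by the same CRI RuntimeService. The sketch below is illustrative only (it is not part of the test), assuming the standard CRI v1 Go API and the crio socket path shown in the log above; an empty filter returns the full container list, matching the "No filters were applied" debug lines.
	
	// Minimal sketch: list all containers over the CRI socket used by this node.
	// Assumes k8s.io/cri-api v1 and unix:///var/run/crio/crio.sock (from the log above).
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// An empty ListContainersRequest applies no filters, so the full list
		// (running and exited containers) is returned, as in the debug log.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// IDs are truncated to 13 characters, mirroring the table above.
			fmt.Printf("%-13s  %-25s  attempt=%d  %s\n",
				c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}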
	
	
	==> coredns [ab6cbca43e514d079299396ef1d62ccb2d276f802ead726a35dc01e00e35e334] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[372260125]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Apr-2024 18:31:55.002) (total time: 10001ms):
	Trace[372260125]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:32:05.003)
	Trace[372260125]: [10.001282714s] [10.001282714s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:39210->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:39210->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [be43b3abd52fcb26f579806533a081948a895cdd479befbbc9bd5446fdc060e9] <==
	[INFO] 10.244.0.4:48954 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004445287s
	[INFO] 10.244.0.4:41430 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00325614s
	[INFO] 10.244.0.4:43938 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000214694s
	[INFO] 10.244.0.4:55272 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150031s
	[INFO] 10.244.1.2:53484 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00036286s
	[INFO] 10.244.1.2:40882 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000191317s
	[INFO] 10.244.1.2:44362 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000231809s
	[INFO] 10.244.2.2:38878 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130983s
	[INFO] 10.244.2.2:55123 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000140829s
	[INFO] 10.244.2.2:60293 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000207687s
	[INFO] 10.244.2.2:42748 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000162463s
	[INFO] 10.244.0.4:51962 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171832s
	[INFO] 10.244.1.2:34522 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169219s
	[INFO] 10.244.1.2:45853 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000149138s
	[INFO] 10.244.0.4:34814 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000154553s
	[INFO] 10.244.1.2:51449 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125618s
	[INFO] 10.244.1.2:53188 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000205396s
	[INFO] 10.244.2.2:55517 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011978s
	[INFO] 10.244.2.2:58847 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014087s
	[INFO] 10.244.2.2:55721 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148617s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ce906a6132be484cf993679eea95d6637b9e3b3e9884820e95723b2b2c33e7e6] <==
	[INFO] 10.244.1.2:46630 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135925s
	[INFO] 10.244.2.2:37886 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147427s
	[INFO] 10.244.2.2:47974 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002026718s
	[INFO] 10.244.2.2:36742 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132507s
	[INFO] 10.244.2.2:60458 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001236853s
	[INFO] 10.244.0.4:36514 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079136s
	[INFO] 10.244.0.4:54146 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061884s
	[INFO] 10.244.0.4:48422 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049796s
	[INFO] 10.244.1.2:53602 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174827s
	[INFO] 10.244.1.2:52752 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123202s
	[INFO] 10.244.2.2:42824 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122778s
	[INFO] 10.244.2.2:39412 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138599s
	[INFO] 10.244.2.2:46213 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134624s
	[INFO] 10.244.2.2:41423 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104186s
	[INFO] 10.244.0.4:56317 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189039s
	[INFO] 10.244.0.4:49692 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000121271s
	[INFO] 10.244.0.4:55372 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000369332s
	[INFO] 10.244.1.2:44134 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000161425s
	[INFO] 10.244.1.2:45595 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000086429s
	[INFO] 10.244.2.2:52399 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233085s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f5142b30b613168527e4d6ffa1c4e84c977d97a5c7e7f2cd9e331db31875309a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:60656->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:58584->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:58584->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:60656->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
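	
	The failure modes repeated above (TLS handshake timeout, "no route to host", "connection refused") all concern the in-cluster kubernetes Service VIP, 10.96.0.1:443, while the apiserver was restarting. As an illustrative sketch only (assuming the VIP from the log; not part of the test), a probe run from inside a pod can separate the cases: the two connect errors surface from the TCP dial, whereas a handshake timeout only appears after the dial has succeeded.
	
	// Hedged sketch: probe the Service VIP seen in the coredns log above.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		addr := "10.96.0.1:443" // kubernetes Service VIP from the log; adjust per cluster
	
		// "connect: no route to host" / "connect: connection refused" show up here.
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			fmt.Println("tcp dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("tcp dial ok")
	
		// A handshake that does not finish in time corresponds to the
		// "TLS handshake timeout" errors: the dial worked, the handshake did not.
		_ = conn.SetDeadline(time.Now().Add(3 * time.Second))
		tlsConn := tls.Client(conn, &tls.Config{InsecureSkipVerify: true}) // probe only, no verification
		if err := tlsConn.Handshake(); err != nil {
			fmt.Println("tls handshake failed:", err)
			return
		}
		fmt.Println("tls handshake ok")
	}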
	
	
	==> describe nodes <==
	Name:               ha-293078
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-293078
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=ha-293078
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_01T18_20_50_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 18:20:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-293078
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 18:34:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 18:32:34 +0000   Mon, 01 Apr 2024 18:20:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 18:32:34 +0000   Mon, 01 Apr 2024 18:20:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 18:32:34 +0000   Mon, 01 Apr 2024 18:20:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 18:32:34 +0000   Mon, 01 Apr 2024 18:21:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.74
	  Hostname:    ha-293078
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3e3b54c701944ac9af1db6484a71e599
	  System UUID:                3e3b54c7-0194-4ac9-af1d-b6484a71e599
	  Boot ID:                    7f2e19c7-2c6d-417a-9d2d-1c4d117eee25
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-7tn8z             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-76f75df574-8v456             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-76f75df574-sqxnb             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-293078                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-rjfcj                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-293078             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-293078    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-l5q2p                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-293078             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-293078                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 13m    kube-proxy       
	  Normal   Starting                 106s   kube-proxy       
	  Normal   NodeHasNoDiskPressure    13m    kubelet          Node ha-293078 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m    kubelet          Node ha-293078 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m    kubelet          Node ha-293078 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m    node-controller  Node ha-293078 event: Registered Node ha-293078 in Controller
	  Normal   NodeReady                13m    kubelet          Node ha-293078 status is now: NodeReady
	  Normal   RegisteredNode           12m    node-controller  Node ha-293078 event: Registered Node ha-293078 in Controller
	  Normal   RegisteredNode           10m    node-controller  Node ha-293078 event: Registered Node ha-293078 in Controller
	  Warning  ContainerGCFailed        3m29s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           99s    node-controller  Node ha-293078 event: Registered Node ha-293078 in Controller
	  Normal   RegisteredNode           93s    node-controller  Node ha-293078 event: Registered Node ha-293078 in Controller
	  Normal   RegisteredNode           34s    node-controller  Node ha-293078 event: Registered Node ha-293078 in Controller
	
	
	Name:               ha-293078-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-293078-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=ha-293078
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_01T18_22_00_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 18:21:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-293078-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 18:34:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 18:33:15 +0000   Mon, 01 Apr 2024 18:32:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 18:33:15 +0000   Mon, 01 Apr 2024 18:32:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 18:33:15 +0000   Mon, 01 Apr 2024 18:32:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 18:33:15 +0000   Mon, 01 Apr 2024 18:32:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    ha-293078-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca6adfb154a0459d8158168bf9a31bb6
	  System UUID:                ca6adfb1-54a0-459d-8158-168bf9a31bb6
	  Boot ID:                    60ca700d-5f12-448f-8b63-f87c3d66ac34
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-ntbk4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-293078-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-f4djp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-293078-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-293078-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-8s2xk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-293078-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-293078-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 88s                    kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-293078-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-293078-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-293078-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-293078-m02 event: Registered Node ha-293078-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-293078-m02 event: Registered Node ha-293078-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-293078-m02 event: Registered Node ha-293078-m02 in Controller
	  Normal  NodeNotReady             8m57s                  node-controller  Node ha-293078-m02 status is now: NodeNotReady
	  Normal  Starting                 2m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node ha-293078-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node ha-293078-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x7 over 2m12s)  kubelet          Node ha-293078-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           99s                    node-controller  Node ha-293078-m02 event: Registered Node ha-293078-m02 in Controller
	  Normal  RegisteredNode           93s                    node-controller  Node ha-293078-m02 event: Registered Node ha-293078-m02 in Controller
	  Normal  RegisteredNode           34s                    node-controller  Node ha-293078-m02 event: Registered Node ha-293078-m02 in Controller
	
	
	Name:               ha-293078-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-293078-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=ha-293078
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_01T18_23_15_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 18:23:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-293078-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 18:34:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 18:33:43 +0000   Mon, 01 Apr 2024 18:23:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 18:33:43 +0000   Mon, 01 Apr 2024 18:23:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 18:33:43 +0000   Mon, 01 Apr 2024 18:23:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 18:33:43 +0000   Mon, 01 Apr 2024 18:23:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    ha-293078-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c0e3d05a853946ce973ab987568f85f7
	  System UUID:                c0e3d05a-8539-46ce-973a-b987568f85f7
	  Boot ID:                    0edebd51-765b-48ed-8a66-afe079155c66
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-z89qx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-293078-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-ccxmv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-293078-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-293078-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-xjx5z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-293078-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-293078-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 47s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-293078-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-293078-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-293078-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-293078-m03 event: Registered Node ha-293078-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-293078-m03 event: Registered Node ha-293078-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-293078-m03 event: Registered Node ha-293078-m03 in Controller
	  Normal   RegisteredNode           99s                node-controller  Node ha-293078-m03 event: Registered Node ha-293078-m03 in Controller
	  Normal   RegisteredNode           93s                node-controller  Node ha-293078-m03 event: Registered Node ha-293078-m03 in Controller
	  Normal   Starting                 66s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  66s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  66s                kubelet          Node ha-293078-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    66s                kubelet          Node ha-293078-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     66s                kubelet          Node ha-293078-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 66s                kubelet          Node ha-293078-m03 has been rebooted, boot id: 0edebd51-765b-48ed-8a66-afe079155c66
	  Normal   RegisteredNode           34s                node-controller  Node ha-293078-m03 event: Registered Node ha-293078-m03 in Controller
	
	
	Name:               ha-293078-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-293078-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=ha-293078
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_01T18_24_11_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 18:24:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-293078-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 18:34:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 18:34:09 +0000   Mon, 01 Apr 2024 18:34:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 18:34:09 +0000   Mon, 01 Apr 2024 18:34:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 18:34:09 +0000   Mon, 01 Apr 2024 18:34:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 18:34:09 +0000   Mon, 01 Apr 2024 18:34:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    ha-293078-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 071d9c818e6d4564a98e9da52a34ff25
	  System UUID:                071d9c81-8e6d-4564-a98e-9da52a34ff25
	  Boot ID:                    28fcd272-0f75-45e0-a431-29bc701fc638
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-qhwr4       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-49cqh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-293078-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-293078-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-293078-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-293078-m04 event: Registered Node ha-293078-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-293078-m04 event: Registered Node ha-293078-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-293078-m04 event: Registered Node ha-293078-m04 in Controller
	  Normal   NodeReady                9m59s              kubelet          Node ha-293078-m04 status is now: NodeReady
	  Normal   RegisteredNode           99s                node-controller  Node ha-293078-m04 event: Registered Node ha-293078-m04 in Controller
	  Normal   RegisteredNode           93s                node-controller  Node ha-293078-m04 event: Registered Node ha-293078-m04 in Controller
	  Normal   NodeNotReady             59s                node-controller  Node ha-293078-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           34s                node-controller  Node ha-293078-m04 event: Registered Node ha-293078-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)    kubelet          Node ha-293078-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)    kubelet          Node ha-293078-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)    kubelet          Node ha-293078-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s                 kubelet          Node ha-293078-m04 has been rebooted, boot id: 28fcd272-0f75-45e0-a431-29bc701fc638
	  Normal   NodeReady                9s                 kubelet          Node ha-293078-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.937253] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.062108] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066440] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.214972] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.138486] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.294622] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.757712] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +0.062342] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.163879] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.840426] kauditd_printk_skb: 57 callbacks suppressed
	[  +7.059574] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.076658] kauditd_printk_skb: 40 callbacks suppressed
	[Apr 1 18:21] kauditd_printk_skb: 21 callbacks suppressed
	[Apr 1 18:22] kauditd_printk_skb: 74 callbacks suppressed
	[Apr 1 18:28] kauditd_printk_skb: 1 callbacks suppressed
	[Apr 1 18:31] systemd-fstab-generator[3760]: Ignoring "noauto" option for root device
	[  +0.149145] systemd-fstab-generator[3773]: Ignoring "noauto" option for root device
	[  +0.193570] systemd-fstab-generator[3787]: Ignoring "noauto" option for root device
	[  +0.202601] systemd-fstab-generator[3799]: Ignoring "noauto" option for root device
	[  +0.316779] systemd-fstab-generator[3827]: Ignoring "noauto" option for root device
	[  +0.867340] systemd-fstab-generator[3929]: Ignoring "noauto" option for root device
	[  +4.872420] kauditd_printk_skb: 132 callbacks suppressed
	[Apr 1 18:32] kauditd_printk_skb: 87 callbacks suppressed
	[  +9.318872] kauditd_printk_skb: 2 callbacks suppressed
	[ +40.047733] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [2a4bb7bf172a9ff370e2374952c73ee9f7a9407d8fbe484fef1014a4f770ea75] <==
	{"level":"warn","ts":"2024-04-01T18:33:08.994792Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"97b47c491108199","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-01T18:33:08.995916Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"97b47c491108199","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-01T18:33:09.700508Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.210:2380/version","remote-member-id":"97b47c491108199","error":"Get \"https://192.168.39.210:2380/version\": dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-01T18:33:09.700806Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"97b47c491108199","error":"Get \"https://192.168.39.210:2380/version\": dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-01T18:33:13.703147Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.210:2380/version","remote-member-id":"97b47c491108199","error":"Get \"https://192.168.39.210:2380/version\": dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-01T18:33:13.703238Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"97b47c491108199","error":"Get \"https://192.168.39.210:2380/version\": dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-01T18:33:13.995659Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"97b47c491108199","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-01T18:33:13.996103Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"97b47c491108199","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-01T18:33:17.705687Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.210:2380/version","remote-member-id":"97b47c491108199","error":"Get \"https://192.168.39.210:2380/version\": dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-01T18:33:17.706011Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"97b47c491108199","error":"Get \"https://192.168.39.210:2380/version\": dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-01T18:33:18.996298Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"97b47c491108199","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-01T18:33:18.996319Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"97b47c491108199","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-01T18:33:21.708658Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.210:2380/version","remote-member-id":"97b47c491108199","error":"Get \"https://192.168.39.210:2380/version\": dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-01T18:33:21.708916Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"97b47c491108199","error":"Get \"https://192.168.39.210:2380/version\": dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-01T18:33:23.996931Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"97b47c491108199","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-01T18:33:23.997047Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"97b47c491108199","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"info","ts":"2024-04-01T18:33:25.301233Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:33:25.30137Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2c3239b60c033d0c","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:33:25.301644Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"2c3239b60c033d0c","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:33:25.308592Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"2c3239b60c033d0c","to":"97b47c491108199","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-01T18:33:25.308866Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"2c3239b60c033d0c","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:33:25.316623Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"2c3239b60c033d0c","to":"97b47c491108199","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-01T18:33:25.316676Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"2c3239b60c033d0c","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:33:26.371223Z","caller":"traceutil/trace.go:171","msg":"trace[1734470293] transaction","detail":"{read_only:false; response_revision:2350; number_of_response:1; }","duration":"135.320178ms","start":"2024-04-01T18:33:26.235865Z","end":"2024-04-01T18:33:26.371185Z","steps":["trace[1734470293] 'process raft request'  (duration: 135.201475ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T18:33:28.998675Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"97b47c491108199","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	
	
	==> etcd [8471f59f3de235b71fe57e79412f27884ceb62d668027d7fe3730009d2fbb1fa] <==
	{"level":"warn","ts":"2024-04-01T18:30:09.321813Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.252588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-04-01T18:30:09.330834Z","caller":"traceutil/trace.go:171","msg":"trace[864783224] range","detail":"{range_begin:/registry/ingress/; range_end:/registry/ingress0; }","duration":"131.283752ms","start":"2024-04-01T18:30:09.199542Z","end":"2024-04-01T18:30:09.330826Z","steps":["trace[864783224] 'agreement among raft nodes before linearized reading'  (duration: 122.267694ms)"],"step_count":1}
	2024/04/01 18:30:09 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-01T18:30:09.324366Z","caller":"traceutil/trace.go:171","msg":"trace[1403237565] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; }","duration":"117.617558ms","start":"2024-04-01T18:30:09.206714Z","end":"2024-04-01T18:30:09.324332Z","steps":["trace[1403237565] 'agreement among raft nodes before linearized reading'  (duration: 115.080328ms)"],"step_count":1}
	2024/04/01 18:30:09 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-01T18:30:09.394326Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.74:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-01T18:30:09.394448Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.74:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-01T18:30:09.396076Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"2c3239b60c033d0c","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-01T18:30:09.396452Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"7d555fa605d0a4f8"}
	{"level":"info","ts":"2024-04-01T18:30:09.396526Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7d555fa605d0a4f8"}
	{"level":"info","ts":"2024-04-01T18:30:09.396549Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7d555fa605d0a4f8"}
	{"level":"info","ts":"2024-04-01T18:30:09.396635Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8"}
	{"level":"info","ts":"2024-04-01T18:30:09.396702Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8"}
	{"level":"info","ts":"2024-04-01T18:30:09.396736Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8"}
	{"level":"info","ts":"2024-04-01T18:30:09.396746Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"7d555fa605d0a4f8"}
	{"level":"info","ts":"2024-04-01T18:30:09.396754Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:30:09.396762Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:30:09.396808Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:30:09.396948Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"2c3239b60c033d0c","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:30:09.397057Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2c3239b60c033d0c","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:30:09.397134Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"2c3239b60c033d0c","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:30:09.397173Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:30:09.400096Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.74:2380"}
	{"level":"info","ts":"2024-04-01T18:30:09.400301Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.74:2380"}
	{"level":"info","ts":"2024-04-01T18:30:09.400352Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-293078","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.74:2380"],"advertise-client-urls":["https://192.168.39.74:2379"]}
	
	
	==> kernel <==
	 18:34:18 up 14 min,  0 users,  load average: 1.10, 0.83, 0.46
	Linux ha-293078 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4e7606a1741f035c4106a889012cf5db5431ac4a2e1390cf5fa25faf62a34ea9] <==
	I0401 18:33:42.304881       1 main.go:250] Node ha-293078-m04 has CIDR [10.244.3.0/24] 
	I0401 18:33:52.323678       1 main.go:223] Handling node with IPs: map[192.168.39.74:{}]
	I0401 18:33:52.323765       1 main.go:227] handling current node
	I0401 18:33:52.323782       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:33:52.323791       1 main.go:250] Node ha-293078-m02 has CIDR [10.244.1.0/24] 
	I0401 18:33:52.323957       1 main.go:223] Handling node with IPs: map[192.168.39.210:{}]
	I0401 18:33:52.323966       1 main.go:250] Node ha-293078-m03 has CIDR [10.244.2.0/24] 
	I0401 18:33:52.324514       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0401 18:33:52.324565       1 main.go:250] Node ha-293078-m04 has CIDR [10.244.3.0/24] 
	I0401 18:34:02.337195       1 main.go:223] Handling node with IPs: map[192.168.39.74:{}]
	I0401 18:34:02.337229       1 main.go:227] handling current node
	I0401 18:34:02.337241       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:34:02.337247       1 main.go:250] Node ha-293078-m02 has CIDR [10.244.1.0/24] 
	I0401 18:34:02.337353       1 main.go:223] Handling node with IPs: map[192.168.39.210:{}]
	I0401 18:34:02.337358       1 main.go:250] Node ha-293078-m03 has CIDR [10.244.2.0/24] 
	I0401 18:34:02.337492       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0401 18:34:02.337500       1 main.go:250] Node ha-293078-m04 has CIDR [10.244.3.0/24] 
	I0401 18:34:12.356517       1 main.go:223] Handling node with IPs: map[192.168.39.74:{}]
	I0401 18:34:12.356584       1 main.go:227] handling current node
	I0401 18:34:12.356614       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:34:12.356622       1 main.go:250] Node ha-293078-m02 has CIDR [10.244.1.0/24] 
	I0401 18:34:12.356790       1 main.go:223] Handling node with IPs: map[192.168.39.210:{}]
	I0401 18:34:12.356828       1 main.go:250] Node ha-293078-m03 has CIDR [10.244.2.0/24] 
	I0401 18:34:12.356895       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0401 18:34:12.356901       1 main.go:250] Node ha-293078-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [a675dbcdc9748f6386a2b82398770ad55d46e03815ede9d9d26e8a7b1ccbdc69] <==
	I0401 18:31:43.476892       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0401 18:31:43.477056       1 main.go:107] hostIP = 192.168.39.74
	podIP = 192.168.39.74
	I0401 18:31:43.477316       1 main.go:116] setting mtu 1500 for CNI 
	I0401 18:31:43.477460       1 main.go:146] kindnetd IP family: "ipv4"
	I0401 18:31:43.477495       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0401 18:31:43.783833       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0401 18:31:46.837942       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0401 18:31:49.910495       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0401 18:31:52.982549       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0401 18:31:56.055040       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [8a272a73055d5fd196829e20e75ef8aafb0df5ae5f665312afc9e839c52f7766] <==
	I0401 18:31:48.810878       1 options.go:222] external host was not specified, using 192.168.39.74
	I0401 18:31:48.812255       1 server.go:148] Version: v1.29.3
	I0401 18:31:48.812313       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 18:31:49.647620       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0401 18:31:49.655029       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0401 18:31:49.655077       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0401 18:31:49.655302       1 instance.go:297] Using reconciler: lease
	W0401 18:32:09.651826       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0401 18:32:09.651826       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0401 18:32:09.656963       1 instance.go:290] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [f922d350a52c3b48d57f86e85d8225b11fcc916d1dd95577c4f5fe5d3757c986] <==
	I0401 18:32:32.507186       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0401 18:32:32.507826       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0401 18:32:32.507874       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0401 18:32:32.507932       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0401 18:32:32.508047       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0401 18:32:32.579931       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 18:32:32.590577       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0401 18:32:32.593576       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0401 18:32:32.594366       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0401 18:32:32.594438       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0401 18:32:32.594525       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0401 18:32:32.598315       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0401 18:32:32.599066       1 shared_informer.go:318] Caches are synced for configmaps
	I0401 18:32:32.608192       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0401 18:32:32.608289       1 aggregator.go:165] initial CRD sync complete...
	I0401 18:32:32.608301       1 autoregister_controller.go:141] Starting autoregister controller
	I0401 18:32:32.608308       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0401 18:32:32.608315       1 cache.go:39] Caches are synced for autoregister controller
	W0401 18:32:32.624144       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.161 192.168.39.210]
	I0401 18:32:32.625709       1 controller.go:624] quota admission added evaluator for: endpoints
	I0401 18:32:32.658269       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0401 18:32:32.666504       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0401 18:32:33.503248       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0401 18:32:33.993371       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.161 192.168.39.210 192.168.39.74]
	W0401 18:32:43.996241       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.161 192.168.39.74]
	
	
	==> kube-controller-manager [d1def46fae9a1e3494c6e79f3f6224d4b4ff1e4a487370fa491a92924c0622b6] <==
	I0401 18:31:49.565363       1 serving.go:380] Generated self-signed cert in-memory
	I0401 18:31:50.054082       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0401 18:31:50.054187       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 18:31:50.056150       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0401 18:31:50.056460       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0401 18:31:50.056726       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0401 18:31:50.056849       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0401 18:32:10.664253       1 controllermanager.go:232] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.74:8443/healthz\": dial tcp 192.168.39.74:8443: connect: connection refused"
	
	
	==> kube-controller-manager [e4cbd0e1fa74f9a0bf6ac1fcafa74e7cc52ea84d7f7d3216ffa34610961bb64b] <==
	I0401 18:32:45.664355       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0401 18:32:45.669745       1 shared_informer.go:318] Caches are synced for PV protection
	I0401 18:32:45.684639       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0401 18:32:45.745745       1 shared_informer.go:318] Caches are synced for resource quota
	I0401 18:32:45.785656       1 shared_informer.go:318] Caches are synced for resource quota
	I0401 18:32:46.088003       1 shared_informer.go:318] Caches are synced for garbage collector
	I0401 18:32:46.088047       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0401 18:32:46.145752       1 shared_informer.go:318] Caches are synced for garbage collector
	I0401 18:32:52.950519       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="52.636361ms"
	I0401 18:32:52.950634       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="48.078µs"
	I0401 18:33:04.851251       1 event.go:376] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
	I0401 18:33:04.861614       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="failed to update kube-dns-v92kk EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-v92kk\": the object has been modified; please apply your changes to the latest version and try again"
	I0401 18:33:04.861862       1 event.go:364] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"7b8d916e-1b34-4cc6-8e58-4312b8e59e96", APIVersion:"v1", ResourceVersion:"240", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-v92kk EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-v92kk": the object has been modified; please apply your changes to the latest version and try again
	I0401 18:33:04.862542       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="76.169169ms"
	I0401 18:33:04.863525       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="349.791µs"
	I0401 18:33:13.150340       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="10.553659ms"
	I0401 18:33:13.150695       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="46.652µs"
	I0401 18:33:28.408847       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="12.732604ms"
	I0401 18:33:28.409112       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="55.03µs"
	I0401 18:33:44.804965       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="failed to update kube-dns-v92kk EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-v92kk\": the object has been modified; please apply your changes to the latest version and try again"
	I0401 18:33:44.806717       1 event.go:376] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
	I0401 18:33:44.806761       1 event.go:364] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"7b8d916e-1b34-4cc6-8e58-4312b8e59e96", APIVersion:"v1", ResourceVersion:"240", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-v92kk EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-v92kk": the object has been modified; please apply your changes to the latest version and try again
	I0401 18:33:44.844912       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="72.694952ms"
	I0401 18:33:44.845057       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="71.273µs"
	I0401 18:34:09.279777       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-293078-m04"
	
	
	==> kube-proxy [8d7ab06dacb1f801ea9714513d3f23a0bad938d609fb9f291d0ec0c4903d8d6a] <==
	E0401 18:28:59.286576       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:29:02.357913       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-293078&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:29:02.358024       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:29:02.358120       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:29:02.358141       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-293078&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:29:02.357898       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:29:02.358267       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:29:08.503625       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-293078&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:29:08.503745       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-293078&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:29:08.503821       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:29:08.503874       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:29:08.503908       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:29:08.504162       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:29:17.718745       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-293078&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:29:17.718958       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-293078&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:29:20.790974       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:29:20.791118       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:29:20.791361       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:29:20.791479       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:29:36.150324       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-293078&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:29:36.150514       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-293078&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:29:42.296297       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:29:42.296711       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:29:45.366573       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:29:45.367179       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [d0ba4303bba7609a3982e28cc53c7c80afb21aadb86d498d9d4b5e6340e2d039] <==
	E0401 18:32:12.822355       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-293078\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0401 18:32:31.255519       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-293078\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0401 18:32:31.262545       1 server.go:1020] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0401 18:32:31.475669       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0401 18:32:31.475730       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 18:32:31.475758       1 server_others.go:168] "Using iptables Proxier"
	I0401 18:32:31.480667       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 18:32:31.481004       1 server.go:865] "Version info" version="v1.29.3"
	I0401 18:32:31.481053       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 18:32:31.483694       1 config.go:188] "Starting service config controller"
	I0401 18:32:31.498467       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0401 18:32:31.498544       1 config.go:97] "Starting endpoint slice config controller"
	I0401 18:32:31.498553       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0401 18:32:31.499212       1 config.go:315] "Starting node config controller"
	I0401 18:32:31.499257       1 shared_informer.go:311] Waiting for caches to sync for node config
	E0401 18:32:34.327023       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0401 18:32:34.328343       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:32:34.328725       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:32:34.328816       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:32:34.328882       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:32:34.328952       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-293078&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:32:34.329009       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-293078&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0401 18:32:35.298610       1 shared_informer.go:318] Caches are synced for service config
	I0401 18:32:35.799560       1 shared_informer.go:318] Caches are synced for node config
	I0401 18:32:35.899555       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6bd1ccbceec8c5056f450169f49c17acf202e064825e6c51a55ca89e591e25b5] <==
	E0401 18:30:06.000673       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0401 18:30:06.185594       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 18:30:06.185621       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0401 18:30:06.409792       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 18:30:06.409849       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0401 18:30:06.515055       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0401 18:30:06.515106       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0401 18:30:06.767717       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 18:30:06.767814       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0401 18:30:06.814304       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 18:30:06.814344       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0401 18:30:07.044190       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 18:30:07.044244       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0401 18:30:07.980831       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 18:30:07.980894       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0401 18:30:08.097289       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 18:30:08.097500       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0401 18:30:08.160852       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 18:30:08.161002       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0401 18:30:08.430145       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 18:30:08.430201       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0401 18:30:09.307710       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0401 18:30:09.307879       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0401 18:30:09.308074       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0401 18:30:09.308196       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [760c286bbb6db472837a632164ec1f41295aab88d45f26ad6be70fd606b5d770] <==
	W0401 18:32:27.251187       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: Get "https://192.168.39.74:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	E0401 18:32:27.251254       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.74:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	W0401 18:32:27.963570       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: Get "https://192.168.39.74:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	E0401 18:32:27.963661       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.74:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	W0401 18:32:28.743995       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.74:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	E0401 18:32:28.744096       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.74:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	W0401 18:32:28.861529       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: Get "https://192.168.39.74:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	E0401 18:32:28.861618       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.74:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	W0401 18:32:29.320939       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://192.168.39.74:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	E0401 18:32:29.321002       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.74:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	W0401 18:32:29.358778       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.74:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	E0401 18:32:29.358839       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.74:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	W0401 18:32:29.369482       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: Get "https://192.168.39.74:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	E0401 18:32:29.369542       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.74:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	W0401 18:32:29.417318       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.74:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	E0401 18:32:29.417525       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.74:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	W0401 18:32:29.740959       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: Get "https://192.168.39.74:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	E0401 18:32:29.741049       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.74:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	W0401 18:32:29.767908       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: Get "https://192.168.39.74:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	E0401 18:32:29.767955       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.74:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	W0401 18:32:32.521765       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 18:32:32.521841       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0401 18:32:32.521958       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 18:32:32.521999       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0401 18:32:41.268362       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 18:32:30 ha-293078 kubelet[1369]: I0401 18:32:30.931939    1369 scope.go:117] "RemoveContainer" containerID="d1def46fae9a1e3494c6e79f3f6224d4b4ff1e4a487370fa491a92924c0622b6"
	Apr 01 18:32:31 ha-293078 kubelet[1369]: I0401 18:32:31.253789    1369 status_manager.go:853] "Failed to get status for pod" podUID="111b7388841713ed3598aaf599c56758" pod="kube-system/kube-apiserver-ha-293078" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-293078\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Apr 01 18:32:34 ha-293078 kubelet[1369]: I0401 18:32:34.325828    1369 status_manager.go:853] "Failed to get status for pod" podUID="3d7c42eb-192e-4ae0-b5ae-0883ef5e740c" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Apr 01 18:32:34 ha-293078 kubelet[1369]: E0401 18:32:34.326321    1369 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-293078?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Apr 01 18:32:34 ha-293078 kubelet[1369]: W0401 18:32:34.326234    1369 reflector.go:539] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	Apr 01 18:32:34 ha-293078 kubelet[1369]: E0401 18:32:34.327487    1369 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	Apr 01 18:32:34 ha-293078 kubelet[1369]: E0401 18:32:34.327493    1369 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-293078.17c23b605eb31137\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-293078.17c23b605eb31137  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-293078,UID:111b7388841713ed3598aaf599c56758,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-293078,},FirstTimestamp:2024-04-01 18:28:13.445902647 +0000 UTC m=+443.736031169,LastTimestamp:2024-04-01 18:28:17.453083026 +0000 UTC m=+447.743211565,Count:2,Type:Warning,EventTime:0001-01-01 0
0:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-293078,}"
	Apr 01 18:32:34 ha-293078 kubelet[1369]: E0401 18:32:34.326486    1369 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-293078\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-293078?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Apr 01 18:32:40 ha-293078 kubelet[1369]: I0401 18:32:40.931523    1369 scope.go:117] "RemoveContainer" containerID="a675dbcdc9748f6386a2b82398770ad55d46e03815ede9d9d26e8a7b1ccbdc69"
	Apr 01 18:32:40 ha-293078 kubelet[1369]: I0401 18:32:40.998264    1369 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-7fdf7869d9-7tn8z" podStartSLOduration=547.021704281 podStartE2EDuration="9m7.998195716s" podCreationTimestamp="2024-04-01 18:23:33 +0000 UTC" firstStartedPulling="2024-04-01 18:23:33.75830649 +0000 UTC m=+164.048435015" lastFinishedPulling="2024-04-01 18:23:34.734797925 +0000 UTC m=+165.024926450" observedRunningTime="2024-04-01 18:23:35.720812093 +0000 UTC m=+166.010940626" watchObservedRunningTime="2024-04-01 18:32:40.998195716 +0000 UTC m=+711.288324256"
	Apr 01 18:32:43 ha-293078 kubelet[1369]: I0401 18:32:43.932290    1369 scope.go:117] "RemoveContainer" containerID="c748af70e7154a879fb419d898bb0eaa511a6797afb99199f8231d834dca19c4"
	Apr 01 18:32:49 ha-293078 kubelet[1369]: E0401 18:32:49.985890    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 18:32:49 ha-293078 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 18:32:49 ha-293078 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 18:32:49 ha-293078 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 18:32:49 ha-293078 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 18:33:17 ha-293078 kubelet[1369]: I0401 18:33:17.931976    1369 kubelet.go:1903] "Trying to delete pod" pod="kube-system/kube-vip-ha-293078" podUID="543de9ec-6f50-46b9-b6ec-f58964f81f12"
	Apr 01 18:33:17 ha-293078 kubelet[1369]: I0401 18:33:17.963524    1369 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-293078"
	Apr 01 18:33:18 ha-293078 kubelet[1369]: I0401 18:33:18.094754    1369 kubelet.go:1903] "Trying to delete pod" pod="kube-system/kube-vip-ha-293078" podUID="543de9ec-6f50-46b9-b6ec-f58964f81f12"
	Apr 01 18:33:44 ha-293078 kubelet[1369]: I0401 18:33:44.770088    1369 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-vip-ha-293078" podStartSLOduration=27.770003033 podStartE2EDuration="27.770003033s" podCreationTimestamp="2024-04-01 18:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-01 18:33:19.9568693 +0000 UTC m=+750.246997842" watchObservedRunningTime="2024-04-01 18:33:44.770003033 +0000 UTC m=+775.060131577"
	Apr 01 18:33:49 ha-293078 kubelet[1369]: E0401 18:33:49.981947    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 18:33:49 ha-293078 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 18:33:49 ha-293078 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 18:33:49 ha-293078 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 18:33:49 ha-293078 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 18:34:17.077477   34021 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18233-10493/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-293078 -n ha-293078
helpers_test.go:261: (dbg) Run:  kubectl --context ha-293078 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (373.39s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (142s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 stop -v=7 --alsologtostderr
E0401 18:35:15.900220   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-293078 stop -v=7 --alsologtostderr: exit status 82 (2m0.488279975s)

                                                
                                                
-- stdout --
	* Stopping node "ha-293078-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 18:34:37.627944   34408 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:34:37.628081   34408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:34:37.628090   34408 out.go:304] Setting ErrFile to fd 2...
	I0401 18:34:37.628095   34408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:34:37.628293   34408 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:34:37.628509   34408 out.go:298] Setting JSON to false
	I0401 18:34:37.628575   34408 mustload.go:65] Loading cluster: ha-293078
	I0401 18:34:37.628918   34408 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:34:37.629002   34408 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/config.json ...
	I0401 18:34:37.629172   34408 mustload.go:65] Loading cluster: ha-293078
	I0401 18:34:37.629292   34408 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:34:37.629320   34408 stop.go:39] StopHost: ha-293078-m04
	I0401 18:34:37.629767   34408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:34:37.629805   34408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:34:37.645324   34408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37167
	I0401 18:34:37.645836   34408 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:34:37.646403   34408 main.go:141] libmachine: Using API Version  1
	I0401 18:34:37.646434   34408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:34:37.646863   34408 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:34:37.649330   34408 out.go:177] * Stopping node "ha-293078-m04"  ...
	I0401 18:34:37.650806   34408 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0401 18:34:37.650846   34408 main.go:141] libmachine: (ha-293078-m04) Calling .DriverName
	I0401 18:34:37.651076   34408 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0401 18:34:37.651111   34408 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHHostname
	I0401 18:34:37.653948   34408 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:34:37.654348   34408 main.go:141] libmachine: (ha-293078-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:ec:c5", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:34:03 +0000 UTC Type:0 Mac:52:54:00:b5:ec:c5 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-293078-m04 Clientid:01:52:54:00:b5:ec:c5}
	I0401 18:34:37.654381   34408 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:34:37.654558   34408 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHPort
	I0401 18:34:37.654728   34408 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHKeyPath
	I0401 18:34:37.654892   34408 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHUsername
	I0401 18:34:37.655057   34408 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m04/id_rsa Username:docker}
	I0401 18:34:37.748936   34408 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0401 18:34:37.803158   34408 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0401 18:34:37.857248   34408 main.go:141] libmachine: Stopping "ha-293078-m04"...
	I0401 18:34:37.857286   34408 main.go:141] libmachine: (ha-293078-m04) Calling .GetState
	I0401 18:34:37.858973   34408 main.go:141] libmachine: (ha-293078-m04) Calling .Stop
	I0401 18:34:37.862330   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 0/120
	I0401 18:34:38.864457   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 1/120
	I0401 18:34:39.865721   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 2/120
	I0401 18:34:40.867346   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 3/120
	I0401 18:34:41.868809   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 4/120
	I0401 18:34:42.870757   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 5/120
	I0401 18:34:43.872079   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 6/120
	I0401 18:34:44.874104   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 7/120
	I0401 18:34:45.875502   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 8/120
	I0401 18:34:46.876868   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 9/120
	I0401 18:34:47.878955   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 10/120
	I0401 18:34:48.880210   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 11/120
	I0401 18:34:49.881543   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 12/120
	I0401 18:34:50.883454   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 13/120
	I0401 18:34:51.885205   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 14/120
	I0401 18:34:52.887164   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 15/120
	I0401 18:34:53.888518   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 16/120
	I0401 18:34:54.889764   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 17/120
	I0401 18:34:55.892003   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 18/120
	I0401 18:34:56.893305   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 19/120
	I0401 18:34:57.895338   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 20/120
	I0401 18:34:58.896631   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 21/120
	I0401 18:34:59.898018   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 22/120
	I0401 18:35:00.899358   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 23/120
	I0401 18:35:01.900682   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 24/120
	I0401 18:35:02.902694   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 25/120
	I0401 18:35:03.904330   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 26/120
	I0401 18:35:04.905697   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 27/120
	I0401 18:35:05.907084   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 28/120
	I0401 18:35:06.908557   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 29/120
	I0401 18:35:07.910841   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 30/120
	I0401 18:35:08.912081   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 31/120
	I0401 18:35:09.914011   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 32/120
	I0401 18:35:10.915328   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 33/120
	I0401 18:35:11.916940   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 34/120
	I0401 18:35:12.918608   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 35/120
	I0401 18:35:13.919901   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 36/120
	I0401 18:35:14.920944   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 37/120
	I0401 18:35:15.922152   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 38/120
	I0401 18:35:16.923231   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 39/120
	I0401 18:35:17.925400   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 40/120
	I0401 18:35:18.927045   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 41/120
	I0401 18:35:19.928247   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 42/120
	I0401 18:35:20.929565   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 43/120
	I0401 18:35:21.930999   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 44/120
	I0401 18:35:22.932888   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 45/120
	I0401 18:35:23.934271   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 46/120
	I0401 18:35:24.936247   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 47/120
	I0401 18:35:25.937551   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 48/120
	I0401 18:35:26.939626   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 49/120
	I0401 18:35:27.941502   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 50/120
	I0401 18:35:28.942760   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 51/120
	I0401 18:35:29.943946   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 52/120
	I0401 18:35:30.945289   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 53/120
	I0401 18:35:31.946480   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 54/120
	I0401 18:35:32.948449   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 55/120
	I0401 18:35:33.949636   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 56/120
	I0401 18:35:34.951282   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 57/120
	I0401 18:35:35.952523   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 58/120
	I0401 18:35:36.954656   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 59/120
	I0401 18:35:37.956525   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 60/120
	I0401 18:35:38.957748   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 61/120
	I0401 18:35:39.958922   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 62/120
	I0401 18:35:40.960294   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 63/120
	I0401 18:35:41.961861   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 64/120
	I0401 18:35:42.963695   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 65/120
	I0401 18:35:43.965429   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 66/120
	I0401 18:35:44.966871   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 67/120
	I0401 18:35:45.968751   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 68/120
	I0401 18:35:46.970039   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 69/120
	I0401 18:35:47.972040   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 70/120
	I0401 18:35:48.973334   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 71/120
	I0401 18:35:49.974990   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 72/120
	I0401 18:35:50.976387   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 73/120
	I0401 18:35:51.977934   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 74/120
	I0401 18:35:52.979255   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 75/120
	I0401 18:35:53.980672   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 76/120
	I0401 18:35:54.982026   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 77/120
	I0401 18:35:55.983268   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 78/120
	I0401 18:35:56.984560   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 79/120
	I0401 18:35:57.986629   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 80/120
	I0401 18:35:58.987951   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 81/120
	I0401 18:35:59.989305   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 82/120
	I0401 18:36:00.990566   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 83/120
	I0401 18:36:01.992176   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 84/120
	I0401 18:36:02.993444   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 85/120
	I0401 18:36:03.994762   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 86/120
	I0401 18:36:04.996335   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 87/120
	I0401 18:36:05.997657   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 88/120
	I0401 18:36:06.998847   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 89/120
	I0401 18:36:08.000873   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 90/120
	I0401 18:36:09.002014   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 91/120
	I0401 18:36:10.004031   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 92/120
	I0401 18:36:11.005303   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 93/120
	I0401 18:36:12.006584   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 94/120
	I0401 18:36:13.007853   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 95/120
	I0401 18:36:14.009118   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 96/120
	I0401 18:36:15.010353   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 97/120
	I0401 18:36:16.012731   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 98/120
	I0401 18:36:17.014104   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 99/120
	I0401 18:36:18.016012   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 100/120
	I0401 18:36:19.017487   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 101/120
	I0401 18:36:20.018969   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 102/120
	I0401 18:36:21.020468   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 103/120
	I0401 18:36:22.022075   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 104/120
	I0401 18:36:23.023905   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 105/120
	I0401 18:36:24.026050   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 106/120
	I0401 18:36:25.028099   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 107/120
	I0401 18:36:26.029886   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 108/120
	I0401 18:36:27.031994   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 109/120
	I0401 18:36:28.034084   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 110/120
	I0401 18:36:29.036264   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 111/120
	I0401 18:36:30.037828   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 112/120
	I0401 18:36:31.039030   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 113/120
	I0401 18:36:32.040337   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 114/120
	I0401 18:36:33.042259   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 115/120
	I0401 18:36:34.044334   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 116/120
	I0401 18:36:35.045878   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 117/120
	I0401 18:36:36.047164   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 118/120
	I0401 18:36:37.049368   34408 main.go:141] libmachine: (ha-293078-m04) Waiting for machine to stop 119/120
	I0401 18:36:38.050815   34408 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0401 18:36:38.050876   34408 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0401 18:36:38.052788   34408 out.go:177] 
	W0401 18:36:38.054267   34408 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0401 18:36:38.054284   34408 out.go:239] * 
	* 
	W0401 18:36:38.056912   34408 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 18:36:38.058361   34408 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-293078 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr: exit status 3 (18.961981249s)

                                                
                                                
-- stdout --
	ha-293078
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-293078-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-293078-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 18:36:38.119511   34726 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:36:38.119628   34726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:36:38.119637   34726 out.go:304] Setting ErrFile to fd 2...
	I0401 18:36:38.119642   34726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:36:38.119844   34726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:36:38.120002   34726 out.go:298] Setting JSON to false
	I0401 18:36:38.120035   34726 mustload.go:65] Loading cluster: ha-293078
	I0401 18:36:38.120152   34726 notify.go:220] Checking for updates...
	I0401 18:36:38.120449   34726 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:36:38.120466   34726 status.go:255] checking status of ha-293078 ...
	I0401 18:36:38.120871   34726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:36:38.120936   34726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:36:38.143383   34726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43107
	I0401 18:36:38.143807   34726 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:36:38.144322   34726 main.go:141] libmachine: Using API Version  1
	I0401 18:36:38.144344   34726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:36:38.144721   34726 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:36:38.144915   34726 main.go:141] libmachine: (ha-293078) Calling .GetState
	I0401 18:36:38.146621   34726 status.go:330] ha-293078 host status = "Running" (err=<nil>)
	I0401 18:36:38.146641   34726 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:36:38.146903   34726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:36:38.146933   34726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:36:38.160951   34726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34697
	I0401 18:36:38.161308   34726 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:36:38.161759   34726 main.go:141] libmachine: Using API Version  1
	I0401 18:36:38.161777   34726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:36:38.162093   34726 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:36:38.162267   34726 main.go:141] libmachine: (ha-293078) Calling .GetIP
	I0401 18:36:38.164904   34726 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:36:38.165395   34726 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:36:38.165418   34726 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:36:38.165581   34726 host.go:66] Checking if "ha-293078" exists ...
	I0401 18:36:38.165887   34726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:36:38.165927   34726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:36:38.180947   34726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40021
	I0401 18:36:38.181348   34726 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:36:38.181897   34726 main.go:141] libmachine: Using API Version  1
	I0401 18:36:38.181925   34726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:36:38.182245   34726 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:36:38.182436   34726 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:36:38.182632   34726 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:36:38.182662   34726 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:36:38.185638   34726 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:36:38.186107   34726 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:36:38.186127   34726 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:36:38.186409   34726 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:36:38.186554   34726 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:36:38.186724   34726 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:36:38.186859   34726 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:36:38.279507   34726 ssh_runner.go:195] Run: systemctl --version
	I0401 18:36:38.286850   34726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:36:38.306987   34726 kubeconfig.go:125] found "ha-293078" server: "https://192.168.39.254:8443"
	I0401 18:36:38.307018   34726 api_server.go:166] Checking apiserver status ...
	I0401 18:36:38.307047   34726 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:36:38.326965   34726 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5187/cgroup
	W0401 18:36:38.339841   34726 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 18:36:38.339885   34726 ssh_runner.go:195] Run: ls
	I0401 18:36:38.345017   34726 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0401 18:36:38.349259   34726 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0401 18:36:38.349277   34726 status.go:422] ha-293078 apiserver status = Running (err=<nil>)
	I0401 18:36:38.349286   34726 status.go:257] ha-293078 status: &{Name:ha-293078 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:36:38.349299   34726 status.go:255] checking status of ha-293078-m02 ...
	I0401 18:36:38.349639   34726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:36:38.349701   34726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:36:38.364069   34726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45029
	I0401 18:36:38.364519   34726 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:36:38.365028   34726 main.go:141] libmachine: Using API Version  1
	I0401 18:36:38.365052   34726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:36:38.365439   34726 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:36:38.365629   34726 main.go:141] libmachine: (ha-293078-m02) Calling .GetState
	I0401 18:36:38.367078   34726 status.go:330] ha-293078-m02 host status = "Running" (err=<nil>)
	I0401 18:36:38.367095   34726 host.go:66] Checking if "ha-293078-m02" exists ...
	I0401 18:36:38.367369   34726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:36:38.367400   34726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:36:38.381482   34726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40101
	I0401 18:36:38.381846   34726 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:36:38.382293   34726 main.go:141] libmachine: Using API Version  1
	I0401 18:36:38.382315   34726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:36:38.382660   34726 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:36:38.382855   34726 main.go:141] libmachine: (ha-293078-m02) Calling .GetIP
	I0401 18:36:38.385580   34726 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:36:38.385992   34726 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:31:55 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:36:38.386016   34726 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:36:38.386166   34726 host.go:66] Checking if "ha-293078-m02" exists ...
	I0401 18:36:38.386492   34726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:36:38.386539   34726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:36:38.401082   34726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39043
	I0401 18:36:38.401426   34726 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:36:38.401872   34726 main.go:141] libmachine: Using API Version  1
	I0401 18:36:38.401890   34726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:36:38.402179   34726 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:36:38.402395   34726 main.go:141] libmachine: (ha-293078-m02) Calling .DriverName
	I0401 18:36:38.402549   34726 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:36:38.402569   34726 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHHostname
	I0401 18:36:38.405072   34726 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:36:38.405433   34726 main.go:141] libmachine: (ha-293078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:7f:87", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:31:55 +0000 UTC Type:0 Mac:52:54:00:25:7f:87 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-293078-m02 Clientid:01:52:54:00:25:7f:87}
	I0401 18:36:38.405454   34726 main.go:141] libmachine: (ha-293078-m02) DBG | domain ha-293078-m02 has defined IP address 192.168.39.161 and MAC address 52:54:00:25:7f:87 in network mk-ha-293078
	I0401 18:36:38.405624   34726 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHPort
	I0401 18:36:38.405795   34726 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHKeyPath
	I0401 18:36:38.405933   34726 main.go:141] libmachine: (ha-293078-m02) Calling .GetSSHUsername
	I0401 18:36:38.406054   34726 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m02/id_rsa Username:docker}
	I0401 18:36:38.497631   34726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:36:38.514681   34726 kubeconfig.go:125] found "ha-293078" server: "https://192.168.39.254:8443"
	I0401 18:36:38.514715   34726 api_server.go:166] Checking apiserver status ...
	I0401 18:36:38.514755   34726 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:36:38.529560   34726 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup
	W0401 18:36:38.539237   34726 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 18:36:38.539282   34726 ssh_runner.go:195] Run: ls
	I0401 18:36:38.546454   34726 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0401 18:36:38.551413   34726 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0401 18:36:38.551434   34726 status.go:422] ha-293078-m02 apiserver status = Running (err=<nil>)
	I0401 18:36:38.551442   34726 status.go:257] ha-293078-m02 status: &{Name:ha-293078-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:36:38.551455   34726 status.go:255] checking status of ha-293078-m04 ...
	I0401 18:36:38.551848   34726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:36:38.551896   34726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:36:38.567725   34726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42909
	I0401 18:36:38.568195   34726 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:36:38.568640   34726 main.go:141] libmachine: Using API Version  1
	I0401 18:36:38.568661   34726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:36:38.568972   34726 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:36:38.569158   34726 main.go:141] libmachine: (ha-293078-m04) Calling .GetState
	I0401 18:36:38.570577   34726 status.go:330] ha-293078-m04 host status = "Running" (err=<nil>)
	I0401 18:36:38.570596   34726 host.go:66] Checking if "ha-293078-m04" exists ...
	I0401 18:36:38.570908   34726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:36:38.570941   34726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:36:38.585031   34726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41479
	I0401 18:36:38.585431   34726 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:36:38.585883   34726 main.go:141] libmachine: Using API Version  1
	I0401 18:36:38.585919   34726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:36:38.586234   34726 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:36:38.586432   34726 main.go:141] libmachine: (ha-293078-m04) Calling .GetIP
	I0401 18:36:38.588792   34726 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:36:38.589171   34726 main.go:141] libmachine: (ha-293078-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:ec:c5", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:34:03 +0000 UTC Type:0 Mac:52:54:00:b5:ec:c5 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-293078-m04 Clientid:01:52:54:00:b5:ec:c5}
	I0401 18:36:38.589196   34726 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:36:38.589343   34726 host.go:66] Checking if "ha-293078-m04" exists ...
	I0401 18:36:38.589610   34726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:36:38.589665   34726 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:36:38.604015   34726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0401 18:36:38.604425   34726 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:36:38.604874   34726 main.go:141] libmachine: Using API Version  1
	I0401 18:36:38.604907   34726 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:36:38.605198   34726 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:36:38.605378   34726 main.go:141] libmachine: (ha-293078-m04) Calling .DriverName
	I0401 18:36:38.605591   34726 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:36:38.605615   34726 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHHostname
	I0401 18:36:38.608174   34726 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:36:38.608555   34726 main.go:141] libmachine: (ha-293078-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:ec:c5", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:34:03 +0000 UTC Type:0 Mac:52:54:00:b5:ec:c5 Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:ha-293078-m04 Clientid:01:52:54:00:b5:ec:c5}
	I0401 18:36:38.608582   34726 main.go:141] libmachine: (ha-293078-m04) DBG | domain ha-293078-m04 has defined IP address 192.168.39.14 and MAC address 52:54:00:b5:ec:c5 in network mk-ha-293078
	I0401 18:36:38.608650   34726 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHPort
	I0401 18:36:38.608828   34726 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHKeyPath
	I0401 18:36:38.608982   34726 main.go:141] libmachine: (ha-293078-m04) Calling .GetSSHUsername
	I0401 18:36:38.609097   34726 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078-m04/id_rsa Username:docker}
	W0401 18:36:57.021847   34726 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.14:22: connect: no route to host
	W0401 18:36:57.021974   34726 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	E0401 18:36:57.021994   34726 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	I0401 18:36:57.022005   34726 status.go:257] ha-293078-m04 status: &{Name:ha-293078-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0401 18:36:57.022051   34726 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr" : exit status 3
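The stderr above shows what `minikube status` actually probes on each node: free space on /var over SSH (`df -h /var`), whether the kubelet unit is active (`systemctl is-active kubelet`), whether a kube-apiserver process exists (`pgrep`), and finally `/healthz` behind the HA virtual IP. The "unable to find freezer cgroup" warnings are non-fatal here; the check falls through to the healthz probe, which returns 200 for both control-plane nodes. For m04 the very first SSH dial fails with "no route to host", which is why that node is reported as Host:Error / Kubelet:Nonexistent and the command exits 3. The same probes can be replayed by hand; a minimal sketch using the IPs and key paths printed in the log (these values are specific to this CI run):

```bash
#!/usr/bin/env bash
# Replaying the per-node checks that `minikube status` performed, per the log above.
KEYDIR=/home/jenkins/minikube-integration/18233-10493/.minikube/machines

# Worker m04: the status command failed at this first step ("no route to host").
ssh -i "$KEYDIR/ha-293078-m04/id_rsa" -o ConnectTimeout=10 docker@192.168.39.14 \
    "df -h /var | awk 'NR==2{print \$5}'"

# Primary control-plane node: kubelet unit and apiserver process.
ssh -i "$KEYDIR/ha-293078/id_rsa" docker@192.168.39.74 \
    "sudo systemctl is-active kubelet && sudo pgrep -xnf 'kube-apiserver.*minikube.*'"

# API server health through the HA virtual IP (self-signed cert, hence -k).
curl -k https://192.168.39.254:8443/healthz
```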
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-293078 -n ha-293078
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-293078 logs -n 25: (1.862412064s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| ssh     | ha-293078 ssh -n ha-293078-m02 sudo cat                                          | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /home/docker/cp-test_ha-293078-m03_ha-293078-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-293078 cp ha-293078-m03:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04:/home/docker/cp-test_ha-293078-m03_ha-293078-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n ha-293078-m04 sudo cat                                          | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /home/docker/cp-test_ha-293078-m03_ha-293078-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-293078 cp testdata/cp-test.txt                                                | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-293078 cp ha-293078-m04:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3967030531/001/cp-test_ha-293078-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-293078 cp ha-293078-m04:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078:/home/docker/cp-test_ha-293078-m04_ha-293078.txt                       |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n ha-293078 sudo cat                                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /home/docker/cp-test_ha-293078-m04_ha-293078.txt                                 |           |         |                |                     |                     |
	| cp      | ha-293078 cp ha-293078-m04:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m02:/home/docker/cp-test_ha-293078-m04_ha-293078-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n ha-293078-m02 sudo cat                                          | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /home/docker/cp-test_ha-293078-m04_ha-293078-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-293078 cp ha-293078-m04:/home/docker/cp-test.txt                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m03:/home/docker/cp-test_ha-293078-m04_ha-293078-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n                                                                 | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | ha-293078-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-293078 ssh -n ha-293078-m03 sudo cat                                          | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC | 01 Apr 24 18:24 UTC |
	|         | /home/docker/cp-test_ha-293078-m04_ha-293078-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-293078 node stop m02 -v=7                                                     | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-293078 node start m02 -v=7                                                    | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:27 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-293078 -v=7                                                           | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | -p ha-293078 -v=7                                                                | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| start   | -p ha-293078 --wait=true -v=7                                                    | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:30 UTC | 01 Apr 24 18:34 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-293078                                                                | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:34 UTC |                     |
	| node    | ha-293078 node delete m03 -v=7                                                   | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:34 UTC | 01 Apr 24 18:34 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | ha-293078 stop -v=7                                                              | ha-293078 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 18:30:08
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 18:30:08.328548   32936 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:30:08.328682   32936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:30:08.328691   32936 out.go:304] Setting ErrFile to fd 2...
	I0401 18:30:08.328695   32936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:30:08.328888   32936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:30:08.329471   32936 out.go:298] Setting JSON to false
	I0401 18:30:08.330452   32936 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4360,"bootTime":1711991848,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 18:30:08.330508   32936 start.go:139] virtualization: kvm guest
	I0401 18:30:08.333069   32936 out.go:177] * [ha-293078] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 18:30:08.334723   32936 notify.go:220] Checking for updates...
	I0401 18:30:08.334742   32936 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 18:30:08.335962   32936 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 18:30:08.337333   32936 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 18:30:08.338593   32936 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:30:08.339835   32936 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 18:30:08.341116   32936 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 18:30:08.342730   32936 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:30:08.342815   32936 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 18:30:08.343193   32936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:30:08.343230   32936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:30:08.358182   32936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41541
	I0401 18:30:08.358571   32936 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:30:08.359183   32936 main.go:141] libmachine: Using API Version  1
	I0401 18:30:08.359207   32936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:30:08.359538   32936 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:30:08.359706   32936 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:30:08.394285   32936 out.go:177] * Using the kvm2 driver based on existing profile
	I0401 18:30:08.395478   32936 start.go:297] selected driver: kvm2
	I0401 18:30:08.395489   32936 start.go:901] validating driver "kvm2" against &{Name:ha-293078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.29.3 ClusterName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.14 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:30:08.395609   32936 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 18:30:08.395909   32936 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 18:30:08.395970   32936 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 18:30:08.410464   32936 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 18:30:08.411165   32936 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 18:30:08.411225   32936 cni.go:84] Creating CNI manager for ""
	I0401 18:30:08.411240   32936 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0401 18:30:08.411317   32936 start.go:340] cluster config:
	{Name:ha-293078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-293078 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.14 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:30:08.411883   32936 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 18:30:08.413744   32936 out.go:177] * Starting "ha-293078" primary control-plane node in "ha-293078" cluster
	I0401 18:30:08.415082   32936 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 18:30:08.415118   32936 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0401 18:30:08.415134   32936 cache.go:56] Caching tarball of preloaded images
	I0401 18:30:08.415223   32936 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 18:30:08.415238   32936 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0401 18:30:08.415384   32936 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/config.json ...
	I0401 18:30:08.415614   32936 start.go:360] acquireMachinesLock for ha-293078: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 18:30:08.415664   32936 start.go:364] duration metric: took 27.893µs to acquireMachinesLock for "ha-293078"
	I0401 18:30:08.415682   32936 start.go:96] Skipping create...Using existing machine configuration
	I0401 18:30:08.415692   32936 fix.go:54] fixHost starting: 
	I0401 18:30:08.416103   32936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:30:08.416140   32936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:30:08.430119   32936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36443
	I0401 18:30:08.430545   32936 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:30:08.430971   32936 main.go:141] libmachine: Using API Version  1
	I0401 18:30:08.430995   32936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:30:08.431350   32936 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:30:08.431553   32936 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:30:08.431730   32936 main.go:141] libmachine: (ha-293078) Calling .GetState
	I0401 18:30:08.433398   32936 fix.go:112] recreateIfNeeded on ha-293078: state=Running err=<nil>
	W0401 18:30:08.433417   32936 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 18:30:08.435323   32936 out.go:177] * Updating the running kvm2 "ha-293078" VM ...
	I0401 18:30:08.436666   32936 machine.go:94] provisionDockerMachine start ...
	I0401 18:30:08.436683   32936 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:30:08.436839   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:30:08.439389   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:08.439886   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:30:08.439916   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:08.440007   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:30:08.440182   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:30:08.440324   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:30:08.440466   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:30:08.440598   32936 main.go:141] libmachine: Using SSH client type: native
	I0401 18:30:08.440769   32936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0401 18:30:08.440789   32936 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 18:30:08.551483   32936 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-293078
	
	I0401 18:30:08.551506   32936 main.go:141] libmachine: (ha-293078) Calling .GetMachineName
	I0401 18:30:08.551767   32936 buildroot.go:166] provisioning hostname "ha-293078"
	I0401 18:30:08.551788   32936 main.go:141] libmachine: (ha-293078) Calling .GetMachineName
	I0401 18:30:08.551955   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:30:08.554622   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:08.555073   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:30:08.555098   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:08.555239   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:30:08.555425   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:30:08.555582   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:30:08.555719   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:30:08.555888   32936 main.go:141] libmachine: Using SSH client type: native
	I0401 18:30:08.556038   32936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0401 18:30:08.556051   32936 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-293078 && echo "ha-293078" | sudo tee /etc/hostname
	I0401 18:30:08.682841   32936 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-293078
	
	I0401 18:30:08.682876   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:30:08.685357   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:08.685815   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:30:08.685848   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:08.686084   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:30:08.686282   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:30:08.686435   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:30:08.686558   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:30:08.686706   32936 main.go:141] libmachine: Using SSH client type: native
	I0401 18:30:08.686898   32936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0401 18:30:08.686920   32936 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-293078' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-293078/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-293078' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 18:30:08.800236   32936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 18:30:08.800262   32936 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 18:30:08.800286   32936 buildroot.go:174] setting up certificates
	I0401 18:30:08.800297   32936 provision.go:84] configureAuth start
	I0401 18:30:08.800308   32936 main.go:141] libmachine: (ha-293078) Calling .GetMachineName
	I0401 18:30:08.800595   32936 main.go:141] libmachine: (ha-293078) Calling .GetIP
	I0401 18:30:08.803046   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:08.803455   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:30:08.803483   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:08.803606   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:30:08.805731   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:08.806103   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:30:08.806125   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:08.806298   32936 provision.go:143] copyHostCerts
	I0401 18:30:08.806325   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 18:30:08.806357   32936 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 18:30:08.806369   32936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 18:30:08.806433   32936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 18:30:08.806506   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 18:30:08.806525   32936 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 18:30:08.806530   32936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 18:30:08.806555   32936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 18:30:08.806596   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 18:30:08.806613   32936 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 18:30:08.806616   32936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 18:30:08.806635   32936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 18:30:08.806680   32936 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.ha-293078 san=[127.0.0.1 192.168.39.74 ha-293078 localhost minikube]
	I0401 18:30:09.001766   32936 provision.go:177] copyRemoteCerts
	I0401 18:30:09.001818   32936 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 18:30:09.001838   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:30:09.004390   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:09.004720   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:30:09.004743   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:09.004911   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:30:09.005120   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:30:09.005264   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:30:09.005402   32936 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:30:09.089494   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0401 18:30:09.089551   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 18:30:09.119137   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0401 18:30:09.119231   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0401 18:30:09.147243   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0401 18:30:09.147320   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 18:30:09.177439   32936 provision.go:87] duration metric: took 377.129311ms to configureAuth
	I0401 18:30:09.177467   32936 buildroot.go:189] setting minikube options for container-runtime
	I0401 18:30:09.177751   32936 config.go:182] Loaded profile config "ha-293078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:30:09.177836   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:30:09.180340   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:09.180683   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:30:09.180709   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:30:09.180848   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:30:09.181039   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:30:09.181187   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:30:09.181372   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:30:09.181514   32936 main.go:141] libmachine: Using SSH client type: native
	I0401 18:30:09.181689   32936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0401 18:30:09.181705   32936 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 18:31:40.023679   32936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 18:31:40.023702   32936 machine.go:97] duration metric: took 1m31.587022348s to provisionDockerMachine
	I0401 18:31:40.023717   32936 start.go:293] postStartSetup for "ha-293078" (driver="kvm2")
	I0401 18:31:40.023731   32936 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 18:31:40.023750   32936 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:31:40.024117   32936 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 18:31:40.024163   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:31:40.027265   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:40.027757   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:31:40.027780   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:40.028000   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:31:40.028193   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:31:40.028346   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:31:40.028474   32936 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:31:40.116781   32936 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 18:31:40.121600   32936 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 18:31:40.121619   32936 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 18:31:40.121687   32936 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 18:31:40.121772   32936 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 18:31:40.121786   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> /etc/ssl/certs/177512.pem
	I0401 18:31:40.121893   32936 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 18:31:40.132702   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 18:31:40.159615   32936 start.go:296] duration metric: took 135.88365ms for postStartSetup
	I0401 18:31:40.159662   32936 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:31:40.159941   32936 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0401 18:31:40.159969   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:31:40.162351   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:40.162828   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:31:40.162853   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:40.163017   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:31:40.163203   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:31:40.163350   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:31:40.163504   32936 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	W0401 18:31:40.249692   32936 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0401 18:31:40.249713   32936 fix.go:56] duration metric: took 1m31.834022077s for fixHost
	I0401 18:31:40.249731   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:31:40.252348   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:40.252737   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:31:40.252766   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:40.252909   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:31:40.253094   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:31:40.253236   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:31:40.253372   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:31:40.253510   32936 main.go:141] libmachine: Using SSH client type: native
	I0401 18:31:40.253717   32936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.74 22 <nil> <nil>}
	I0401 18:31:40.253731   32936 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 18:31:40.363092   32936 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711996300.320265387
	
	I0401 18:31:40.363111   32936 fix.go:216] guest clock: 1711996300.320265387
	I0401 18:31:40.363119   32936 fix.go:229] Guest: 2024-04-01 18:31:40.320265387 +0000 UTC Remote: 2024-04-01 18:31:40.249719126 +0000 UTC m=+91.968465902 (delta=70.546261ms)
	I0401 18:31:40.363141   32936 fix.go:200] guest clock delta is within tolerance: 70.546261ms
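	The guest-clock lines above amount to reading the guest time (via "date +%s.%N", which the logger mangles into the %!s(MISSING) form), diffing it against the host time, and accepting the result when the skew stays under a tolerance. A minimal Go sketch of that comparison follows; the tolerance value and helper name are illustrative only, not minikube's actual internals:

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports whether the guest clock is close enough to the
	// host clock that no resync is needed. maxSkew is an assumed tolerance.
	func withinTolerance(host, guest time.Time, maxSkew time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= maxSkew
	}

	func main() {
		host := time.Date(2024, 4, 1, 18, 31, 40, 249719126, time.UTC)
		guest := time.Unix(1711996300, 320265387).UTC() // guest epoch time as logged above
		delta, ok := withinTolerance(host, guest, 2*time.Second)
		fmt.Printf("delta=%v within tolerance: %v\n", delta, ok) // prints roughly the 70ms delta seen in the log
	}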
	I0401 18:31:40.363146   32936 start.go:83] releasing machines lock for "ha-293078", held for 1m31.947470406s
	I0401 18:31:40.363162   32936 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:31:40.363430   32936 main.go:141] libmachine: (ha-293078) Calling .GetIP
	I0401 18:31:40.365970   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:40.366352   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:31:40.366372   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:40.366503   32936 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:31:40.366971   32936 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:31:40.367138   32936 main.go:141] libmachine: (ha-293078) Calling .DriverName
	I0401 18:31:40.367247   32936 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 18:31:40.367292   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:31:40.367403   32936 ssh_runner.go:195] Run: cat /version.json
	I0401 18:31:40.367424   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHHostname
	I0401 18:31:40.369986   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:40.370070   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:40.370377   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:31:40.370399   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:40.370491   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:31:40.370519   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:40.370572   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:31:40.370749   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:31:40.370757   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHPort
	I0401 18:31:40.370916   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:31:40.370923   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHKeyPath
	I0401 18:31:40.371069   32936 main.go:141] libmachine: (ha-293078) Calling .GetSSHUsername
	I0401 18:31:40.371068   32936 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:31:40.371192   32936 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/ha-293078/id_rsa Username:docker}
	I0401 18:31:40.476222   32936 ssh_runner.go:195] Run: systemctl --version
	I0401 18:31:40.483827   32936 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 18:31:40.652200   32936 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 18:31:40.665267   32936 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 18:31:40.665339   32936 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 18:31:40.675602   32936 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 18:31:40.675626   32936 start.go:494] detecting cgroup driver to use...
	I0401 18:31:40.675679   32936 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 18:31:40.693067   32936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 18:31:40.708406   32936 docker.go:217] disabling cri-docker service (if available) ...
	I0401 18:31:40.708453   32936 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 18:31:40.723032   32936 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 18:31:40.737652   32936 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 18:31:40.889245   32936 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 18:31:41.051711   32936 docker.go:233] disabling docker service ...
	I0401 18:31:41.051782   32936 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 18:31:41.070684   32936 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 18:31:41.085234   32936 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 18:31:41.262674   32936 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 18:31:41.437549   32936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 18:31:41.454173   32936 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 18:31:41.474597   32936 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 18:31:41.474653   32936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:31:41.487699   32936 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 18:31:41.487746   32936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:31:41.499784   32936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:31:41.512416   32936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:31:41.524485   32936 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 18:31:41.536802   32936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:31:41.548770   32936 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:31:41.560455   32936 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:31:41.572609   32936 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 18:31:41.583854   32936 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 18:31:41.595643   32936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:31:41.753486   32936 ssh_runner.go:195] Run: sudo systemctl restart crio
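	The block above rewrites /etc/crio/crio.conf.d/02-crio.conf with a series of sed one-liners (pause image, cgroup manager, conmon cgroup, the unprivileged-port sysctl) and then restarts CRI-O. As a sketch only, here is a rough Go equivalent of the two simplest rewrites; the regexes and path mirror the logged sed commands, but this is not minikube's implementation:

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"

		data, err := os.ReadFile(conf)
		if err != nil {
			log.Fatal(err)
		}

		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))

		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

		if err := os.WriteFile(conf, data, 0o644); err != nil {
			log.Fatal(err)
		}
	}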
	I0401 18:31:42.068409   32936 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 18:31:42.068477   32936 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 18:31:42.074821   32936 start.go:562] Will wait 60s for crictl version
	I0401 18:31:42.074861   32936 ssh_runner.go:195] Run: which crictl
	I0401 18:31:42.079345   32936 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 18:31:42.121401   32936 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 18:31:42.121461   32936 ssh_runner.go:195] Run: crio --version
	I0401 18:31:42.156264   32936 ssh_runner.go:195] Run: crio --version
	I0401 18:31:42.192329   32936 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 18:31:42.193931   32936 main.go:141] libmachine: (ha-293078) Calling .GetIP
	I0401 18:31:42.196617   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:42.197010   32936 main.go:141] libmachine: (ha-293078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:80:20", ip: ""} in network mk-ha-293078: {Iface:virbr1 ExpiryTime:2024-04-01 19:20:23 +0000 UTC Type:0 Mac:52:54:00:62:80:20 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:ha-293078 Clientid:01:52:54:00:62:80:20}
	I0401 18:31:42.197031   32936 main.go:141] libmachine: (ha-293078) DBG | domain ha-293078 has defined IP address 192.168.39.74 and MAC address 52:54:00:62:80:20 in network mk-ha-293078
	I0401 18:31:42.197288   32936 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 18:31:42.202923   32936 kubeadm.go:877] updating cluster {Name:ha-293078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.14 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 18:31:42.203061   32936 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 18:31:42.203126   32936 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 18:31:42.251001   32936 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 18:31:42.251025   32936 crio.go:433] Images already preloaded, skipping extraction
	I0401 18:31:42.251076   32936 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 18:31:42.289972   32936 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 18:31:42.289991   32936 cache_images.go:84] Images are preloaded, skipping loading
	I0401 18:31:42.290000   32936 kubeadm.go:928] updating node { 192.168.39.74 8443 v1.29.3 crio true true} ...
	I0401 18:31:42.290110   32936 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-293078 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.74
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 18:31:42.290186   32936 ssh_runner.go:195] Run: crio config
	I0401 18:31:42.345883   32936 cni.go:84] Creating CNI manager for ""
	I0401 18:31:42.345912   32936 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0401 18:31:42.345924   32936 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 18:31:42.345952   32936 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.74 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-293078 NodeName:ha-293078 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.74"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.74 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 18:31:42.346074   32936 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.74
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-293078"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.74
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.74"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 18:31:42.346094   32936 kube-vip.go:111] generating kube-vip config ...
	I0401 18:31:42.346131   32936 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0401 18:31:42.360102   32936 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0401 18:31:42.360226   32936 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0401 18:31:42.360298   32936 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 18:31:42.372276   32936 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 18:31:42.372349   32936 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0401 18:31:42.383818   32936 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0401 18:31:42.402739   32936 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 18:31:42.422117   32936 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0401 18:31:42.440824   32936 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0401 18:31:42.460166   32936 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0401 18:31:42.464722   32936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:31:42.627389   32936 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 18:31:42.676728   32936 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078 for IP: 192.168.39.74
	I0401 18:31:42.676758   32936 certs.go:194] generating shared ca certs ...
	I0401 18:31:42.676784   32936 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:31:42.676969   32936 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 18:31:42.677035   32936 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 18:31:42.677055   32936 certs.go:256] generating profile certs ...
	I0401 18:31:42.677160   32936 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/client.key
	I0401 18:31:42.677197   32936 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.ac3d735a
	I0401 18:31:42.677226   32936 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.ac3d735a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.74 192.168.39.161 192.168.39.210 192.168.39.254]
	I0401 18:31:42.855238   32936 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.ac3d735a ...
	I0401 18:31:42.855270   32936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.ac3d735a: {Name:mk2d663e7ba26a85f02cfee3721bf6eaa4fa35b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:31:42.855463   32936 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.ac3d735a ...
	I0401 18:31:42.855478   32936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.ac3d735a: {Name:mk92ff2516a96f808774f5c18d46850ca95c319a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:31:42.855572   32936 certs.go:381] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt.ac3d735a -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt
	I0401 18:31:42.855706   32936 certs.go:385] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key.ac3d735a -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key
	I0401 18:31:42.855832   32936 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key
	I0401 18:31:42.855862   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0401 18:31:42.855881   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0401 18:31:42.855901   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0401 18:31:42.855914   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0401 18:31:42.855927   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0401 18:31:42.855939   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0401 18:31:42.855954   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0401 18:31:42.855965   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0401 18:31:42.856006   32936 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 18:31:42.856036   32936 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 18:31:42.856045   32936 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 18:31:42.856064   32936 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 18:31:42.856084   32936 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 18:31:42.856106   32936 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 18:31:42.856176   32936 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 18:31:42.856200   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> /usr/share/ca-certificates/177512.pem
	I0401 18:31:42.856212   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:31:42.856224   32936 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem -> /usr/share/ca-certificates/17751.pem
	I0401 18:31:42.856755   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 18:31:42.886462   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 18:31:42.913811   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 18:31:42.949567   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 18:31:42.986901   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 18:31:43.013452   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 18:31:43.041543   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 18:31:43.068355   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/ha-293078/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 18:31:43.096848   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 18:31:43.135434   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 18:31:43.165791   32936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 18:31:43.192800   32936 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 18:31:43.211337   32936 ssh_runner.go:195] Run: openssl version
	I0401 18:31:43.217742   32936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 18:31:43.241809   32936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:31:43.250572   32936 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:31:43.250630   32936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:31:43.264937   32936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 18:31:43.279786   32936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 18:31:43.292270   32936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 18:31:43.297135   32936 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 18:31:43.297186   32936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 18:31:43.303987   32936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 18:31:43.314119   32936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 18:31:43.326277   32936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 18:31:43.332134   32936 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 18:31:43.332202   32936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 18:31:43.338753   32936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 18:31:43.349540   32936 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 18:31:43.354635   32936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 18:31:43.362891   32936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 18:31:43.369675   32936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 18:31:43.376388   32936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 18:31:43.382934   32936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 18:31:43.389539   32936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
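	Each of the openssl invocations above uses -checkend 86400, which exits non-zero only if the certificate will have expired 24 hours from now. A small Go sketch of the same check, assuming one of the logged certificate paths as input:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// mirroring "openssl x509 -noout -checkend <seconds>".
	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("expires within 24h:", soon)
	}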
	I0401 18:31:43.396196   32936 kubeadm.go:391] StartCluster: {Name:ha-293078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-293078 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.210 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.14 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:31:43.396308   32936 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 18:31:43.396363   32936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 18:31:43.459299   32936 cri.go:89] found id: "a675dbcdc9748f6386a2b82398770ad55d46e03815ede9d9d26e8a7b1ccbdc69"
	I0401 18:31:43.459322   32936 cri.go:89] found id: "83089bbda84e0857d3a6f634701946e11cd3a0e7facd446e8bd19a918aa3e3af"
	I0401 18:31:43.459328   32936 cri.go:89] found id: "b35085c638277df2d3d037d2003d5907adbfeca00f8d8e1cee4f59230a44e8aa"
	I0401 18:31:43.459332   32936 cri.go:89] found id: "d27910db04ffdc2a492a9a09511fc0ab6d4c80f4a897ccf7e48b017c277e9522"
	I0401 18:31:43.459335   32936 cri.go:89] found id: "53f1a82893f662e018743729a3b3bcb80f4eef69f6214b4ec74bc248829cbbc2"
	I0401 18:31:43.459338   32936 cri.go:89] found id: "28e71802f2d239a48bd313b15717cbd9276395c88536fea7e1d98fca1d21a38c"
	I0401 18:31:43.459340   32936 cri.go:89] found id: "be43b3abd52fcb26f579806533a081948a895cdd479befbbc9bd5446fdc060e9"
	I0401 18:31:43.459343   32936 cri.go:89] found id: "ce906a6132be484cf993679eea95d6637b9e3b3e9884820e95723b2b2c33e7e6"
	I0401 18:31:43.459345   32936 cri.go:89] found id: "8d7ab06dacb1f801ea9714513d3f23a0bad938d609fb9f291d0ec0c4903d8d6a"
	I0401 18:31:43.459350   32936 cri.go:89] found id: "c1af36287bacaf83243c8481c963e2cf6f3ec89e4ffb87b80a135b18652a2c9d"
	I0401 18:31:43.459353   32936 cri.go:89] found id: "9d9284db03ef8c515d8a7475c032ebbaa4d501954b6e1f5c383cdcdb3ebf6afb"
	I0401 18:31:43.459355   32936 cri.go:89] found id: "6bd1ccbceec8c5056f450169f49c17acf202e064825e6c51a55ca89e591e25b5"
	I0401 18:31:43.459358   32936 cri.go:89] found id: "8471f59f3de235b71fe57e79412f27884ceb62d668027d7fe3730009d2fbb1fa"
	I0401 18:31:43.459360   32936 cri.go:89] found id: "e36af39fdf13dd3cf98d2d4a8e7666aea913228d31de663d19c302848663d798"
	I0401 18:31:43.459365   32936 cri.go:89] found id: ""
	I0401 18:31:43.459409   32936 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.695758113Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711996617695714556,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=022162d9-a301-4445-8fa9-fabbaa530987 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.696374871Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9fa21f0a-83d7-46ef-a3de-945401e2c2f4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.696540154Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9fa21f0a-83d7-46ef-a3de-945401e2c2f4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.697047290Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f880f1f32f064d6f4d5fbba6a7e0fa85b4736d0a77363334299d84695997fc3d,PodSandboxId:da260fce1557d9db21f3100d3c6b5a6dd0189371c51d0d9faa0659ecc29f5eca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711996363950766867,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{io.kubernetes.container.hash: 245032af,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e7606a1741f035c4106a889012cf5db5431ac4a2e1390cf5fa25faf62a34ea9,PodSandboxId:88f19d546e8fac2c3ea8437bf72e612a2b907c5cea31ee8c7deb54e84bc3f710,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711996360950374053,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c24bf0f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cbd0e1fa74f9a0bf6ac1fcafa74e7cc52ea84d7f7d3216ffa34610961bb64b,PodSandboxId:33b0fc1f4bd7a36e0c8ae46c40a486bf79c0a94ec11325afccc90cbe8f9f2254,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711996350949798176,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f922d350a52c3b48d57f86e85d8225b11fcc916d1dd95577c4f5fe5d3757c986,PodSandboxId:7f6f6195913012dfa4bc213f4a58a4a72cc3c7f67aaab83cfc595d9222b1d890,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711996349948219848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,},Annotations:map[string]string{io.kubernetes.container.hash: 886f76f4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9a11dda6690123c36d59e2b56a84bd3e52ed833757b6fd4c6d8120bb7e46ba,PodSandboxId:fa2a91a3428e03ab7ef8014cb6b310ec8a127070255d1a44a2fbcf7339a44b19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711996341246662361,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,},Annotations:map[string]string{io.kubernetes.container.hash: 94944394,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a49a7917650045a9a22b204d79808b7124ca401e2d74faabc9b57e255fbd3c,PodSandboxId:925bf7ded7bbba806d1c4fb45d3bf0520d952ec80b99694f072306922e9b934f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711996320647022910,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897e54c6374ab0d6298432af511254b4,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:c748af70e7154a879fb419d898bb0eaa511a6797afb99199f8231d834dca19c4,PodSandboxId:da260fce1557d9db21f3100d3c6b5a6dd0189371c51d0d9faa0659ecc29f5eca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711996308896675087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{io.kubernetes.container.hash: 245032af,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:d0ba4303bba7609a3982e28cc53c7c80afb21aadb86d498d9d4b5e6340e2d039,PodSandboxId:09c3e4083c6da6744238462638563448d4c26d9611404139e6b94d0929544c7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711996308000913140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,},Annotations:map[string]string{io.kubernetes.container.hash: a09407a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6cbca
43e514d079299396ef1d62ccb2d276f802ead726a35dc01e00e35e334,PodSandboxId:433aada64602b49b6c6947765acf3602ebfaf6913ad2d55c12045a6b7810caa7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711996308265312358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,},Annotations:map[string]string{io.kubernetes.container.hash: 286c3144,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5142b30b613168527e4d6ffa1c4e84c977d97a5c7e7f2cd9e331db31875309a,PodSandboxId:07143304915bd30122d8826c98b4d101e0d042a6cd06e78c5acd637ff860f4e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711996308081027181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b0e9-460c-b191-9707f613af0a,},Annotations:map[string]string{io.kubernetes.container.hash: 48f6bb3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a272a73055d5fd196829e20e75ef8aafb0df5ae5f665312afc9e839c52f7766,PodSandboxId:7f6f6195913012dfa4bc213f4a58a4a72cc3c7f67aaab83cfc595d9222b1d890,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711996308080918210,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,},Annotations:map[string]string{io.kubernetes.container.hash: 886f76f4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1def46fae9a1e3494c6e79f3f6224d4b4ff1e4a487370fa491a92924c0622b6,PodSandboxId:33b0fc1f4bd7a36e0c8ae46c40a486bf79c0a94ec11325afccc90cbe8f9f2254,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711996307737973043,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:760c286bbb6db472837a632164ec1f41295aab88d45f26ad6be70fd606b5d770,PodSandboxId:9cb8873813e799abb80d9670bc16ce65e7c1b4aa4a41ae7da2eaedfe22ce9818,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711996307878499651,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4bb7bf172a9ff370e2374952c73ee9f7a9407d8fbe484fef1014a4f770ea75,PodSandboxId:f2022a163b51a03502db09ec40831846d3a7a7a044ce8967cb9611a92263c393,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711996307819829498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,},An
notations:map[string]string{io.kubernetes.container.hash: 5bcf3746,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a675dbcdc9748f6386a2b82398770ad55d46e03815ede9d9d26e8a7b1ccbdc69,PodSandboxId:88f19d546e8fac2c3ea8437bf72e612a2b907c5cea31ee8c7deb54e84bc3f710,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711996302919574959,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1c24bf0f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61d746cfabdcf1e527c0a0136c923d19be52285d3c766da6faaba4eb3b3c013d,PodSandboxId:d2ac86b05a9f4d146abfc431861426b75aa121e86155e33f6885c2287d35c2d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711995814759324620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,},Annotations:map[string]string{io.kuber
netes.container.hash: 94944394,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce906a6132be484cf993679eea95d6637b9e3b3e9884820e95723b2b2c33e7e6,PodSandboxId:184b6f8a0b09d310e6167558bc2e043f793ec8069ada3f99f07f8c4bf5bbe2a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711995665008792137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,},Annotations:map[string]string{io.kubernetes.container.hash: 286c3144,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be43b3abd52fcb26f579806533a081948a895cdd479befbbc9bd5446fdc060e9,PodSandboxId:f885d7f062d4925a0c12a93de7fab4a08ad786e7dc47a543daf4c046acd992d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711995665021082613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b0e9-460c-b191-9707f613af0a,},Annotations:map[string]string{io.kubernetes.container.hash: 48f6bb3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7ab06dacb1f801ea9714513d3f23a0bad938d609fb9f291d0ec0c4903d8d6a,PodSandboxId:849ffff6ee9e4b1fed8bc9e2950a7f2d227adf1318502c7d46a0e03e73165ca2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711995662809506933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,},Annotations:map[string]string{io.kubernetes.container.hash: a09407a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd1ccbceec8c5056f450169f49c17acf202e064825e6c51a55ca89e591e25b5,PodSandboxId:91aa9ea508a082ce745f620d0c3c5161f596f6efef8dca30ddfad2fdc5376338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16
b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711995642771289176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8471f59f3de235b71fe57e79412f27884ceb62d668027d7fe3730009d2fbb1fa,PodSandboxId:34af251b6243e69ca34eeeb959254863f3933b8142c33d2027be0d4f7647ea8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CON
TAINER_EXITED,CreatedAt:1711995642748101156,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 5bcf3746,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9fa21f0a-83d7-46ef-a3de-945401e2c2f4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.751111574Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f2cb228-7c81-416c-b62a-21eadf382f4c name=/runtime.v1.RuntimeService/Version
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.751215066Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f2cb228-7c81-416c-b62a-21eadf382f4c name=/runtime.v1.RuntimeService/Version
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.755246996Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=331485db-24d6-42e1-b5e4-f498aa0e3ed5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.755801431Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711996617755771169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=331485db-24d6-42e1-b5e4-f498aa0e3ed5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.757297784Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37eadfa3-da1b-4717-92af-98488a9b2a8f name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.758538891Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37eadfa3-da1b-4717-92af-98488a9b2a8f name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.760220212Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f880f1f32f064d6f4d5fbba6a7e0fa85b4736d0a77363334299d84695997fc3d,PodSandboxId:da260fce1557d9db21f3100d3c6b5a6dd0189371c51d0d9faa0659ecc29f5eca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711996363950766867,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{io.kubernetes.container.hash: 245032af,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e7606a1741f035c4106a889012cf5db5431ac4a2e1390cf5fa25faf62a34ea9,PodSandboxId:88f19d546e8fac2c3ea8437bf72e612a2b907c5cea31ee8c7deb54e84bc3f710,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711996360950374053,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c24bf0f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cbd0e1fa74f9a0bf6ac1fcafa74e7cc52ea84d7f7d3216ffa34610961bb64b,PodSandboxId:33b0fc1f4bd7a36e0c8ae46c40a486bf79c0a94ec11325afccc90cbe8f9f2254,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711996350949798176,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f922d350a52c3b48d57f86e85d8225b11fcc916d1dd95577c4f5fe5d3757c986,PodSandboxId:7f6f6195913012dfa4bc213f4a58a4a72cc3c7f67aaab83cfc595d9222b1d890,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711996349948219848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,},Annotations:map[string]string{io.kubernetes.container.hash: 886f76f4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9a11dda6690123c36d59e2b56a84bd3e52ed833757b6fd4c6d8120bb7e46ba,PodSandboxId:fa2a91a3428e03ab7ef8014cb6b310ec8a127070255d1a44a2fbcf7339a44b19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711996341246662361,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,},Annotations:map[string]string{io.kubernetes.container.hash: 94944394,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a49a7917650045a9a22b204d79808b7124ca401e2d74faabc9b57e255fbd3c,PodSandboxId:925bf7ded7bbba806d1c4fb45d3bf0520d952ec80b99694f072306922e9b934f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711996320647022910,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897e54c6374ab0d6298432af511254b4,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:c748af70e7154a879fb419d898bb0eaa511a6797afb99199f8231d834dca19c4,PodSandboxId:da260fce1557d9db21f3100d3c6b5a6dd0189371c51d0d9faa0659ecc29f5eca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711996308896675087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{io.kubernetes.container.hash: 245032af,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:d0ba4303bba7609a3982e28cc53c7c80afb21aadb86d498d9d4b5e6340e2d039,PodSandboxId:09c3e4083c6da6744238462638563448d4c26d9611404139e6b94d0929544c7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711996308000913140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,},Annotations:map[string]string{io.kubernetes.container.hash: a09407a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6cbca
43e514d079299396ef1d62ccb2d276f802ead726a35dc01e00e35e334,PodSandboxId:433aada64602b49b6c6947765acf3602ebfaf6913ad2d55c12045a6b7810caa7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711996308265312358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,},Annotations:map[string]string{io.kubernetes.container.hash: 286c3144,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5142b30b613168527e4d6ffa1c4e84c977d97a5c7e7f2cd9e331db31875309a,PodSandboxId:07143304915bd30122d8826c98b4d101e0d042a6cd06e78c5acd637ff860f4e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711996308081027181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b0e9-460c-b191-9707f613af0a,},Annotations:map[string]string{io.kubernetes.container.hash: 48f6bb3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a272a73055d5fd196829e20e75ef8aafb0df5ae5f665312afc9e839c52f7766,PodSandboxId:7f6f6195913012dfa4bc213f4a58a4a72cc3c7f67aaab83cfc595d9222b1d890,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711996308080918210,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,},Annotations:map[string]string{io.kubernetes.container.hash: 886f76f4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1def46fae9a1e3494c6e79f3f6224d4b4ff1e4a487370fa491a92924c0622b6,PodSandboxId:33b0fc1f4bd7a36e0c8ae46c40a486bf79c0a94ec11325afccc90cbe8f9f2254,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711996307737973043,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:760c286bbb6db472837a632164ec1f41295aab88d45f26ad6be70fd606b5d770,PodSandboxId:9cb8873813e799abb80d9670bc16ce65e7c1b4aa4a41ae7da2eaedfe22ce9818,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711996307878499651,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4bb7bf172a9ff370e2374952c73ee9f7a9407d8fbe484fef1014a4f770ea75,PodSandboxId:f2022a163b51a03502db09ec40831846d3a7a7a044ce8967cb9611a92263c393,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711996307819829498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,},An
notations:map[string]string{io.kubernetes.container.hash: 5bcf3746,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a675dbcdc9748f6386a2b82398770ad55d46e03815ede9d9d26e8a7b1ccbdc69,PodSandboxId:88f19d546e8fac2c3ea8437bf72e612a2b907c5cea31ee8c7deb54e84bc3f710,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711996302919574959,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1c24bf0f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61d746cfabdcf1e527c0a0136c923d19be52285d3c766da6faaba4eb3b3c013d,PodSandboxId:d2ac86b05a9f4d146abfc431861426b75aa121e86155e33f6885c2287d35c2d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711995814759324620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,},Annotations:map[string]string{io.kuber
netes.container.hash: 94944394,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce906a6132be484cf993679eea95d6637b9e3b3e9884820e95723b2b2c33e7e6,PodSandboxId:184b6f8a0b09d310e6167558bc2e043f793ec8069ada3f99f07f8c4bf5bbe2a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711995665008792137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,},Annotations:map[string]string{io.kubernetes.container.hash: 286c3144,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be43b3abd52fcb26f579806533a081948a895cdd479befbbc9bd5446fdc060e9,PodSandboxId:f885d7f062d4925a0c12a93de7fab4a08ad786e7dc47a543daf4c046acd992d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711995665021082613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b0e9-460c-b191-9707f613af0a,},Annotations:map[string]string{io.kubernetes.container.hash: 48f6bb3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7ab06dacb1f801ea9714513d3f23a0bad938d609fb9f291d0ec0c4903d8d6a,PodSandboxId:849ffff6ee9e4b1fed8bc9e2950a7f2d227adf1318502c7d46a0e03e73165ca2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711995662809506933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,},Annotations:map[string]string{io.kubernetes.container.hash: a09407a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd1ccbceec8c5056f450169f49c17acf202e064825e6c51a55ca89e591e25b5,PodSandboxId:91aa9ea508a082ce745f620d0c3c5161f596f6efef8dca30ddfad2fdc5376338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16
b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711995642771289176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8471f59f3de235b71fe57e79412f27884ceb62d668027d7fe3730009d2fbb1fa,PodSandboxId:34af251b6243e69ca34eeeb959254863f3933b8142c33d2027be0d4f7647ea8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CON
TAINER_EXITED,CreatedAt:1711995642748101156,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 5bcf3746,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37eadfa3-da1b-4717-92af-98488a9b2a8f name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.814650377Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=55a21848-a000-408e-bfa3-c3aecab35309 name=/runtime.v1.RuntimeService/Version
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.814820157Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=55a21848-a000-408e-bfa3-c3aecab35309 name=/runtime.v1.RuntimeService/Version
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.816227410Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=338b2ca4-afeb-4104-ab2c-7d409f2f577e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.816922384Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711996617816896447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=338b2ca4-afeb-4104-ab2c-7d409f2f577e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.818215112Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e98c304-c34a-4a25-8c99-ac64956c5969 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.818298862Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e98c304-c34a-4a25-8c99-ac64956c5969 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.818966935Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f880f1f32f064d6f4d5fbba6a7e0fa85b4736d0a77363334299d84695997fc3d,PodSandboxId:da260fce1557d9db21f3100d3c6b5a6dd0189371c51d0d9faa0659ecc29f5eca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711996363950766867,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{io.kubernetes.container.hash: 245032af,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e7606a1741f035c4106a889012cf5db5431ac4a2e1390cf5fa25faf62a34ea9,PodSandboxId:88f19d546e8fac2c3ea8437bf72e612a2b907c5cea31ee8c7deb54e84bc3f710,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711996360950374053,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c24bf0f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cbd0e1fa74f9a0bf6ac1fcafa74e7cc52ea84d7f7d3216ffa34610961bb64b,PodSandboxId:33b0fc1f4bd7a36e0c8ae46c40a486bf79c0a94ec11325afccc90cbe8f9f2254,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711996350949798176,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f922d350a52c3b48d57f86e85d8225b11fcc916d1dd95577c4f5fe5d3757c986,PodSandboxId:7f6f6195913012dfa4bc213f4a58a4a72cc3c7f67aaab83cfc595d9222b1d890,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711996349948219848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,},Annotations:map[string]string{io.kubernetes.container.hash: 886f76f4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9a11dda6690123c36d59e2b56a84bd3e52ed833757b6fd4c6d8120bb7e46ba,PodSandboxId:fa2a91a3428e03ab7ef8014cb6b310ec8a127070255d1a44a2fbcf7339a44b19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711996341246662361,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,},Annotations:map[string]string{io.kubernetes.container.hash: 94944394,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a49a7917650045a9a22b204d79808b7124ca401e2d74faabc9b57e255fbd3c,PodSandboxId:925bf7ded7bbba806d1c4fb45d3bf0520d952ec80b99694f072306922e9b934f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711996320647022910,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897e54c6374ab0d6298432af511254b4,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:c748af70e7154a879fb419d898bb0eaa511a6797afb99199f8231d834dca19c4,PodSandboxId:da260fce1557d9db21f3100d3c6b5a6dd0189371c51d0d9faa0659ecc29f5eca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711996308896675087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{io.kubernetes.container.hash: 245032af,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:d0ba4303bba7609a3982e28cc53c7c80afb21aadb86d498d9d4b5e6340e2d039,PodSandboxId:09c3e4083c6da6744238462638563448d4c26d9611404139e6b94d0929544c7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711996308000913140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,},Annotations:map[string]string{io.kubernetes.container.hash: a09407a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6cbca
43e514d079299396ef1d62ccb2d276f802ead726a35dc01e00e35e334,PodSandboxId:433aada64602b49b6c6947765acf3602ebfaf6913ad2d55c12045a6b7810caa7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711996308265312358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,},Annotations:map[string]string{io.kubernetes.container.hash: 286c3144,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5142b30b613168527e4d6ffa1c4e84c977d97a5c7e7f2cd9e331db31875309a,PodSandboxId:07143304915bd30122d8826c98b4d101e0d042a6cd06e78c5acd637ff860f4e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711996308081027181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b0e9-460c-b191-9707f613af0a,},Annotations:map[string]string{io.kubernetes.container.hash: 48f6bb3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a272a73055d5fd196829e20e75ef8aafb0df5ae5f665312afc9e839c52f7766,PodSandboxId:7f6f6195913012dfa4bc213f4a58a4a72cc3c7f67aaab83cfc595d9222b1d890,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711996308080918210,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,},Annotations:map[string]string{io.kubernetes.container.hash: 886f76f4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1def46fae9a1e3494c6e79f3f6224d4b4ff1e4a487370fa491a92924c0622b6,PodSandboxId:33b0fc1f4bd7a36e0c8ae46c40a486bf79c0a94ec11325afccc90cbe8f9f2254,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711996307737973043,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:760c286bbb6db472837a632164ec1f41295aab88d45f26ad6be70fd606b5d770,PodSandboxId:9cb8873813e799abb80d9670bc16ce65e7c1b4aa4a41ae7da2eaedfe22ce9818,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711996307878499651,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4bb7bf172a9ff370e2374952c73ee9f7a9407d8fbe484fef1014a4f770ea75,PodSandboxId:f2022a163b51a03502db09ec40831846d3a7a7a044ce8967cb9611a92263c393,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711996307819829498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,},An
notations:map[string]string{io.kubernetes.container.hash: 5bcf3746,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a675dbcdc9748f6386a2b82398770ad55d46e03815ede9d9d26e8a7b1ccbdc69,PodSandboxId:88f19d546e8fac2c3ea8437bf72e612a2b907c5cea31ee8c7deb54e84bc3f710,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711996302919574959,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1c24bf0f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61d746cfabdcf1e527c0a0136c923d19be52285d3c766da6faaba4eb3b3c013d,PodSandboxId:d2ac86b05a9f4d146abfc431861426b75aa121e86155e33f6885c2287d35c2d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711995814759324620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,},Annotations:map[string]string{io.kuber
netes.container.hash: 94944394,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce906a6132be484cf993679eea95d6637b9e3b3e9884820e95723b2b2c33e7e6,PodSandboxId:184b6f8a0b09d310e6167558bc2e043f793ec8069ada3f99f07f8c4bf5bbe2a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711995665008792137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,},Annotations:map[string]string{io.kubernetes.container.hash: 286c3144,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be43b3abd52fcb26f579806533a081948a895cdd479befbbc9bd5446fdc060e9,PodSandboxId:f885d7f062d4925a0c12a93de7fab4a08ad786e7dc47a543daf4c046acd992d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711995665021082613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b0e9-460c-b191-9707f613af0a,},Annotations:map[string]string{io.kubernetes.container.hash: 48f6bb3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7ab06dacb1f801ea9714513d3f23a0bad938d609fb9f291d0ec0c4903d8d6a,PodSandboxId:849ffff6ee9e4b1fed8bc9e2950a7f2d227adf1318502c7d46a0e03e73165ca2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711995662809506933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,},Annotations:map[string]string{io.kubernetes.container.hash: a09407a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd1ccbceec8c5056f450169f49c17acf202e064825e6c51a55ca89e591e25b5,PodSandboxId:91aa9ea508a082ce745f620d0c3c5161f596f6efef8dca30ddfad2fdc5376338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16
b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711995642771289176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8471f59f3de235b71fe57e79412f27884ceb62d668027d7fe3730009d2fbb1fa,PodSandboxId:34af251b6243e69ca34eeeb959254863f3933b8142c33d2027be0d4f7647ea8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CON
TAINER_EXITED,CreatedAt:1711995642748101156,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 5bcf3746,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e98c304-c34a-4a25-8c99-ac64956c5969 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.870139972Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ae9743b5-b9f5-4f65-a50b-6c077ef7e221 name=/runtime.v1.RuntimeService/Version
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.870208589Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ae9743b5-b9f5-4f65-a50b-6c077ef7e221 name=/runtime.v1.RuntimeService/Version
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.872002789Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c7cbb949-4859-4ab1-9aa4-0617c1cfe1a7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.872526952Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711996617872502146,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7cbb949-4859-4ab1-9aa4-0617c1cfe1a7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.873159612Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0f517c5-a9e4-488a-b86c-bce8ecad5000 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.873219545Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0f517c5-a9e4-488a-b86c-bce8ecad5000 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:36:57 ha-293078 crio[3842]: time="2024-04-01 18:36:57.873825020Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f880f1f32f064d6f4d5fbba6a7e0fa85b4736d0a77363334299d84695997fc3d,PodSandboxId:da260fce1557d9db21f3100d3c6b5a6dd0189371c51d0d9faa0659ecc29f5eca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711996363950766867,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{io.kubernetes.container.hash: 245032af,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e7606a1741f035c4106a889012cf5db5431ac4a2e1390cf5fa25faf62a34ea9,PodSandboxId:88f19d546e8fac2c3ea8437bf72e612a2b907c5cea31ee8c7deb54e84bc3f710,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711996360950374053,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c24bf0f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4cbd0e1fa74f9a0bf6ac1fcafa74e7cc52ea84d7f7d3216ffa34610961bb64b,PodSandboxId:33b0fc1f4bd7a36e0c8ae46c40a486bf79c0a94ec11325afccc90cbe8f9f2254,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711996350949798176,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f922d350a52c3b48d57f86e85d8225b11fcc916d1dd95577c4f5fe5d3757c986,PodSandboxId:7f6f6195913012dfa4bc213f4a58a4a72cc3c7f67aaab83cfc595d9222b1d890,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711996349948219848,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,},Annotations:map[string]string{io.kubernetes.container.hash: 886f76f4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c9a11dda6690123c36d59e2b56a84bd3e52ed833757b6fd4c6d8120bb7e46ba,PodSandboxId:fa2a91a3428e03ab7ef8014cb6b310ec8a127070255d1a44a2fbcf7339a44b19,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711996341246662361,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,},Annotations:map[string]string{io.kubernetes.container.hash: 94944394,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a49a7917650045a9a22b204d79808b7124ca401e2d74faabc9b57e255fbd3c,PodSandboxId:925bf7ded7bbba806d1c4fb45d3bf0520d952ec80b99694f072306922e9b934f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1711996320647022910,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 897e54c6374ab0d6298432af511254b4,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:c748af70e7154a879fb419d898bb0eaa511a6797afb99199f8231d834dca19c4,PodSandboxId:da260fce1557d9db21f3100d3c6b5a6dd0189371c51d0d9faa0659ecc29f5eca,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711996308896675087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d7c42eb-192e-4ae0-b5ae-0883ef5e740c,},Annotations:map[string]string{io.kubernetes.container.hash: 245032af,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:d0ba4303bba7609a3982e28cc53c7c80afb21aadb86d498d9d4b5e6340e2d039,PodSandboxId:09c3e4083c6da6744238462638563448d4c26d9611404139e6b94d0929544c7e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711996308000913140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,},Annotations:map[string]string{io.kubernetes.container.hash: a09407a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6cbca
43e514d079299396ef1d62ccb2d276f802ead726a35dc01e00e35e334,PodSandboxId:433aada64602b49b6c6947765acf3602ebfaf6913ad2d55c12045a6b7810caa7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711996308265312358,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,},Annotations:map[string]string{io.kubernetes.container.hash: 286c3144,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5142b30b613168527e4d6ffa1c4e84c977d97a5c7e7f2cd9e331db31875309a,PodSandboxId:07143304915bd30122d8826c98b4d101e0d042a6cd06e78c5acd637ff860f4e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711996308081027181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b0e9-460c-b191-9707f613af0a,},Annotations:map[string]string{io.kubernetes.container.hash: 48f6bb3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a272a73055d5fd196829e20e75ef8aafb0df5ae5f665312afc9e839c52f7766,PodSandboxId:7f6f6195913012dfa4bc213f4a58a4a72cc3c7f67aaab83cfc595d9222b1d890,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711996308080918210,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-293078,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 111b7388841713ed3598aaf599c56758,},Annotations:map[string]string{io.kubernetes.container.hash: 886f76f4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1def46fae9a1e3494c6e79f3f6224d4b4ff1e4a487370fa491a92924c0622b6,PodSandboxId:33b0fc1f4bd7a36e0c8ae46c40a486bf79c0a94ec11325afccc90cbe8f9f2254,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711996307737973043,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-293078,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 431f977c37ad2da28fe70e24f8f4cfb5,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:760c286bbb6db472837a632164ec1f41295aab88d45f26ad6be70fd606b5d770,PodSandboxId:9cb8873813e799abb80d9670bc16ce65e7c1b4aa4a41ae7da2eaedfe22ce9818,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711996307878499651,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a4bb7bf172a9ff370e2374952c73ee9f7a9407d8fbe484fef1014a4f770ea75,PodSandboxId:f2022a163b51a03502db09ec40831846d3a7a7a044ce8967cb9611a92263c393,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711996307819829498,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,},An
notations:map[string]string{io.kubernetes.container.hash: 5bcf3746,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a675dbcdc9748f6386a2b82398770ad55d46e03815ede9d9d26e8a7b1ccbdc69,PodSandboxId:88f19d546e8fac2c3ea8437bf72e612a2b907c5cea31ee8c7deb54e84bc3f710,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711996302919574959,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rjfcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f6ecc3-4bd0-406b-8096-ffd6115a2de3,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1c24bf0f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61d746cfabdcf1e527c0a0136c923d19be52285d3c766da6faaba4eb3b3c013d,PodSandboxId:d2ac86b05a9f4d146abfc431861426b75aa121e86155e33f6885c2287d35c2d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711995814759324620,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-7tn8z,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0cf87f47-0b2d-42b9-9aa6-e4e3736ca728,},Annotations:map[string]string{io.kuber
netes.container.hash: 94944394,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce906a6132be484cf993679eea95d6637b9e3b3e9884820e95723b2b2c33e7e6,PodSandboxId:184b6f8a0b09d310e6167558bc2e043f793ec8069ada3f99f07f8c4bf5bbe2a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711995665008792137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-8v456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28cf6a1d-90df-4802-ad3c-9c0276380a44,},Annotations:map[string]string{io.kubernetes.container.hash: 286c3144,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be43b3abd52fcb26f579806533a081948a895cdd479befbbc9bd5446fdc060e9,PodSandboxId:f885d7f062d4925a0c12a93de7fab4a08ad786e7dc47a543daf4c046acd992d8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711995665021082613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-sqxnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17868bbd-b0e9-460c-b191-9707f613af0a,},Annotations:map[string]string{io.kubernetes.container.hash: 48f6bb3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d7ab06dacb1f801ea9714513d3f23a0bad938d609fb9f291d0ec0c4903d8d6a,PodSandboxId:849ffff6ee9e4b1fed8bc9e2950a7f2d227adf1318502c7d46a0e03e73165ca2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711995662809506933,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5q2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 167db687-ac11-4f57-83c1-048c31a7b2cb,},Annotations:map[string]string{io.kubernetes.container.hash: a09407a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd1ccbceec8c5056f450169f49c17acf202e064825e6c51a55ca89e591e25b5,PodSandboxId:91aa9ea508a082ce745f620d0c3c5161f596f6efef8dca30ddfad2fdc5376338,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16
b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711995642771289176,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a552ff6182f687744d2f77e0ce85cc,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8471f59f3de235b71fe57e79412f27884ceb62d668027d7fe3730009d2fbb1fa,PodSandboxId:34af251b6243e69ca34eeeb959254863f3933b8142c33d2027be0d4f7647ea8b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CON
TAINER_EXITED,CreatedAt:1711995642748101156,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-293078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed3d89e46aa7fdf04d31b28a37841ad5,},Annotations:map[string]string{io.kubernetes.container.hash: 5bcf3746,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0f517c5-a9e4-488a-b86c-bce8ecad5000 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f880f1f32f064       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   da260fce1557d       storage-provisioner
	4e7606a1741f0       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      4 minutes ago       Running             kindnet-cni               3                   88f19d546e8fa       kindnet-rjfcj
	e4cbd0e1fa74f       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      4 minutes ago       Running             kube-controller-manager   2                   33b0fc1f4bd7a       kube-controller-manager-ha-293078
	f922d350a52c3       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      4 minutes ago       Running             kube-apiserver            3                   7f6f619591301       kube-apiserver-ha-293078
	7c9a11dda6690       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   fa2a91a3428e0       busybox-7fdf7869d9-7tn8z
	c6a49a7917650       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      4 minutes ago       Running             kube-vip                  0                   925bf7ded7bbb       kube-vip-ha-293078
	c748af70e7154       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   da260fce1557d       storage-provisioner
	ab6cbca43e514       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   433aada64602b       coredns-76f75df574-8v456
	f5142b30b6131       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   07143304915bd       coredns-76f75df574-sqxnb
	8a272a73055d5       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      5 minutes ago       Exited              kube-apiserver            2                   7f6f619591301       kube-apiserver-ha-293078
	d0ba4303bba76       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      5 minutes ago       Running             kube-proxy                1                   09c3e4083c6da       kube-proxy-l5q2p
	760c286bbb6db       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      5 minutes ago       Running             kube-scheduler            1                   9cb8873813e79       kube-scheduler-ha-293078
	2a4bb7bf172a9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   f2022a163b51a       etcd-ha-293078
	d1def46fae9a1       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      5 minutes ago       Exited              kube-controller-manager   1                   33b0fc1f4bd7a       kube-controller-manager-ha-293078
	a675dbcdc9748       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               2                   88f19d546e8fa       kindnet-rjfcj
	61d746cfabdcf       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   d2ac86b05a9f4       busybox-7fdf7869d9-7tn8z
	be43b3abd52fc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   f885d7f062d49       coredns-76f75df574-sqxnb
	ce906a6132be4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   184b6f8a0b09d       coredns-76f75df574-8v456
	8d7ab06dacb1f       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      15 minutes ago      Exited              kube-proxy                0                   849ffff6ee9e4       kube-proxy-l5q2p
	6bd1ccbceec8c       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      16 minutes ago      Exited              kube-scheduler            0                   91aa9ea508a08       kube-scheduler-ha-293078
	8471f59f3de23       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   34af251b6243e       etcd-ha-293078
	
	
	==> coredns [ab6cbca43e514d079299396ef1d62ccb2d276f802ead726a35dc01e00e35e334] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[372260125]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Apr-2024 18:31:55.002) (total time: 10001ms):
	Trace[372260125]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:32:05.003)
	Trace[372260125]: [10.001282714s] [10.001282714s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:39210->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:39210->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [be43b3abd52fcb26f579806533a081948a895cdd479befbbc9bd5446fdc060e9] <==
	[INFO] 10.244.0.4:48954 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004445287s
	[INFO] 10.244.0.4:41430 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00325614s
	[INFO] 10.244.0.4:43938 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000214694s
	[INFO] 10.244.0.4:55272 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150031s
	[INFO] 10.244.1.2:53484 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00036286s
	[INFO] 10.244.1.2:40882 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000191317s
	[INFO] 10.244.1.2:44362 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000231809s
	[INFO] 10.244.2.2:38878 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130983s
	[INFO] 10.244.2.2:55123 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000140829s
	[INFO] 10.244.2.2:60293 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000207687s
	[INFO] 10.244.2.2:42748 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000162463s
	[INFO] 10.244.0.4:51962 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171832s
	[INFO] 10.244.1.2:34522 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169219s
	[INFO] 10.244.1.2:45853 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000149138s
	[INFO] 10.244.0.4:34814 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000154553s
	[INFO] 10.244.1.2:51449 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125618s
	[INFO] 10.244.1.2:53188 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000205396s
	[INFO] 10.244.2.2:55517 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011978s
	[INFO] 10.244.2.2:58847 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014087s
	[INFO] 10.244.2.2:55721 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148617s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ce906a6132be484cf993679eea95d6637b9e3b3e9884820e95723b2b2c33e7e6] <==
	[INFO] 10.244.1.2:46630 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135925s
	[INFO] 10.244.2.2:37886 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147427s
	[INFO] 10.244.2.2:47974 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002026718s
	[INFO] 10.244.2.2:36742 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132507s
	[INFO] 10.244.2.2:60458 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001236853s
	[INFO] 10.244.0.4:36514 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079136s
	[INFO] 10.244.0.4:54146 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061884s
	[INFO] 10.244.0.4:48422 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049796s
	[INFO] 10.244.1.2:53602 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174827s
	[INFO] 10.244.1.2:52752 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123202s
	[INFO] 10.244.2.2:42824 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122778s
	[INFO] 10.244.2.2:39412 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138599s
	[INFO] 10.244.2.2:46213 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134624s
	[INFO] 10.244.2.2:41423 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104186s
	[INFO] 10.244.0.4:56317 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189039s
	[INFO] 10.244.0.4:49692 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000121271s
	[INFO] 10.244.0.4:55372 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000369332s
	[INFO] 10.244.1.2:44134 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000161425s
	[INFO] 10.244.1.2:45595 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000086429s
	[INFO] 10.244.2.2:52399 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233085s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f5142b30b613168527e4d6ffa1c4e84c977d97a5c7e7f2cd9e331db31875309a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:60656->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:58584->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:58584->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:60656->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-293078
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-293078
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=ha-293078
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_01T18_20_50_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 18:20:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-293078
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 18:36:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 18:32:34 +0000   Mon, 01 Apr 2024 18:20:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 18:32:34 +0000   Mon, 01 Apr 2024 18:20:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 18:32:34 +0000   Mon, 01 Apr 2024 18:20:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 18:32:34 +0000   Mon, 01 Apr 2024 18:21:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.74
	  Hostname:    ha-293078
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3e3b54c701944ac9af1db6484a71e599
	  System UUID:                3e3b54c7-0194-4ac9-af1d-b6484a71e599
	  Boot ID:                    7f2e19c7-2c6d-417a-9d2d-1c4d117eee25
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-7tn8z             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-76f75df574-8v456             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-76f75df574-sqxnb             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-293078                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-rjfcj                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-293078             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-293078    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-l5q2p                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-293078             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-293078                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 15m    kube-proxy       
	  Normal   Starting                 4m26s  kube-proxy       
	  Normal   NodeHasNoDiskPressure    16m    kubelet          Node ha-293078 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m    kubelet          Node ha-293078 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m    kubelet          Node ha-293078 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m    node-controller  Node ha-293078 event: Registered Node ha-293078 in Controller
	  Normal   NodeReady                15m    kubelet          Node ha-293078 status is now: NodeReady
	  Normal   RegisteredNode           14m    node-controller  Node ha-293078 event: Registered Node ha-293078 in Controller
	  Normal   RegisteredNode           13m    node-controller  Node ha-293078 event: Registered Node ha-293078 in Controller
	  Warning  ContainerGCFailed        6m9s   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m19s  node-controller  Node ha-293078 event: Registered Node ha-293078 in Controller
	  Normal   RegisteredNode           4m13s  node-controller  Node ha-293078 event: Registered Node ha-293078 in Controller
	  Normal   RegisteredNode           3m14s  node-controller  Node ha-293078 event: Registered Node ha-293078 in Controller
	
	
	Name:               ha-293078-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-293078-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=ha-293078
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_01T18_22_00_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 18:21:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-293078-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 18:36:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 18:35:32 +0000   Mon, 01 Apr 2024 18:35:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 18:35:32 +0000   Mon, 01 Apr 2024 18:35:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 18:35:32 +0000   Mon, 01 Apr 2024 18:35:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 18:35:32 +0000   Mon, 01 Apr 2024 18:35:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    ha-293078-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca6adfb154a0459d8158168bf9a31bb6
	  System UUID:                ca6adfb1-54a0-459d-8158-168bf9a31bb6
	  Boot ID:                    60ca700d-5f12-448f-8b63-f87c3d66ac34
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-ntbk4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-293078-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-f4djp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-293078-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-293078-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-8s2xk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-293078-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-293078-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 4m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-293078-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-293078-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-293078-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           15m                    node-controller  Node ha-293078-m02 event: Registered Node ha-293078-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-293078-m02 event: Registered Node ha-293078-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-293078-m02 event: Registered Node ha-293078-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-293078-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    4m52s (x8 over 4m52s)  kubelet          Node ha-293078-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m52s (x8 over 4m52s)  kubelet          Node ha-293078-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m52s (x7 over 4m52s)  kubelet          Node ha-293078-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-293078-m02 event: Registered Node ha-293078-m02 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-293078-m02 event: Registered Node ha-293078-m02 in Controller
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-293078-m02 event: Registered Node ha-293078-m02 in Controller
	  Normal  NodeNotReady             109s                   node-controller  Node ha-293078-m02 status is now: NodeNotReady
	
	
	Name:               ha-293078-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-293078-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=ha-293078
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_01T18_24_11_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 18:24:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-293078-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 18:34:29 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 01 Apr 2024 18:34:09 +0000   Mon, 01 Apr 2024 18:35:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 01 Apr 2024 18:34:09 +0000   Mon, 01 Apr 2024 18:35:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 01 Apr 2024 18:34:09 +0000   Mon, 01 Apr 2024 18:35:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 01 Apr 2024 18:34:09 +0000   Mon, 01 Apr 2024 18:35:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    ha-293078-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 071d9c818e6d4564a98e9da52a34ff25
	  System UUID:                071d9c81-8e6d-4564-a98e-9da52a34ff25
	  Boot ID:                    28fcd272-0f75-45e0-a431-29bc701fc638
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-drlq5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-qhwr4               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-49cqh            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m45s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-293078-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-293078-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-293078-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-293078-m04 event: Registered Node ha-293078-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-293078-m04 event: Registered Node ha-293078-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-293078-m04 event: Registered Node ha-293078-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-293078-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m19s                  node-controller  Node ha-293078-m04 event: Registered Node ha-293078-m04 in Controller
	  Normal   RegisteredNode           4m13s                  node-controller  Node ha-293078-m04 event: Registered Node ha-293078-m04 in Controller
	  Normal   RegisteredNode           3m14s                  node-controller  Node ha-293078-m04 event: Registered Node ha-293078-m04 in Controller
	  Normal   NodeHasSufficientMemory  2m49s (x2 over 2m49s)  kubelet          Node ha-293078-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m49s (x2 over 2m49s)  kubelet          Node ha-293078-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s (x2 over 2m49s)  kubelet          Node ha-293078-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m49s                  kubelet          Node ha-293078-m04 has been rebooted, boot id: 28fcd272-0f75-45e0-a431-29bc701fc638
	  Normal   NodeReady                2m49s                  kubelet          Node ha-293078-m04 status is now: NodeReady
	  Normal   NodeNotReady             109s (x2 over 3m39s)   node-controller  Node ha-293078-m04 status is now: NodeNotReady
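
The three node descriptions above are standard kubectl describe output: ha-293078-m04 carries unreachable taints with all conditions Unknown, ha-293078-m02 has a recent NodeNotReady event, and ha-293078 logged a ContainerGCFailed warning about a missing crio.sock (presumably while CRI-O was being restarted). A minimal way to re-check that state by hand, assuming the kubeconfig context carries the profile name ha-293078, would be:

  kubectl --context ha-293078 get nodes -o wide
  kubectl --context ha-293078 describe node ha-293078-m04 | grep -A2 Taints
  minikube -p ha-293078 ssh "sudo systemctl status crio"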
	
	
	==> dmesg <==
	[  +6.937253] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.062108] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066440] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.214972] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.138486] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.294622] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.757712] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +0.062342] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.163879] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.840426] kauditd_printk_skb: 57 callbacks suppressed
	[  +7.059574] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.076658] kauditd_printk_skb: 40 callbacks suppressed
	[Apr 1 18:21] kauditd_printk_skb: 21 callbacks suppressed
	[Apr 1 18:22] kauditd_printk_skb: 74 callbacks suppressed
	[Apr 1 18:28] kauditd_printk_skb: 1 callbacks suppressed
	[Apr 1 18:31] systemd-fstab-generator[3760]: Ignoring "noauto" option for root device
	[  +0.149145] systemd-fstab-generator[3773]: Ignoring "noauto" option for root device
	[  +0.193570] systemd-fstab-generator[3787]: Ignoring "noauto" option for root device
	[  +0.202601] systemd-fstab-generator[3799]: Ignoring "noauto" option for root device
	[  +0.316779] systemd-fstab-generator[3827]: Ignoring "noauto" option for root device
	[  +0.867340] systemd-fstab-generator[3929]: Ignoring "noauto" option for root device
	[  +4.872420] kauditd_printk_skb: 132 callbacks suppressed
	[Apr 1 18:32] kauditd_printk_skb: 87 callbacks suppressed
	[  +9.318872] kauditd_printk_skb: 2 callbacks suppressed
	[ +40.047733] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [2a4bb7bf172a9ff370e2374952c73ee9f7a9407d8fbe484fef1014a4f770ea75] <==
	{"level":"info","ts":"2024-04-01T18:33:25.30137Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2c3239b60c033d0c","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:33:25.301644Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"2c3239b60c033d0c","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:33:25.308592Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"2c3239b60c033d0c","to":"97b47c491108199","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-01T18:33:25.308866Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"2c3239b60c033d0c","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:33:25.316623Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"2c3239b60c033d0c","to":"97b47c491108199","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-01T18:33:25.316676Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"2c3239b60c033d0c","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:33:26.371223Z","caller":"traceutil/trace.go:171","msg":"trace[1734470293] transaction","detail":"{read_only:false; response_revision:2350; number_of_response:1; }","duration":"135.320178ms","start":"2024-04-01T18:33:26.235865Z","end":"2024-04-01T18:33:26.371185Z","steps":["trace[1734470293] 'process raft request'  (duration: 135.201475ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T18:33:28.998675Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"97b47c491108199","rtt":"0s","error":"dial tcp 192.168.39.210:2380: connect: connection refused"}
	{"level":"info","ts":"2024-04-01T18:34:23.566257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2c3239b60c033d0c switched to configuration voters=(3184671340552731916 9031229794428167416)"}
	{"level":"info","ts":"2024-04-01T18:34:23.570282Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"52192a60fe78b16d","local-member-id":"2c3239b60c033d0c","removed-remote-peer-id":"97b47c491108199","removed-remote-peer-urls":["https://192.168.39.210:2380"]}
	{"level":"info","ts":"2024-04-01T18:34:23.570447Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"97b47c491108199"}
	{"level":"warn","ts":"2024-04-01T18:34:23.570729Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:34:23.570795Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"97b47c491108199"}
	{"level":"warn","ts":"2024-04-01T18:34:23.570891Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:34:23.570942Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:34:23.571177Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"2c3239b60c033d0c","remote-peer-id":"97b47c491108199"}
	{"level":"warn","ts":"2024-04-01T18:34:23.571665Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2c3239b60c033d0c","remote-peer-id":"97b47c491108199","error":"context canceled"}
	{"level":"warn","ts":"2024-04-01T18:34:23.571711Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"97b47c491108199","error":"failed to read 97b47c491108199 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-04-01T18:34:23.571921Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2c3239b60c033d0c","remote-peer-id":"97b47c491108199"}
	{"level":"warn","ts":"2024-04-01T18:34:23.572099Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"2c3239b60c033d0c","remote-peer-id":"97b47c491108199","error":"context canceled"}
	{"level":"info","ts":"2024-04-01T18:34:23.572329Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"2c3239b60c033d0c","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:34:23.572353Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:34:23.572366Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"2c3239b60c033d0c","removed-remote-peer-id":"97b47c491108199"}
	{"level":"warn","ts":"2024-04-01T18:34:23.585633Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"2c3239b60c033d0c","remote-peer-id-stream-handler":"2c3239b60c033d0c","remote-peer-id-from":"97b47c491108199"}
	{"level":"warn","ts":"2024-04-01T18:34:23.603176Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.210:34650","server-name":"","error":"EOF"}
	
	
	==> etcd [8471f59f3de235b71fe57e79412f27884ceb62d668027d7fe3730009d2fbb1fa] <==
	{"level":"warn","ts":"2024-04-01T18:30:09.321813Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.252588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-04-01T18:30:09.330834Z","caller":"traceutil/trace.go:171","msg":"trace[864783224] range","detail":"{range_begin:/registry/ingress/; range_end:/registry/ingress0; }","duration":"131.283752ms","start":"2024-04-01T18:30:09.199542Z","end":"2024-04-01T18:30:09.330826Z","steps":["trace[864783224] 'agreement among raft nodes before linearized reading'  (duration: 122.267694ms)"],"step_count":1}
	2024/04/01 18:30:09 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-04-01T18:30:09.324366Z","caller":"traceutil/trace.go:171","msg":"trace[1403237565] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; }","duration":"117.617558ms","start":"2024-04-01T18:30:09.206714Z","end":"2024-04-01T18:30:09.324332Z","steps":["trace[1403237565] 'agreement among raft nodes before linearized reading'  (duration: 115.080328ms)"],"step_count":1}
	2024/04/01 18:30:09 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-01T18:30:09.394326Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.74:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-01T18:30:09.394448Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.74:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-01T18:30:09.396076Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"2c3239b60c033d0c","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-01T18:30:09.396452Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"7d555fa605d0a4f8"}
	{"level":"info","ts":"2024-04-01T18:30:09.396526Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7d555fa605d0a4f8"}
	{"level":"info","ts":"2024-04-01T18:30:09.396549Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7d555fa605d0a4f8"}
	{"level":"info","ts":"2024-04-01T18:30:09.396635Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8"}
	{"level":"info","ts":"2024-04-01T18:30:09.396702Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8"}
	{"level":"info","ts":"2024-04-01T18:30:09.396736Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"2c3239b60c033d0c","remote-peer-id":"7d555fa605d0a4f8"}
	{"level":"info","ts":"2024-04-01T18:30:09.396746Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"7d555fa605d0a4f8"}
	{"level":"info","ts":"2024-04-01T18:30:09.396754Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:30:09.396762Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:30:09.396808Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:30:09.396948Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"2c3239b60c033d0c","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:30:09.397057Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2c3239b60c033d0c","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:30:09.397134Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"2c3239b60c033d0c","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:30:09.397173Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"97b47c491108199"}
	{"level":"info","ts":"2024-04-01T18:30:09.400096Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.74:2380"}
	{"level":"info","ts":"2024-04-01T18:30:09.400301Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.74:2380"}
	{"level":"info","ts":"2024-04-01T18:30:09.400352Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-293078","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.74:2380"],"advertise-client-urls":["https://192.168.39.74:2379"]}
	
	
	==> kernel <==
	 18:36:58 up 16 min,  0 users,  load average: 0.15, 0.53, 0.40
	Linux ha-293078 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4e7606a1741f035c4106a889012cf5db5431ac4a2e1390cf5fa25faf62a34ea9] <==
	I0401 18:36:12.499182       1 main.go:250] Node ha-293078-m04 has CIDR [10.244.3.0/24] 
	I0401 18:36:22.511868       1 main.go:223] Handling node with IPs: map[192.168.39.74:{}]
	I0401 18:36:22.511913       1 main.go:227] handling current node
	I0401 18:36:22.511931       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:36:22.511938       1 main.go:250] Node ha-293078-m02 has CIDR [10.244.1.0/24] 
	I0401 18:36:22.512057       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0401 18:36:22.512062       1 main.go:250] Node ha-293078-m04 has CIDR [10.244.3.0/24] 
	I0401 18:36:32.529173       1 main.go:223] Handling node with IPs: map[192.168.39.74:{}]
	I0401 18:36:32.529272       1 main.go:227] handling current node
	I0401 18:36:32.529308       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:36:32.529331       1 main.go:250] Node ha-293078-m02 has CIDR [10.244.1.0/24] 
	I0401 18:36:32.529545       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0401 18:36:32.529587       1 main.go:250] Node ha-293078-m04 has CIDR [10.244.3.0/24] 
	I0401 18:36:42.536902       1 main.go:223] Handling node with IPs: map[192.168.39.74:{}]
	I0401 18:36:42.536920       1 main.go:227] handling current node
	I0401 18:36:42.536929       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:36:42.536934       1 main.go:250] Node ha-293078-m02 has CIDR [10.244.1.0/24] 
	I0401 18:36:42.537031       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0401 18:36:42.537073       1 main.go:250] Node ha-293078-m04 has CIDR [10.244.3.0/24] 
	I0401 18:36:52.548032       1 main.go:223] Handling node with IPs: map[192.168.39.74:{}]
	I0401 18:36:52.548078       1 main.go:227] handling current node
	I0401 18:36:52.548093       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:36:52.548099       1 main.go:250] Node ha-293078-m02 has CIDR [10.244.1.0/24] 
	I0401 18:36:52.548200       1 main.go:223] Handling node with IPs: map[192.168.39.14:{}]
	I0401 18:36:52.548206       1 main.go:250] Node ha-293078-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [a675dbcdc9748f6386a2b82398770ad55d46e03815ede9d9d26e8a7b1ccbdc69] <==
	I0401 18:31:43.476892       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0401 18:31:43.477056       1 main.go:107] hostIP = 192.168.39.74
	podIP = 192.168.39.74
	I0401 18:31:43.477316       1 main.go:116] setting mtu 1500 for CNI 
	I0401 18:31:43.477460       1 main.go:146] kindnetd IP family: "ipv4"
	I0401 18:31:43.477495       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0401 18:31:43.783833       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0401 18:31:46.837942       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0401 18:31:49.910495       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0401 18:31:52.982549       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0401 18:31:56.055040       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
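
This older kindnetd container panicked after exhausting its retries against the in-cluster apiserver VIP 10.96.0.1:443, apparently because the control plane was still coming back up at 18:31-18:32 (compare the kube-apiserver logs below); the newer container in the preceding section is running normally. To review a crashed instance like this one, the pod name kindnet-rjfcj from the node description above can be combined with --previous:

  kubectl --context ha-293078 -n kube-system get pods -o wide | grep kindnet
  kubectl --context ha-293078 -n kube-system logs kindnet-rjfcj --previous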
	
	
	==> kube-apiserver [8a272a73055d5fd196829e20e75ef8aafb0df5ae5f665312afc9e839c52f7766] <==
	I0401 18:31:48.810878       1 options.go:222] external host was not specified, using 192.168.39.74
	I0401 18:31:48.812255       1 server.go:148] Version: v1.29.3
	I0401 18:31:48.812313       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 18:31:49.647620       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0401 18:31:49.655029       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0401 18:31:49.655077       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0401 18:31:49.655302       1 instance.go:297] Using reconciler: lease
	W0401 18:32:09.651826       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0401 18:32:09.651826       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0401 18:32:09.656963       1 instance.go:290] Error creating leases: error creating storage factory: context deadline exceeded
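
This apiserver instance gave up because it never completed the TLS handshake with etcd on 127.0.0.1:2379 within its startup deadline; the replacement instance in the next section comes up at 18:32:32. Had the failure persisted, a first check from the guest, assuming crictl is available there as on the stock minikube ISO, would be to confirm the local etcd container is actually running:

  minikube -p ha-293078 ssh "sudo crictl ps -a --name etcd"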
	
	
	==> kube-apiserver [f922d350a52c3b48d57f86e85d8225b11fcc916d1dd95577c4f5fe5d3757c986] <==
	I0401 18:32:32.507186       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0401 18:32:32.507826       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0401 18:32:32.507874       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0401 18:32:32.507932       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0401 18:32:32.508047       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0401 18:32:32.579931       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 18:32:32.590577       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0401 18:32:32.593576       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0401 18:32:32.594366       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0401 18:32:32.594438       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0401 18:32:32.594525       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0401 18:32:32.598315       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0401 18:32:32.599066       1 shared_informer.go:318] Caches are synced for configmaps
	I0401 18:32:32.608192       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0401 18:32:32.608289       1 aggregator.go:165] initial CRD sync complete...
	I0401 18:32:32.608301       1 autoregister_controller.go:141] Starting autoregister controller
	I0401 18:32:32.608308       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0401 18:32:32.608315       1 cache.go:39] Caches are synced for autoregister controller
	W0401 18:32:32.624144       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.161 192.168.39.210]
	I0401 18:32:32.625709       1 controller.go:624] quota admission added evaluator for: endpoints
	I0401 18:32:32.658269       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0401 18:32:32.666504       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0401 18:32:33.503248       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0401 18:32:33.993371       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.161 192.168.39.210 192.168.39.74]
	W0401 18:32:43.996241       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.161 192.168.39.74]
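
The "Resetting endpoints for master service kubernetes" lines track which control-plane IPs back the default kubernetes Service: 192.168.39.210 (the removed ha-293078-m03) drops out by 18:32:43, leaving .161 (m02) and .74 (ha-293078). The current set can be read straight from the Endpoints object:

  kubectl --context ha-293078 -n default get endpoints kubernetes -o yaml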
	
	
	==> kube-controller-manager [d1def46fae9a1e3494c6e79f3f6224d4b4ff1e4a487370fa491a92924c0622b6] <==
	I0401 18:31:49.565363       1 serving.go:380] Generated self-signed cert in-memory
	I0401 18:31:50.054082       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0401 18:31:50.054187       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 18:31:50.056150       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0401 18:31:50.056460       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0401 18:31:50.056726       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0401 18:31:50.056849       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0401 18:32:10.664253       1 controllermanager.go:232] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.74:8443/healthz\": dial tcp 192.168.39.74:8443: connect: connection refused"
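
This controller-manager instance exited because the local apiserver's /healthz on 192.168.39.74:8443 never became reachable within its timeout, consistent with the apiserver failure above; the instance in the next section is the one that recovered. A simple probe of both the node apiserver and the kube-vip address (192.168.39.254, i.e. control-plane.minikube.internal per the kube-proxy logs below), run from the guest, would be:

  minikube -p ha-293078 ssh "curl -k https://192.168.39.74:8443/healthz; curl -k https://192.168.39.254:8443/healthz"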
	
	
	==> kube-controller-manager [e4cbd0e1fa74f9a0bf6ac1fcafa74e7cc52ea84d7f7d3216ffa34610961bb64b] <==
	I0401 18:34:22.396143       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="80.037µs"
	I0401 18:34:22.603784       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="44.201µs"
	I0401 18:34:22.627031       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="79.608µs"
	I0401 18:34:22.642841       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="60.192µs"
	I0401 18:34:22.657953       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.22841ms"
	I0401 18:34:22.658274       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="56.481µs"
	I0401 18:34:35.449348       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-293078-m04"
	E0401 18:34:35.489102       1 garbagecollector.go:408] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"storage.k8s.io/v1", Kind:"CSINode", Name:"ha-293078-m03", UID:"4708de16-1cf9-47f3-aa9a-7414bb503871", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:""}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_
:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"ha-293078-m03", UID:"cffaa052-7943-409a-99ec-1f88fa921b01", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io "ha-293078-m03" not found
	I0401 18:34:35.646310       1 event.go:376] "Event occurred" object="ha-293078-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node ha-293078-m03 event: Removing Node ha-293078-m03 from Controller"
	E0401 18:34:45.636550       1 gc_controller.go:153] "Failed to get node" err="node \"ha-293078-m03\" not found" node="ha-293078-m03"
	E0401 18:34:45.636669       1 gc_controller.go:153] "Failed to get node" err="node \"ha-293078-m03\" not found" node="ha-293078-m03"
	E0401 18:34:45.636695       1 gc_controller.go:153] "Failed to get node" err="node \"ha-293078-m03\" not found" node="ha-293078-m03"
	E0401 18:34:45.636720       1 gc_controller.go:153] "Failed to get node" err="node \"ha-293078-m03\" not found" node="ha-293078-m03"
	E0401 18:34:45.636743       1 gc_controller.go:153] "Failed to get node" err="node \"ha-293078-m03\" not found" node="ha-293078-m03"
	E0401 18:35:05.637768       1 gc_controller.go:153] "Failed to get node" err="node \"ha-293078-m03\" not found" node="ha-293078-m03"
	E0401 18:35:05.637825       1 gc_controller.go:153] "Failed to get node" err="node \"ha-293078-m03\" not found" node="ha-293078-m03"
	E0401 18:35:05.637833       1 gc_controller.go:153] "Failed to get node" err="node \"ha-293078-m03\" not found" node="ha-293078-m03"
	E0401 18:35:05.637839       1 gc_controller.go:153] "Failed to get node" err="node \"ha-293078-m03\" not found" node="ha-293078-m03"
	E0401 18:35:05.637845       1 gc_controller.go:153] "Failed to get node" err="node \"ha-293078-m03\" not found" node="ha-293078-m03"
	I0401 18:35:09.888502       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="57.428669ms"
	I0401 18:35:09.908339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="19.722369ms"
	I0401 18:35:09.908615       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="49.01µs"
	I0401 18:35:26.363274       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="12.542667ms"
	I0401 18:35:26.364174       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="89.247µs"
	I0401 18:35:35.007260       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-ntbk4" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-ntbk4"
	
	
	==> kube-proxy [8d7ab06dacb1f801ea9714513d3f23a0bad938d609fb9f291d0ec0c4903d8d6a] <==
	E0401 18:28:59.286576       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:29:02.357913       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-293078&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:29:02.358024       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:29:02.358120       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:29:02.358141       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-293078&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:29:02.357898       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:29:02.358267       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:29:08.503625       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-293078&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:29:08.503745       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-293078&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:29:08.503821       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:29:08.503874       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:29:08.503908       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:29:08.504162       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:29:17.718745       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-293078&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:29:17.718958       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-293078&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:29:20.790974       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:29:20.791118       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:29:20.791361       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:29:20.791479       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:29:36.150324       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-293078&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:29:36.150514       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-293078&resourceVersion=1857": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:29:42.296297       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:29:42.296711       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1780": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:29:45.366573       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:29:45.367179       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1862": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [d0ba4303bba7609a3982e28cc53c7c80afb21aadb86d498d9d4b5e6340e2d039] <==
	I0401 18:32:31.475669       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0401 18:32:31.475730       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 18:32:31.475758       1 server_others.go:168] "Using iptables Proxier"
	I0401 18:32:31.480667       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 18:32:31.481004       1 server.go:865] "Version info" version="v1.29.3"
	I0401 18:32:31.481053       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 18:32:31.483694       1 config.go:188] "Starting service config controller"
	I0401 18:32:31.498467       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0401 18:32:31.498544       1 config.go:97] "Starting endpoint slice config controller"
	I0401 18:32:31.498553       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0401 18:32:31.499212       1 config.go:315] "Starting node config controller"
	I0401 18:32:31.499257       1 shared_informer.go:311] Waiting for caches to sync for node config
	E0401 18:32:34.327023       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0401 18:32:34.328343       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:32:34.328725       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:32:34.328816       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:32:34.328882       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0401 18:32:34.328952       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-293078&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0401 18:32:34.329009       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-293078&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0401 18:32:35.298610       1 shared_informer.go:318] Caches are synced for service config
	I0401 18:32:35.799560       1 shared_informer.go:318] Caches are synced for node config
	I0401 18:32:35.899555       1 shared_informer.go:318] Caches are synced for endpoint slice config
	W0401 18:35:20.472991       1 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0401 18:35:20.473563       1 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0401 18:35:20.473609       1 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-scheduler [6bd1ccbceec8c5056f450169f49c17acf202e064825e6c51a55ca89e591e25b5] <==
	E0401 18:30:06.000673       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0401 18:30:06.185594       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 18:30:06.185621       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0401 18:30:06.409792       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 18:30:06.409849       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0401 18:30:06.515055       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0401 18:30:06.515106       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0401 18:30:06.767717       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 18:30:06.767814       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0401 18:30:06.814304       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 18:30:06.814344       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0401 18:30:07.044190       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 18:30:07.044244       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0401 18:30:07.980831       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 18:30:07.980894       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0401 18:30:08.097289       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 18:30:08.097500       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0401 18:30:08.160852       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 18:30:08.161002       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0401 18:30:08.430145       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 18:30:08.430201       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0401 18:30:09.307710       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0401 18:30:09.307879       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0401 18:30:09.308074       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0401 18:30:09.308196       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [760c286bbb6db472837a632164ec1f41295aab88d45f26ad6be70fd606b5d770] <==
	W0401 18:32:28.743995       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.74:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	E0401 18:32:28.744096       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.74:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	W0401 18:32:28.861529       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: Get "https://192.168.39.74:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	E0401 18:32:28.861618       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.74:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	W0401 18:32:29.320939       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://192.168.39.74:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	E0401 18:32:29.321002       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.74:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	W0401 18:32:29.358778       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.74:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	E0401 18:32:29.358839       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.74:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	W0401 18:32:29.369482       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: Get "https://192.168.39.74:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	E0401 18:32:29.369542       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.74:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	W0401 18:32:29.417318       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.74:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	E0401 18:32:29.417525       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.74:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	W0401 18:32:29.740959       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: Get "https://192.168.39.74:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	E0401 18:32:29.741049       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.74:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	W0401 18:32:29.767908       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: Get "https://192.168.39.74:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	E0401 18:32:29.767955       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.74:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.74:8443: connect: connection refused
	W0401 18:32:32.521765       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 18:32:32.521841       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0401 18:32:32.521958       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 18:32:32.521999       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0401 18:32:41.268362       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0401 18:34:20.245670       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-drlq5\": pod busybox-7fdf7869d9-drlq5 is already assigned to node \"ha-293078-m04\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-drlq5" node="ha-293078-m04"
	E0401 18:34:20.245817       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 9f7e575d-2bf1-44f3-ad30-a996de98aef3(default/busybox-7fdf7869d9-drlq5) wasn't assumed so cannot be forgotten"
	E0401 18:34:20.245889       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-drlq5\": pod busybox-7fdf7869d9-drlq5 is already assigned to node \"ha-293078-m04\"" pod="default/busybox-7fdf7869d9-drlq5"
	I0401 18:34:20.245917       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-drlq5" node="ha-293078-m04"
	
	
	==> kubelet <==
	Apr 01 18:32:49 ha-293078 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 18:33:17 ha-293078 kubelet[1369]: I0401 18:33:17.931976    1369 kubelet.go:1903] "Trying to delete pod" pod="kube-system/kube-vip-ha-293078" podUID="543de9ec-6f50-46b9-b6ec-f58964f81f12"
	Apr 01 18:33:17 ha-293078 kubelet[1369]: I0401 18:33:17.963524    1369 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-293078"
	Apr 01 18:33:18 ha-293078 kubelet[1369]: I0401 18:33:18.094754    1369 kubelet.go:1903] "Trying to delete pod" pod="kube-system/kube-vip-ha-293078" podUID="543de9ec-6f50-46b9-b6ec-f58964f81f12"
	Apr 01 18:33:44 ha-293078 kubelet[1369]: I0401 18:33:44.770088    1369 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-vip-ha-293078" podStartSLOduration=27.770003033 podStartE2EDuration="27.770003033s" podCreationTimestamp="2024-04-01 18:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-01 18:33:19.9568693 +0000 UTC m=+750.246997842" watchObservedRunningTime="2024-04-01 18:33:44.770003033 +0000 UTC m=+775.060131577"
	Apr 01 18:33:49 ha-293078 kubelet[1369]: E0401 18:33:49.981947    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 18:33:49 ha-293078 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 18:33:49 ha-293078 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 18:33:49 ha-293078 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 18:33:49 ha-293078 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 18:34:49 ha-293078 kubelet[1369]: E0401 18:34:49.982372    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 18:34:49 ha-293078 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 18:34:49 ha-293078 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 18:34:49 ha-293078 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 18:34:49 ha-293078 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 18:35:49 ha-293078 kubelet[1369]: E0401 18:35:49.982642    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 18:35:49 ha-293078 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 18:35:49 ha-293078 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 18:35:49 ha-293078 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 18:35:49 ha-293078 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 18:36:49 ha-293078 kubelet[1369]: E0401 18:36:49.982899    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 18:36:49 ha-293078 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 18:36:49 ha-293078 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 18:36:49 ha-293078 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 18:36:49 ha-293078 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 18:36:57.383939   34867 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18233-10493/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-293078 -n ha-293078
helpers_test.go:261: (dbg) Run:  kubectl --context ha-293078 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.00s)
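Editor's note: the "bufio.Scanner: token too long" error in the stderr block above means a single line in lastStart.txt exceeded bufio.Scanner's default 64 KiB token limit, so the post-mortem could not replay the last start log. A minimal sketch of reading such a file with a larger limit via Scanner.Buffer; this is illustrative only and is not minikube's logs.go implementation:

    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    func main() {
        f, err := os.Open("lastStart.txt") // path is illustrative
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        // The default cap is bufio.MaxScanTokenSize (64 KiB); allow lines up to
        // 10 MiB so very long log lines do not abort the scan with ErrTooLong.
        sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
        for sc.Scan() {
            fmt.Println(sc.Text())
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, "scan failed:", err)
        }
    }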

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (308.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-853477
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-853477
E0401 18:53:52.856219   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 18:54:16.856570   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-853477: exit status 82 (2m2.703719518s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-853477-m03"  ...
	* Stopping node "multinode-853477-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-853477" : exit status 82
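Editor's note: exit status 82 is the code that the GUEST_STOP_TIMEOUT message in the stderr block above was reported with; the stop gave up while a VM was still reported as "Running". A minimal sketch of re-running that step by hand and surfacing the exit code (binary path and profile name are taken from the log lines above):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Same binary and profile as the failing test step above.
        cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "multinode-853477")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))

        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // The log above pairs this exit code (82) with GUEST_STOP_TIMEOUT.
            fmt.Println("minikube stop exit code:", ee.ExitCode())
        }
    }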
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-853477 --wait=true -v=8 --alsologtostderr
E0401 18:57:19.905291   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-853477 --wait=true -v=8 --alsologtostderr: (3m3.207642263s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-853477
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-853477 -n multinode-853477
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-853477 logs -n 25: (1.701802981s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| ssh     | multinode-853477 ssh -n                                                                 | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | multinode-853477-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-853477 cp multinode-853477-m02:/home/docker/cp-test.txt                       | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3069131938/001/cp-test_multinode-853477-m02.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-853477 ssh -n                                                                 | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | multinode-853477-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-853477 cp multinode-853477-m02:/home/docker/cp-test.txt                       | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | multinode-853477:/home/docker/cp-test_multinode-853477-m02_multinode-853477.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-853477 ssh -n                                                                 | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | multinode-853477-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-853477 ssh -n multinode-853477 sudo cat                                       | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | /home/docker/cp-test_multinode-853477-m02_multinode-853477.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-853477 cp multinode-853477-m02:/home/docker/cp-test.txt                       | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | multinode-853477-m03:/home/docker/cp-test_multinode-853477-m02_multinode-853477-m03.txt |                  |         |                |                     |                     |
	| ssh     | multinode-853477 ssh -n                                                                 | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | multinode-853477-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-853477 ssh -n multinode-853477-m03 sudo cat                                   | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | /home/docker/cp-test_multinode-853477-m02_multinode-853477-m03.txt                      |                  |         |                |                     |                     |
	| cp      | multinode-853477 cp testdata/cp-test.txt                                                | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | multinode-853477-m03:/home/docker/cp-test.txt                                           |                  |         |                |                     |                     |
	| ssh     | multinode-853477 ssh -n                                                                 | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | multinode-853477-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-853477 cp multinode-853477-m03:/home/docker/cp-test.txt                       | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3069131938/001/cp-test_multinode-853477-m03.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-853477 ssh -n                                                                 | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | multinode-853477-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-853477 cp multinode-853477-m03:/home/docker/cp-test.txt                       | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | multinode-853477:/home/docker/cp-test_multinode-853477-m03_multinode-853477.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-853477 ssh -n                                                                 | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | multinode-853477-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-853477 ssh -n multinode-853477 sudo cat                                       | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | /home/docker/cp-test_multinode-853477-m03_multinode-853477.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-853477 cp multinode-853477-m03:/home/docker/cp-test.txt                       | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:52 UTC | 01 Apr 24 18:52 UTC |
	|         | multinode-853477-m02:/home/docker/cp-test_multinode-853477-m03_multinode-853477-m02.txt |                  |         |                |                     |                     |
	| ssh     | multinode-853477 ssh -n                                                                 | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:52 UTC | 01 Apr 24 18:52 UTC |
	|         | multinode-853477-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-853477 ssh -n multinode-853477-m02 sudo cat                                   | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:52 UTC | 01 Apr 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-853477-m03_multinode-853477-m02.txt                      |                  |         |                |                     |                     |
	| node    | multinode-853477 node stop m03                                                          | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:52 UTC | 01 Apr 24 18:52 UTC |
	| node    | multinode-853477 node start                                                             | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:52 UTC | 01 Apr 24 18:52 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |                |                     |                     |
	| node    | list -p multinode-853477                                                                | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:52 UTC |                     |
	| stop    | -p multinode-853477                                                                     | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:52 UTC |                     |
	| start   | -p multinode-853477                                                                     | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:54 UTC | 01 Apr 24 18:57 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |                |                     |                     |
	| node    | list -p multinode-853477                                                                | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:57 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 18:54:35
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 18:54:35.112290   43137 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:54:35.112408   43137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:54:35.112416   43137 out.go:304] Setting ErrFile to fd 2...
	I0401 18:54:35.112420   43137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:54:35.112584   43137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:54:35.113115   43137 out.go:298] Setting JSON to false
	I0401 18:54:35.114013   43137 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5827,"bootTime":1711991848,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 18:54:35.114074   43137 start.go:139] virtualization: kvm guest
	I0401 18:54:35.116421   43137 out.go:177] * [multinode-853477] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 18:54:35.117859   43137 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 18:54:35.117865   43137 notify.go:220] Checking for updates...
	I0401 18:54:35.119437   43137 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 18:54:35.120870   43137 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 18:54:35.122667   43137 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:54:35.124294   43137 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 18:54:35.125553   43137 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 18:54:35.127011   43137 config.go:182] Loaded profile config "multinode-853477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:54:35.127092   43137 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 18:54:35.127510   43137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:54:35.127572   43137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:54:35.142256   43137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33125
	I0401 18:54:35.142690   43137 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:54:35.143168   43137 main.go:141] libmachine: Using API Version  1
	I0401 18:54:35.143190   43137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:54:35.143510   43137 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:54:35.143663   43137 main.go:141] libmachine: (multinode-853477) Calling .DriverName
	I0401 18:54:35.177565   43137 out.go:177] * Using the kvm2 driver based on existing profile
	I0401 18:54:35.178699   43137 start.go:297] selected driver: kvm2
	I0401 18:54:35.178713   43137 start.go:901] validating driver "kvm2" against &{Name:multinode-853477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.29.3 ClusterName:multinode-853477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.115 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:54:35.178875   43137 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 18:54:35.179313   43137 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 18:54:35.179401   43137 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 18:54:35.193694   43137 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 18:54:35.194371   43137 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 18:54:35.194433   43137 cni.go:84] Creating CNI manager for ""
	I0401 18:54:35.194445   43137 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0401 18:54:35.194501   43137 start.go:340] cluster config:
	{Name:multinode-853477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-853477 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.115 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:54:35.194628   43137 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 18:54:35.196224   43137 out.go:177] * Starting "multinode-853477" primary control-plane node in "multinode-853477" cluster
	I0401 18:54:35.197364   43137 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 18:54:35.197386   43137 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0401 18:54:35.197392   43137 cache.go:56] Caching tarball of preloaded images
	I0401 18:54:35.197453   43137 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 18:54:35.197465   43137 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0401 18:54:35.197578   43137 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477/config.json ...
	I0401 18:54:35.197795   43137 start.go:360] acquireMachinesLock for multinode-853477: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 18:54:35.197834   43137 start.go:364] duration metric: took 22.189µs to acquireMachinesLock for "multinode-853477"
	I0401 18:54:35.197848   43137 start.go:96] Skipping create...Using existing machine configuration
	I0401 18:54:35.197860   43137 fix.go:54] fixHost starting: 
	I0401 18:54:35.198098   43137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:54:35.198127   43137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:54:35.211547   43137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36887
	I0401 18:54:35.211883   43137 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:54:35.212336   43137 main.go:141] libmachine: Using API Version  1
	I0401 18:54:35.212356   43137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:54:35.212683   43137 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:54:35.212879   43137 main.go:141] libmachine: (multinode-853477) Calling .DriverName
	I0401 18:54:35.213030   43137 main.go:141] libmachine: (multinode-853477) Calling .GetState
	I0401 18:54:35.214546   43137 fix.go:112] recreateIfNeeded on multinode-853477: state=Running err=<nil>
	W0401 18:54:35.214572   43137 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 18:54:35.216486   43137 out.go:177] * Updating the running kvm2 "multinode-853477" VM ...
	I0401 18:54:35.217625   43137 machine.go:94] provisionDockerMachine start ...
	I0401 18:54:35.217661   43137 main.go:141] libmachine: (multinode-853477) Calling .DriverName
	I0401 18:54:35.217863   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHHostname
	I0401 18:54:35.220207   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.220646   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:54:35.220671   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.220797   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHPort
	I0401 18:54:35.220942   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:54:35.221055   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:54:35.221177   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHUsername
	I0401 18:54:35.221318   43137 main.go:141] libmachine: Using SSH client type: native
	I0401 18:54:35.221524   43137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0401 18:54:35.221536   43137 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 18:54:35.339251   43137 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-853477
	
	I0401 18:54:35.339273   43137 main.go:141] libmachine: (multinode-853477) Calling .GetMachineName
	I0401 18:54:35.339481   43137 buildroot.go:166] provisioning hostname "multinode-853477"
	I0401 18:54:35.339501   43137 main.go:141] libmachine: (multinode-853477) Calling .GetMachineName
	I0401 18:54:35.339675   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHHostname
	I0401 18:54:35.342322   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.342755   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:54:35.342782   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.342983   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHPort
	I0401 18:54:35.343150   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:54:35.343292   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:54:35.343422   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHUsername
	I0401 18:54:35.343561   43137 main.go:141] libmachine: Using SSH client type: native
	I0401 18:54:35.343709   43137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0401 18:54:35.343721   43137 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-853477 && echo "multinode-853477" | sudo tee /etc/hostname
	I0401 18:54:35.471352   43137 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-853477
	
	I0401 18:54:35.471393   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHHostname
	I0401 18:54:35.473941   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.474300   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:54:35.474341   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.474482   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHPort
	I0401 18:54:35.474644   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:54:35.474786   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:54:35.474899   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHUsername
	I0401 18:54:35.475037   43137 main.go:141] libmachine: Using SSH client type: native
	I0401 18:54:35.475233   43137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0401 18:54:35.475257   43137 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-853477' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-853477/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-853477' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 18:54:35.591329   43137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
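
For orientation, a minimal Go sketch of how the /etc/hosts snippet above could be assembled before being piped over SSH. Illustrative only: buildHostsFix is a hypothetical helper name, not minikube's provisioner.

package main

import "fmt"

// buildHostsFix is a hypothetical helper: it assembles the same shell snippet
// the log shows being run over SSH to pin 127.0.1.1 to the machine hostname.
func buildHostsFix(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(buildHostsFix("multinode-853477"))
}
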
	I0401 18:54:35.591363   43137 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 18:54:35.591389   43137 buildroot.go:174] setting up certificates
	I0401 18:54:35.591398   43137 provision.go:84] configureAuth start
	I0401 18:54:35.591407   43137 main.go:141] libmachine: (multinode-853477) Calling .GetMachineName
	I0401 18:54:35.591660   43137 main.go:141] libmachine: (multinode-853477) Calling .GetIP
	I0401 18:54:35.594092   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.594446   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:54:35.594481   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.594548   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHHostname
	I0401 18:54:35.596526   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.596810   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:54:35.596840   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.596979   43137 provision.go:143] copyHostCerts
	I0401 18:54:35.597010   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 18:54:35.597043   43137 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 18:54:35.597051   43137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 18:54:35.597129   43137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 18:54:35.597215   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 18:54:35.597233   43137 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 18:54:35.597238   43137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 18:54:35.597261   43137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 18:54:35.597315   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 18:54:35.597337   43137 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 18:54:35.597344   43137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 18:54:35.597364   43137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 18:54:35.597419   43137 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.multinode-853477 san=[127.0.0.1 192.168.39.161 localhost minikube multinode-853477]
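
The server certificate is generated with the SANs listed above. A compact, standard-library-only sketch of issuing a certificate with that SAN set (self-signed here for brevity, unlike the CA-signed cert in the log; everything below is illustrative, not minikube's code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-853477"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile config
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision.go:117 line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.161")},
		DNSNames:    []string{"localhost", "minikube", "multinode-853477"},
	}
	// Self-signed for brevity; the real flow signs with the ca.pem/ca-key.pem pair.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
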
	I0401 18:54:35.731835   43137 provision.go:177] copyRemoteCerts
	I0401 18:54:35.731901   43137 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 18:54:35.731924   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHHostname
	I0401 18:54:35.735061   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.735406   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:54:35.735447   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.735622   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHPort
	I0401 18:54:35.735825   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:54:35.735998   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHUsername
	I0401 18:54:35.736140   43137 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/multinode-853477/id_rsa Username:docker}
	I0401 18:54:35.825857   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0401 18:54:35.825958   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 18:54:35.855911   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0401 18:54:35.855959   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0401 18:54:35.882303   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0401 18:54:35.882359   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 18:54:35.911941   43137 provision.go:87] duration metric: took 320.533658ms to configureAuth
	I0401 18:54:35.911961   43137 buildroot.go:189] setting minikube options for container-runtime
	I0401 18:54:35.912163   43137 config.go:182] Loaded profile config "multinode-853477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:54:35.912236   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHHostname
	I0401 18:54:35.914674   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.914987   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:54:35.915019   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.915166   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHPort
	I0401 18:54:35.915390   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:54:35.915547   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:54:35.915682   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHUsername
	I0401 18:54:35.915835   43137 main.go:141] libmachine: Using SSH client type: native
	I0401 18:54:35.915988   43137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0401 18:54:35.916004   43137 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 18:56:06.859140   43137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 18:56:06.859174   43137 machine.go:97] duration metric: took 1m31.641534818s to provisionDockerMachine
	I0401 18:56:06.859190   43137 start.go:293] postStartSetup for "multinode-853477" (driver="kvm2")
	I0401 18:56:06.859206   43137 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 18:56:06.859225   43137 main.go:141] libmachine: (multinode-853477) Calling .DriverName
	I0401 18:56:06.859572   43137 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 18:56:06.859598   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHHostname
	I0401 18:56:06.862599   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:56:06.863063   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:56:06.863088   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:56:06.863205   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHPort
	I0401 18:56:06.863372   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:56:06.863523   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHUsername
	I0401 18:56:06.863690   43137 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/multinode-853477/id_rsa Username:docker}
	I0401 18:56:06.949537   43137 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 18:56:06.954249   43137 command_runner.go:130] > NAME=Buildroot
	I0401 18:56:06.954267   43137 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0401 18:56:06.954271   43137 command_runner.go:130] > ID=buildroot
	I0401 18:56:06.954276   43137 command_runner.go:130] > VERSION_ID=2023.02.9
	I0401 18:56:06.954290   43137 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0401 18:56:06.954326   43137 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 18:56:06.954352   43137 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 18:56:06.954412   43137 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 18:56:06.954508   43137 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 18:56:06.954518   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> /etc/ssl/certs/177512.pem
	I0401 18:56:06.954616   43137 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 18:56:06.964741   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 18:56:06.991158   43137 start.go:296] duration metric: took 131.953919ms for postStartSetup
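
The local-assets scan above maps every file under .minikube/files onto the same path rooted at / on the guest (here 177512.pem -> /etc/ssl/certs/177512.pem); .minikube/addons is scanned the same way. A small sketch of that path mapping (illustrative; it only prints the mapping rather than scp'ing anything):

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

func main() {
	// Mirror how files under <minikube home>/files/ map to absolute guest paths.
	root := "/home/jenkins/minikube-integration/18233-10493/.minikube/files"
	filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, _ := filepath.Rel(root, p)
		fmt.Printf("%s -> /%s\n", p, rel)
		return nil
	})
}
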
	I0401 18:56:06.991192   43137 fix.go:56] duration metric: took 1m31.793331155s for fixHost
	I0401 18:56:06.991215   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHHostname
	I0401 18:56:06.993807   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:56:06.994093   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:56:06.994113   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:56:06.994263   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHPort
	I0401 18:56:06.994444   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:56:06.994604   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:56:06.994725   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHUsername
	I0401 18:56:06.994854   43137 main.go:141] libmachine: Using SSH client type: native
	I0401 18:56:06.995002   43137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0401 18:56:06.995012   43137 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 18:56:07.106523   43137 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711997767.083421289
	
	I0401 18:56:07.106540   43137 fix.go:216] guest clock: 1711997767.083421289
	I0401 18:56:07.106546   43137 fix.go:229] Guest: 2024-04-01 18:56:07.083421289 +0000 UTC Remote: 2024-04-01 18:56:06.991196377 +0000 UTC m=+91.927467503 (delta=92.224912ms)
	I0401 18:56:07.106563   43137 fix.go:200] guest clock delta is within tolerance: 92.224912ms
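
The guest-clock check compares the guest's `date +%s.%N` output against the host-side timestamp and accepts the skew if it is inside a tolerance. Reproducing the 92.224912ms delta from the two timestamps in the log (the 2s tolerance below is an assumption for illustration, not taken from fix.go):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values from the log above.
	guest := time.Unix(0, 1711997767083421289).UTC() // 2024-04-01 18:56:07.083421289
	remote, _ := time.Parse("2006-01-02 15:04:05.000000000 -0700 MST",
		"2024-04-01 18:56:06.991196377 +0000 UTC")
	delta := guest.Sub(remote)
	const tolerance = 2 * time.Second // assumed tolerance; the real threshold lives in minikube
	fmt.Printf("delta=%v within=%v\n", delta, delta < tolerance && delta > -tolerance)
}
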
	I0401 18:56:07.106569   43137 start.go:83] releasing machines lock for "multinode-853477", held for 1m31.908725389s
	I0401 18:56:07.106585   43137 main.go:141] libmachine: (multinode-853477) Calling .DriverName
	I0401 18:56:07.106802   43137 main.go:141] libmachine: (multinode-853477) Calling .GetIP
	I0401 18:56:07.109429   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:56:07.109879   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:56:07.109919   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:56:07.110034   43137 main.go:141] libmachine: (multinode-853477) Calling .DriverName
	I0401 18:56:07.110525   43137 main.go:141] libmachine: (multinode-853477) Calling .DriverName
	I0401 18:56:07.110699   43137 main.go:141] libmachine: (multinode-853477) Calling .DriverName
	I0401 18:56:07.110814   43137 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 18:56:07.110853   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHHostname
	I0401 18:56:07.110893   43137 ssh_runner.go:195] Run: cat /version.json
	I0401 18:56:07.110915   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHHostname
	I0401 18:56:07.113510   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:56:07.113701   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:56:07.113967   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:56:07.114009   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:56:07.114095   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHPort
	I0401 18:56:07.114236   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:56:07.114253   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:56:07.114269   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:56:07.114388   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHUsername
	I0401 18:56:07.114437   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHPort
	I0401 18:56:07.114533   43137 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/multinode-853477/id_rsa Username:docker}
	I0401 18:56:07.114615   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:56:07.114730   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHUsername
	I0401 18:56:07.114885   43137 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/multinode-853477/id_rsa Username:docker}
	I0401 18:56:07.194236   43137 command_runner.go:130] > {"iso_version": "v1.33.0-1711559712-18485", "kicbase_version": "v0.0.43-beta.0", "minikube_version": "v1.33.0-beta.0", "commit": "db97f5257476488cfa11a4cd2d95d2aa6fbd9d33"}
	I0401 18:56:07.194517   43137 ssh_runner.go:195] Run: systemctl --version
	I0401 18:56:07.221043   43137 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0401 18:56:07.221103   43137 command_runner.go:130] > systemd 252 (252)
	I0401 18:56:07.221127   43137 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0401 18:56:07.221175   43137 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 18:56:07.381795   43137 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 18:56:07.390145   43137 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0401 18:56:07.390582   43137 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 18:56:07.390644   43137 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 18:56:07.400788   43137 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 18:56:07.400807   43137 start.go:494] detecting cgroup driver to use...
	I0401 18:56:07.400868   43137 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 18:56:07.418360   43137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 18:56:07.433118   43137 docker.go:217] disabling cri-docker service (if available) ...
	I0401 18:56:07.433176   43137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 18:56:07.447362   43137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 18:56:07.461611   43137 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 18:56:07.615784   43137 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 18:56:07.779232   43137 docker.go:233] disabling docker service ...
	I0401 18:56:07.779307   43137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 18:56:07.800556   43137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 18:56:07.816488   43137 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 18:56:07.973654   43137 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 18:56:08.130631   43137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
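
Disabling the docker units proceeds as: stop the socket, stop the service, disable the socket, mask the service, then an is-active probe. A small replay of that sequence via os/exec (illustrative; needs root and is destructive on a real host):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Replays the "disabling docker service" sequence logged above.
	steps := [][]string{
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, s := range steps {
		if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
			fmt.Printf("%v: %v (%s)\n", s, err, out)
		}
	}
}
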
	I0401 18:56:08.146670   43137 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 18:56:08.166362   43137 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0401 18:56:08.166647   43137 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 18:56:08.166697   43137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:56:08.179499   43137 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 18:56:08.179562   43137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:56:08.191264   43137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:56:08.203054   43137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:56:08.215458   43137 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 18:56:08.227590   43137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:56:08.240432   43137 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:56:08.252349   43137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:56:08.264258   43137 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 18:56:08.274767   43137 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0401 18:56:08.274957   43137 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 18:56:08.285638   43137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:56:08.430186   43137 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 18:56:09.520561   43137 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.090344963s)
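
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf: pin the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, and re-add conmon_cgroup = "pod". A pure-Go sketch of the same edits applied to an in-memory copy of the file (the starting content below is made up for illustration):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Tiny stand-in for /etc/crio/crio.conf.d/02-crio.conf.
	conf := `pause_image = "registry.k8s.io/pause:3.6"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"`

	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n?`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	fmt.Println(conf)
}
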
	I0401 18:56:09.520588   43137 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 18:56:09.520637   43137 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 18:56:09.526097   43137 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0401 18:56:09.526112   43137 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0401 18:56:09.526119   43137 command_runner.go:130] > Device: 0,22	Inode: 1339        Links: 1
	I0401 18:56:09.526136   43137 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0401 18:56:09.526142   43137 command_runner.go:130] > Access: 2024-04-01 18:56:09.447321892 +0000
	I0401 18:56:09.526156   43137 command_runner.go:130] > Modify: 2024-04-01 18:56:09.386320402 +0000
	I0401 18:56:09.526171   43137 command_runner.go:130] > Change: 2024-04-01 18:56:09.386320402 +0000
	I0401 18:56:09.526180   43137 command_runner.go:130] >  Birth: -
	I0401 18:56:09.526262   43137 start.go:562] Will wait 60s for crictl version
	I0401 18:56:09.526316   43137 ssh_runner.go:195] Run: which crictl
	I0401 18:56:09.530812   43137 command_runner.go:130] > /usr/bin/crictl
	I0401 18:56:09.530860   43137 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 18:56:09.570886   43137 command_runner.go:130] > Version:  0.1.0
	I0401 18:56:09.570939   43137 command_runner.go:130] > RuntimeName:  cri-o
	I0401 18:56:09.570970   43137 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0401 18:56:09.570982   43137 command_runner.go:130] > RuntimeApiVersion:  v1
	I0401 18:56:09.572364   43137 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
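
Once the socket is up, the version probe is just `sudo /usr/bin/crictl version` with its key/value output split on ':'. A throwaway sketch of that parse (illustrative only):

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").Output()
	if err != nil {
		fmt.Println("crictl not ready yet:", err)
		return
	}
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	fmt.Println("runtime:", fields["RuntimeName"], fields["RuntimeVersion"])
}
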
	I0401 18:56:09.572439   43137 ssh_runner.go:195] Run: crio --version
	I0401 18:56:09.604038   43137 command_runner.go:130] > crio version 1.29.1
	I0401 18:56:09.604056   43137 command_runner.go:130] > Version:        1.29.1
	I0401 18:56:09.604061   43137 command_runner.go:130] > GitCommit:      unknown
	I0401 18:56:09.604067   43137 command_runner.go:130] > GitCommitDate:  unknown
	I0401 18:56:09.604073   43137 command_runner.go:130] > GitTreeState:   clean
	I0401 18:56:09.604086   43137 command_runner.go:130] > BuildDate:      2024-03-27T22:46:22Z
	I0401 18:56:09.604100   43137 command_runner.go:130] > GoVersion:      go1.21.6
	I0401 18:56:09.604106   43137 command_runner.go:130] > Compiler:       gc
	I0401 18:56:09.604112   43137 command_runner.go:130] > Platform:       linux/amd64
	I0401 18:56:09.604120   43137 command_runner.go:130] > Linkmode:       dynamic
	I0401 18:56:09.604127   43137 command_runner.go:130] > BuildTags:      
	I0401 18:56:09.604134   43137 command_runner.go:130] >   containers_image_ostree_stub
	I0401 18:56:09.604141   43137 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0401 18:56:09.604147   43137 command_runner.go:130] >   btrfs_noversion
	I0401 18:56:09.604154   43137 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0401 18:56:09.604161   43137 command_runner.go:130] >   libdm_no_deferred_remove
	I0401 18:56:09.604167   43137 command_runner.go:130] >   seccomp
	I0401 18:56:09.604174   43137 command_runner.go:130] > LDFlags:          unknown
	I0401 18:56:09.604180   43137 command_runner.go:130] > SeccompEnabled:   true
	I0401 18:56:09.604191   43137 command_runner.go:130] > AppArmorEnabled:  false
	I0401 18:56:09.604264   43137 ssh_runner.go:195] Run: crio --version
	I0401 18:56:09.637119   43137 command_runner.go:130] > crio version 1.29.1
	I0401 18:56:09.637149   43137 command_runner.go:130] > Version:        1.29.1
	I0401 18:56:09.637158   43137 command_runner.go:130] > GitCommit:      unknown
	I0401 18:56:09.637166   43137 command_runner.go:130] > GitCommitDate:  unknown
	I0401 18:56:09.637173   43137 command_runner.go:130] > GitTreeState:   clean
	I0401 18:56:09.637183   43137 command_runner.go:130] > BuildDate:      2024-03-27T22:46:22Z
	I0401 18:56:09.637191   43137 command_runner.go:130] > GoVersion:      go1.21.6
	I0401 18:56:09.637199   43137 command_runner.go:130] > Compiler:       gc
	I0401 18:56:09.637205   43137 command_runner.go:130] > Platform:       linux/amd64
	I0401 18:56:09.637211   43137 command_runner.go:130] > Linkmode:       dynamic
	I0401 18:56:09.637218   43137 command_runner.go:130] > BuildTags:      
	I0401 18:56:09.637223   43137 command_runner.go:130] >   containers_image_ostree_stub
	I0401 18:56:09.637227   43137 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0401 18:56:09.637232   43137 command_runner.go:130] >   btrfs_noversion
	I0401 18:56:09.637237   43137 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0401 18:56:09.637242   43137 command_runner.go:130] >   libdm_no_deferred_remove
	I0401 18:56:09.637246   43137 command_runner.go:130] >   seccomp
	I0401 18:56:09.637250   43137 command_runner.go:130] > LDFlags:          unknown
	I0401 18:56:09.637258   43137 command_runner.go:130] > SeccompEnabled:   true
	I0401 18:56:09.637266   43137 command_runner.go:130] > AppArmorEnabled:  false
	I0401 18:56:09.640198   43137 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 18:56:09.641861   43137 main.go:141] libmachine: (multinode-853477) Calling .GetIP
	I0401 18:56:09.644340   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:56:09.644692   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:56:09.644721   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:56:09.644869   43137 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 18:56:09.650038   43137 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0401 18:56:09.650310   43137 kubeadm.go:877] updating cluster {Name:multinode-853477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-853477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.115 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 18:56:09.650434   43137 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 18:56:09.650474   43137 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 18:56:09.699280   43137 command_runner.go:130] > {
	I0401 18:56:09.699305   43137 command_runner.go:130] >   "images": [
	I0401 18:56:09.699309   43137 command_runner.go:130] >     {
	I0401 18:56:09.699317   43137 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0401 18:56:09.699321   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.699327   43137 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0401 18:56:09.699330   43137 command_runner.go:130] >       ],
	I0401 18:56:09.699334   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.699341   43137 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0401 18:56:09.699349   43137 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0401 18:56:09.699359   43137 command_runner.go:130] >       ],
	I0401 18:56:09.699372   43137 command_runner.go:130] >       "size": "65291810",
	I0401 18:56:09.699376   43137 command_runner.go:130] >       "uid": null,
	I0401 18:56:09.699380   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.699387   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.699391   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.699395   43137 command_runner.go:130] >     },
	I0401 18:56:09.699398   43137 command_runner.go:130] >     {
	I0401 18:56:09.699404   43137 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0401 18:56:09.699408   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.699413   43137 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0401 18:56:09.699417   43137 command_runner.go:130] >       ],
	I0401 18:56:09.699421   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.699428   43137 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0401 18:56:09.699441   43137 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0401 18:56:09.699445   43137 command_runner.go:130] >       ],
	I0401 18:56:09.699485   43137 command_runner.go:130] >       "size": "1363676",
	I0401 18:56:09.699524   43137 command_runner.go:130] >       "uid": null,
	I0401 18:56:09.699546   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.699557   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.699564   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.699574   43137 command_runner.go:130] >     },
	I0401 18:56:09.699580   43137 command_runner.go:130] >     {
	I0401 18:56:09.699589   43137 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0401 18:56:09.699597   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.699602   43137 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0401 18:56:09.699608   43137 command_runner.go:130] >       ],
	I0401 18:56:09.699612   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.699619   43137 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0401 18:56:09.699627   43137 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0401 18:56:09.699632   43137 command_runner.go:130] >       ],
	I0401 18:56:09.699640   43137 command_runner.go:130] >       "size": "31470524",
	I0401 18:56:09.699646   43137 command_runner.go:130] >       "uid": null,
	I0401 18:56:09.699654   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.699660   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.699667   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.699683   43137 command_runner.go:130] >     },
	I0401 18:56:09.699690   43137 command_runner.go:130] >     {
	I0401 18:56:09.699698   43137 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0401 18:56:09.699705   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.699716   43137 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0401 18:56:09.699725   43137 command_runner.go:130] >       ],
	I0401 18:56:09.699732   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.699748   43137 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0401 18:56:09.699769   43137 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0401 18:56:09.699781   43137 command_runner.go:130] >       ],
	I0401 18:56:09.699787   43137 command_runner.go:130] >       "size": "61245718",
	I0401 18:56:09.699794   43137 command_runner.go:130] >       "uid": null,
	I0401 18:56:09.699801   43137 command_runner.go:130] >       "username": "nonroot",
	I0401 18:56:09.699811   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.699817   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.699822   43137 command_runner.go:130] >     },
	I0401 18:56:09.699825   43137 command_runner.go:130] >     {
	I0401 18:56:09.699835   43137 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0401 18:56:09.699846   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.699854   43137 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0401 18:56:09.699863   43137 command_runner.go:130] >       ],
	I0401 18:56:09.699870   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.699884   43137 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0401 18:56:09.699898   43137 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0401 18:56:09.699907   43137 command_runner.go:130] >       ],
	I0401 18:56:09.699914   43137 command_runner.go:130] >       "size": "150779692",
	I0401 18:56:09.699922   43137 command_runner.go:130] >       "uid": {
	I0401 18:56:09.699926   43137 command_runner.go:130] >         "value": "0"
	I0401 18:56:09.699931   43137 command_runner.go:130] >       },
	I0401 18:56:09.699938   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.699945   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.699953   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.699961   43137 command_runner.go:130] >     },
	I0401 18:56:09.699966   43137 command_runner.go:130] >     {
	I0401 18:56:09.699978   43137 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0401 18:56:09.699985   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.700000   43137 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0401 18:56:09.700009   43137 command_runner.go:130] >       ],
	I0401 18:56:09.700015   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.700025   43137 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0401 18:56:09.700040   43137 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0401 18:56:09.700047   43137 command_runner.go:130] >       ],
	I0401 18:56:09.700054   43137 command_runner.go:130] >       "size": "128508878",
	I0401 18:56:09.700063   43137 command_runner.go:130] >       "uid": {
	I0401 18:56:09.700070   43137 command_runner.go:130] >         "value": "0"
	I0401 18:56:09.700078   43137 command_runner.go:130] >       },
	I0401 18:56:09.700084   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.700094   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.700101   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.700107   43137 command_runner.go:130] >     },
	I0401 18:56:09.700111   43137 command_runner.go:130] >     {
	I0401 18:56:09.700120   43137 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0401 18:56:09.700130   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.700140   43137 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0401 18:56:09.700147   43137 command_runner.go:130] >       ],
	I0401 18:56:09.700154   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.700170   43137 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0401 18:56:09.700181   43137 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0401 18:56:09.700190   43137 command_runner.go:130] >       ],
	I0401 18:56:09.700197   43137 command_runner.go:130] >       "size": "123142962",
	I0401 18:56:09.700204   43137 command_runner.go:130] >       "uid": {
	I0401 18:56:09.700208   43137 command_runner.go:130] >         "value": "0"
	I0401 18:56:09.700213   43137 command_runner.go:130] >       },
	I0401 18:56:09.700223   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.700229   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.700239   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.700244   43137 command_runner.go:130] >     },
	I0401 18:56:09.700253   43137 command_runner.go:130] >     {
	I0401 18:56:09.700263   43137 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0401 18:56:09.700272   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.700323   43137 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0401 18:56:09.700396   43137 command_runner.go:130] >       ],
	I0401 18:56:09.700442   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.700477   43137 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0401 18:56:09.700493   43137 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0401 18:56:09.700500   43137 command_runner.go:130] >       ],
	I0401 18:56:09.700510   43137 command_runner.go:130] >       "size": "83634073",
	I0401 18:56:09.700515   43137 command_runner.go:130] >       "uid": null,
	I0401 18:56:09.700520   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.700524   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.700527   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.700531   43137 command_runner.go:130] >     },
	I0401 18:56:09.700534   43137 command_runner.go:130] >     {
	I0401 18:56:09.700539   43137 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0401 18:56:09.700546   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.700554   43137 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0401 18:56:09.700557   43137 command_runner.go:130] >       ],
	I0401 18:56:09.700561   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.700569   43137 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0401 18:56:09.700577   43137 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0401 18:56:09.700582   43137 command_runner.go:130] >       ],
	I0401 18:56:09.700588   43137 command_runner.go:130] >       "size": "60724018",
	I0401 18:56:09.700595   43137 command_runner.go:130] >       "uid": {
	I0401 18:56:09.700601   43137 command_runner.go:130] >         "value": "0"
	I0401 18:56:09.700604   43137 command_runner.go:130] >       },
	I0401 18:56:09.700608   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.700611   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.700614   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.700618   43137 command_runner.go:130] >     },
	I0401 18:56:09.700621   43137 command_runner.go:130] >     {
	I0401 18:56:09.700626   43137 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0401 18:56:09.700629   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.700634   43137 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0401 18:56:09.700637   43137 command_runner.go:130] >       ],
	I0401 18:56:09.700641   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.700647   43137 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0401 18:56:09.700655   43137 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0401 18:56:09.700662   43137 command_runner.go:130] >       ],
	I0401 18:56:09.700675   43137 command_runner.go:130] >       "size": "750414",
	I0401 18:56:09.700686   43137 command_runner.go:130] >       "uid": {
	I0401 18:56:09.700691   43137 command_runner.go:130] >         "value": "65535"
	I0401 18:56:09.700703   43137 command_runner.go:130] >       },
	I0401 18:56:09.700714   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.700729   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.700736   43137 command_runner.go:130] >       "pinned": true
	I0401 18:56:09.700740   43137 command_runner.go:130] >     }
	I0401 18:56:09.700743   43137 command_runner.go:130] >   ]
	I0401 18:56:09.700746   43137 command_runner.go:130] > }
	I0401 18:56:09.701609   43137 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 18:56:09.701623   43137 crio.go:433] Images already preloaded, skipping extraction
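
The preload check parses `sudo crictl images --output json` (the payload shown above) and confirms the expected tags are present before deciding to skip extraction. A minimal sketch of that check; the tag list below is a hand-picked subset for illustration, not the full set minikube verifies:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList matches the shape of the `crictl images --output json` payload above
// (only the fields needed here).
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.29.3",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/pause:3.9",
	} {
		fmt.Printf("%-45s preloaded=%v\n", want, have[want])
	}
}
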
	I0401 18:56:09.701679   43137 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 18:56:09.739358   43137 command_runner.go:130] > {
	I0401 18:56:09.739386   43137 command_runner.go:130] >   "images": [
	I0401 18:56:09.739390   43137 command_runner.go:130] >     {
	I0401 18:56:09.739398   43137 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0401 18:56:09.739402   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.739408   43137 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0401 18:56:09.739412   43137 command_runner.go:130] >       ],
	I0401 18:56:09.739415   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.739429   43137 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0401 18:56:09.739436   43137 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0401 18:56:09.739440   43137 command_runner.go:130] >       ],
	I0401 18:56:09.739444   43137 command_runner.go:130] >       "size": "65291810",
	I0401 18:56:09.739448   43137 command_runner.go:130] >       "uid": null,
	I0401 18:56:09.739451   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.739458   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.739464   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.739468   43137 command_runner.go:130] >     },
	I0401 18:56:09.739472   43137 command_runner.go:130] >     {
	I0401 18:56:09.739478   43137 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0401 18:56:09.739484   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.739490   43137 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0401 18:56:09.739493   43137 command_runner.go:130] >       ],
	I0401 18:56:09.739498   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.739505   43137 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0401 18:56:09.739515   43137 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0401 18:56:09.739523   43137 command_runner.go:130] >       ],
	I0401 18:56:09.739539   43137 command_runner.go:130] >       "size": "1363676",
	I0401 18:56:09.739545   43137 command_runner.go:130] >       "uid": null,
	I0401 18:56:09.739556   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.739564   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.739570   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.739579   43137 command_runner.go:130] >     },
	I0401 18:56:09.739584   43137 command_runner.go:130] >     {
	I0401 18:56:09.739591   43137 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0401 18:56:09.739595   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.739600   43137 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0401 18:56:09.739603   43137 command_runner.go:130] >       ],
	I0401 18:56:09.739607   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.739615   43137 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0401 18:56:09.739623   43137 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0401 18:56:09.739630   43137 command_runner.go:130] >       ],
	I0401 18:56:09.739636   43137 command_runner.go:130] >       "size": "31470524",
	I0401 18:56:09.739647   43137 command_runner.go:130] >       "uid": null,
	I0401 18:56:09.739653   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.739659   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.739668   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.739673   43137 command_runner.go:130] >     },
	I0401 18:56:09.739678   43137 command_runner.go:130] >     {
	I0401 18:56:09.739690   43137 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0401 18:56:09.739694   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.739700   43137 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0401 18:56:09.739704   43137 command_runner.go:130] >       ],
	I0401 18:56:09.739707   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.739718   43137 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0401 18:56:09.739741   43137 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0401 18:56:09.739751   43137 command_runner.go:130] >       ],
	I0401 18:56:09.739758   43137 command_runner.go:130] >       "size": "61245718",
	I0401 18:56:09.739768   43137 command_runner.go:130] >       "uid": null,
	I0401 18:56:09.739778   43137 command_runner.go:130] >       "username": "nonroot",
	I0401 18:56:09.739788   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.739796   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.739812   43137 command_runner.go:130] >     },
	I0401 18:56:09.739827   43137 command_runner.go:130] >     {
	I0401 18:56:09.739837   43137 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0401 18:56:09.739843   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.739860   43137 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0401 18:56:09.739869   43137 command_runner.go:130] >       ],
	I0401 18:56:09.739875   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.739886   43137 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0401 18:56:09.739901   43137 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0401 18:56:09.739910   43137 command_runner.go:130] >       ],
	I0401 18:56:09.739920   43137 command_runner.go:130] >       "size": "150779692",
	I0401 18:56:09.739929   43137 command_runner.go:130] >       "uid": {
	I0401 18:56:09.739938   43137 command_runner.go:130] >         "value": "0"
	I0401 18:56:09.739947   43137 command_runner.go:130] >       },
	I0401 18:56:09.739954   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.739962   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.739969   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.739975   43137 command_runner.go:130] >     },
	I0401 18:56:09.739983   43137 command_runner.go:130] >     {
	I0401 18:56:09.739996   43137 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0401 18:56:09.740006   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.740014   43137 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0401 18:56:09.740022   43137 command_runner.go:130] >       ],
	I0401 18:56:09.740032   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.740045   43137 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0401 18:56:09.740056   43137 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0401 18:56:09.740064   43137 command_runner.go:130] >       ],
	I0401 18:56:09.740075   43137 command_runner.go:130] >       "size": "128508878",
	I0401 18:56:09.740084   43137 command_runner.go:130] >       "uid": {
	I0401 18:56:09.740094   43137 command_runner.go:130] >         "value": "0"
	I0401 18:56:09.740099   43137 command_runner.go:130] >       },
	I0401 18:56:09.740108   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.740116   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.740123   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.740128   43137 command_runner.go:130] >     },
	I0401 18:56:09.740133   43137 command_runner.go:130] >     {
	I0401 18:56:09.740147   43137 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0401 18:56:09.740158   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.740167   43137 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0401 18:56:09.740175   43137 command_runner.go:130] >       ],
	I0401 18:56:09.740182   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.740197   43137 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0401 18:56:09.740212   43137 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0401 18:56:09.740221   43137 command_runner.go:130] >       ],
	I0401 18:56:09.740226   43137 command_runner.go:130] >       "size": "123142962",
	I0401 18:56:09.740236   43137 command_runner.go:130] >       "uid": {
	I0401 18:56:09.740243   43137 command_runner.go:130] >         "value": "0"
	I0401 18:56:09.740252   43137 command_runner.go:130] >       },
	I0401 18:56:09.740259   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.740268   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.740296   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.740311   43137 command_runner.go:130] >     },
	I0401 18:56:09.740317   43137 command_runner.go:130] >     {
	I0401 18:56:09.740325   43137 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0401 18:56:09.740334   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.740346   43137 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0401 18:56:09.740355   43137 command_runner.go:130] >       ],
	I0401 18:56:09.740370   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.740401   43137 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0401 18:56:09.740416   43137 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0401 18:56:09.740424   43137 command_runner.go:130] >       ],
	I0401 18:56:09.740431   43137 command_runner.go:130] >       "size": "83634073",
	I0401 18:56:09.740439   43137 command_runner.go:130] >       "uid": null,
	I0401 18:56:09.740449   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.740457   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.740467   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.740476   43137 command_runner.go:130] >     },
	I0401 18:56:09.740484   43137 command_runner.go:130] >     {
	I0401 18:56:09.740493   43137 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0401 18:56:09.740509   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.740526   43137 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0401 18:56:09.740532   43137 command_runner.go:130] >       ],
	I0401 18:56:09.740544   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.740556   43137 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0401 18:56:09.740569   43137 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0401 18:56:09.740575   43137 command_runner.go:130] >       ],
	I0401 18:56:09.740582   43137 command_runner.go:130] >       "size": "60724018",
	I0401 18:56:09.740589   43137 command_runner.go:130] >       "uid": {
	I0401 18:56:09.740595   43137 command_runner.go:130] >         "value": "0"
	I0401 18:56:09.740601   43137 command_runner.go:130] >       },
	I0401 18:56:09.740606   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.740611   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.740615   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.740618   43137 command_runner.go:130] >     },
	I0401 18:56:09.740626   43137 command_runner.go:130] >     {
	I0401 18:56:09.740635   43137 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0401 18:56:09.740646   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.740653   43137 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0401 18:56:09.740658   43137 command_runner.go:130] >       ],
	I0401 18:56:09.740666   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.740677   43137 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0401 18:56:09.740696   43137 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0401 18:56:09.740705   43137 command_runner.go:130] >       ],
	I0401 18:56:09.740711   43137 command_runner.go:130] >       "size": "750414",
	I0401 18:56:09.740718   43137 command_runner.go:130] >       "uid": {
	I0401 18:56:09.740722   43137 command_runner.go:130] >         "value": "65535"
	I0401 18:56:09.740728   43137 command_runner.go:130] >       },
	I0401 18:56:09.740733   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.740739   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.740747   43137 command_runner.go:130] >       "pinned": true
	I0401 18:56:09.740752   43137 command_runner.go:130] >     }
	I0401 18:56:09.740757   43137 command_runner.go:130] >   ]
	I0401 18:56:09.740763   43137 command_runner.go:130] > }
	I0401 18:56:09.740915   43137 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 18:56:09.740927   43137 cache_images.go:84] Images are preloaded, skipping loading
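The JSON listing above is the runtime's image inventory that the preload check parses before deciding to skip image loading. Below is a minimal, self-contained Go sketch of decoding such a listing; the struct and field set are assumptions for illustration only (using values taken from the log above), not minikube's internal types.

package main

import (
	"encoding/json"
	"fmt"
)

// imageList mirrors the shape of the JSON image listing logged above.
// The type and field names here are illustrative assumptions.
type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	// Sample payload built from one entry in the log above (the pause image).
	raw := []byte(`{"images":[{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoTags":["registry.k8s.io/pause:3.9"],"size":"750414","pinned":true}]}`)

	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.RepoTags, img.Size, img.Pinned)
	}
}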
	I0401 18:56:09.740933   43137 kubeadm.go:928] updating node { 192.168.39.161 8443 v1.29.3 crio true true} ...
	I0401 18:56:09.741056   43137 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-853477 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-853477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
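The kubelet unit override shown above is generated from the node's settings (kubelet binary path, hostname override, node IP). A minimal Go sketch of templating such a systemd drop-in follows; the template text, type, and field names are illustrative assumptions, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// nodeOpts holds the values substituted into the drop-in.
// The field names are illustrative assumptions.
type nodeOpts struct {
	KubeletPath, Hostname, NodeIP string
}

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	// Render the drop-in with the values seen in the log above.
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	opts := nodeOpts{
		KubeletPath: "/var/lib/minikube/binaries/v1.29.3/kubelet",
		Hostname:    "multinode-853477",
		NodeIP:      "192.168.39.161",
	}
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}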
	I0401 18:56:09.741141   43137 ssh_runner.go:195] Run: crio config
	I0401 18:56:09.777666   43137 command_runner.go:130] ! time="2024-04-01 18:56:09.754434169Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0401 18:56:09.788522   43137 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0401 18:56:09.798885   43137 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0401 18:56:09.798910   43137 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0401 18:56:09.798921   43137 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0401 18:56:09.798926   43137 command_runner.go:130] > #
	I0401 18:56:09.798935   43137 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0401 18:56:09.798943   43137 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0401 18:56:09.798952   43137 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0401 18:56:09.798973   43137 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0401 18:56:09.798983   43137 command_runner.go:130] > # reload'.
	I0401 18:56:09.798994   43137 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0401 18:56:09.799006   43137 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0401 18:56:09.799016   43137 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0401 18:56:09.799029   43137 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0401 18:56:09.799039   43137 command_runner.go:130] > [crio]
	I0401 18:56:09.799049   43137 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0401 18:56:09.799056   43137 command_runner.go:130] > # containers images, in this directory.
	I0401 18:56:09.799063   43137 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0401 18:56:09.799075   43137 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0401 18:56:09.799084   43137 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0401 18:56:09.799097   43137 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0401 18:56:09.799103   43137 command_runner.go:130] > # imagestore = ""
	I0401 18:56:09.799109   43137 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0401 18:56:09.799118   43137 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0401 18:56:09.799125   43137 command_runner.go:130] > storage_driver = "overlay"
	I0401 18:56:09.799130   43137 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0401 18:56:09.799138   43137 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0401 18:56:09.799145   43137 command_runner.go:130] > storage_option = [
	I0401 18:56:09.799150   43137 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0401 18:56:09.799155   43137 command_runner.go:130] > ]
	I0401 18:56:09.799162   43137 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0401 18:56:09.799170   43137 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0401 18:56:09.799177   43137 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0401 18:56:09.799182   43137 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0401 18:56:09.799190   43137 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0401 18:56:09.799197   43137 command_runner.go:130] > # always happen on a node reboot
	I0401 18:56:09.799202   43137 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0401 18:56:09.799215   43137 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0401 18:56:09.799223   43137 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0401 18:56:09.799228   43137 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0401 18:56:09.799236   43137 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0401 18:56:09.799243   43137 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0401 18:56:09.799253   43137 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0401 18:56:09.799259   43137 command_runner.go:130] > # internal_wipe = true
	I0401 18:56:09.799267   43137 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0401 18:56:09.799276   43137 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0401 18:56:09.799282   43137 command_runner.go:130] > # internal_repair = false
	I0401 18:56:09.799288   43137 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0401 18:56:09.799301   43137 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0401 18:56:09.799308   43137 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0401 18:56:09.799319   43137 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0401 18:56:09.799330   43137 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0401 18:56:09.799336   43137 command_runner.go:130] > [crio.api]
	I0401 18:56:09.799342   43137 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0401 18:56:09.799349   43137 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0401 18:56:09.799354   43137 command_runner.go:130] > # IP address on which the stream server will listen.
	I0401 18:56:09.799366   43137 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0401 18:56:09.799375   43137 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0401 18:56:09.799382   43137 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0401 18:56:09.799386   43137 command_runner.go:130] > # stream_port = "0"
	I0401 18:56:09.799393   43137 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0401 18:56:09.799400   43137 command_runner.go:130] > # stream_enable_tls = false
	I0401 18:56:09.799406   43137 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0401 18:56:09.799413   43137 command_runner.go:130] > # stream_idle_timeout = ""
	I0401 18:56:09.799419   43137 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0401 18:56:09.799427   43137 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0401 18:56:09.799433   43137 command_runner.go:130] > # minutes.
	I0401 18:56:09.799437   43137 command_runner.go:130] > # stream_tls_cert = ""
	I0401 18:56:09.799446   43137 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0401 18:56:09.799454   43137 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0401 18:56:09.799459   43137 command_runner.go:130] > # stream_tls_key = ""
	I0401 18:56:09.799464   43137 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0401 18:56:09.799473   43137 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0401 18:56:09.799493   43137 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0401 18:56:09.799503   43137 command_runner.go:130] > # stream_tls_ca = ""
	I0401 18:56:09.799510   43137 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0401 18:56:09.799514   43137 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0401 18:56:09.799521   43137 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0401 18:56:09.799528   43137 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0401 18:56:09.799534   43137 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0401 18:56:09.799541   43137 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0401 18:56:09.799545   43137 command_runner.go:130] > [crio.runtime]
	I0401 18:56:09.799550   43137 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0401 18:56:09.799558   43137 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0401 18:56:09.799564   43137 command_runner.go:130] > # "nofile=1024:2048"
	I0401 18:56:09.799570   43137 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0401 18:56:09.799577   43137 command_runner.go:130] > # default_ulimits = [
	I0401 18:56:09.799580   43137 command_runner.go:130] > # ]
	I0401 18:56:09.799588   43137 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0401 18:56:09.799594   43137 command_runner.go:130] > # no_pivot = false
	I0401 18:56:09.799602   43137 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0401 18:56:09.799611   43137 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0401 18:56:09.799623   43137 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0401 18:56:09.799636   43137 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0401 18:56:09.799647   43137 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0401 18:56:09.799659   43137 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0401 18:56:09.799667   43137 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0401 18:56:09.799678   43137 command_runner.go:130] > # Cgroup setting for conmon
	I0401 18:56:09.799690   43137 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0401 18:56:09.799700   43137 command_runner.go:130] > conmon_cgroup = "pod"
	I0401 18:56:09.799709   43137 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0401 18:56:09.799719   43137 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0401 18:56:09.799729   43137 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0401 18:56:09.799737   43137 command_runner.go:130] > conmon_env = [
	I0401 18:56:09.799746   43137 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0401 18:56:09.799753   43137 command_runner.go:130] > ]
	I0401 18:56:09.799759   43137 command_runner.go:130] > # Additional environment variables to set for all the
	I0401 18:56:09.799764   43137 command_runner.go:130] > # containers. These are overridden if set in the
	I0401 18:56:09.799772   43137 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0401 18:56:09.799776   43137 command_runner.go:130] > # default_env = [
	I0401 18:56:09.799780   43137 command_runner.go:130] > # ]
	I0401 18:56:09.799787   43137 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0401 18:56:09.799794   43137 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0401 18:56:09.799800   43137 command_runner.go:130] > # selinux = false
	I0401 18:56:09.799806   43137 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0401 18:56:09.799814   43137 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0401 18:56:09.799822   43137 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0401 18:56:09.799828   43137 command_runner.go:130] > # seccomp_profile = ""
	I0401 18:56:09.799834   43137 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0401 18:56:09.799841   43137 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0401 18:56:09.799858   43137 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0401 18:56:09.799866   43137 command_runner.go:130] > # which might increase security.
	I0401 18:56:09.799870   43137 command_runner.go:130] > # This option is currently deprecated,
	I0401 18:56:09.799878   43137 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0401 18:56:09.799883   43137 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0401 18:56:09.799889   43137 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0401 18:56:09.799897   43137 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0401 18:56:09.799908   43137 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0401 18:56:09.799922   43137 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0401 18:56:09.799929   43137 command_runner.go:130] > # This option supports live configuration reload.
	I0401 18:56:09.799934   43137 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0401 18:56:09.799942   43137 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0401 18:56:09.799946   43137 command_runner.go:130] > # the cgroup blockio controller.
	I0401 18:56:09.799952   43137 command_runner.go:130] > # blockio_config_file = ""
	I0401 18:56:09.799959   43137 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0401 18:56:09.799965   43137 command_runner.go:130] > # blockio parameters.
	I0401 18:56:09.799969   43137 command_runner.go:130] > # blockio_reload = false
	I0401 18:56:09.799980   43137 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0401 18:56:09.799986   43137 command_runner.go:130] > # irqbalance daemon.
	I0401 18:56:09.799991   43137 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0401 18:56:09.800000   43137 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0401 18:56:09.800007   43137 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0401 18:56:09.800015   43137 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0401 18:56:09.800021   43137 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0401 18:56:09.800027   43137 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0401 18:56:09.800033   43137 command_runner.go:130] > # This option supports live configuration reload.
	I0401 18:56:09.800036   43137 command_runner.go:130] > # rdt_config_file = ""
	I0401 18:56:09.800044   43137 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0401 18:56:09.800049   43137 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0401 18:56:09.800080   43137 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0401 18:56:09.800088   43137 command_runner.go:130] > # separate_pull_cgroup = ""
	I0401 18:56:09.800093   43137 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0401 18:56:09.800099   43137 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0401 18:56:09.800105   43137 command_runner.go:130] > # will be added.
	I0401 18:56:09.800109   43137 command_runner.go:130] > # default_capabilities = [
	I0401 18:56:09.800115   43137 command_runner.go:130] > # 	"CHOWN",
	I0401 18:56:09.800119   43137 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0401 18:56:09.800124   43137 command_runner.go:130] > # 	"FSETID",
	I0401 18:56:09.800128   43137 command_runner.go:130] > # 	"FOWNER",
	I0401 18:56:09.800134   43137 command_runner.go:130] > # 	"SETGID",
	I0401 18:56:09.800138   43137 command_runner.go:130] > # 	"SETUID",
	I0401 18:56:09.800144   43137 command_runner.go:130] > # 	"SETPCAP",
	I0401 18:56:09.800148   43137 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0401 18:56:09.800154   43137 command_runner.go:130] > # 	"KILL",
	I0401 18:56:09.800167   43137 command_runner.go:130] > # ]
	I0401 18:56:09.800177   43137 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0401 18:56:09.800185   43137 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0401 18:56:09.800194   43137 command_runner.go:130] > # add_inheritable_capabilities = false
	I0401 18:56:09.800203   43137 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0401 18:56:09.800211   43137 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0401 18:56:09.800214   43137 command_runner.go:130] > default_sysctls = [
	I0401 18:56:09.800221   43137 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0401 18:56:09.800224   43137 command_runner.go:130] > ]
	I0401 18:56:09.800229   43137 command_runner.go:130] > # List of devices on the host that a
	I0401 18:56:09.800237   43137 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0401 18:56:09.800243   43137 command_runner.go:130] > # allowed_devices = [
	I0401 18:56:09.800247   43137 command_runner.go:130] > # 	"/dev/fuse",
	I0401 18:56:09.800252   43137 command_runner.go:130] > # ]
	I0401 18:56:09.800256   43137 command_runner.go:130] > # List of additional devices. specified as
	I0401 18:56:09.800264   43137 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0401 18:56:09.800271   43137 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0401 18:56:09.800276   43137 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0401 18:56:09.800283   43137 command_runner.go:130] > # additional_devices = [
	I0401 18:56:09.800286   43137 command_runner.go:130] > # ]
	I0401 18:56:09.800294   43137 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0401 18:56:09.800298   43137 command_runner.go:130] > # cdi_spec_dirs = [
	I0401 18:56:09.800302   43137 command_runner.go:130] > # 	"/etc/cdi",
	I0401 18:56:09.800305   43137 command_runner.go:130] > # 	"/var/run/cdi",
	I0401 18:56:09.800311   43137 command_runner.go:130] > # ]
	I0401 18:56:09.800317   43137 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0401 18:56:09.800326   43137 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0401 18:56:09.800332   43137 command_runner.go:130] > # Defaults to false.
	I0401 18:56:09.800337   43137 command_runner.go:130] > # device_ownership_from_security_context = false
	I0401 18:56:09.800345   43137 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0401 18:56:09.800353   43137 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0401 18:56:09.800362   43137 command_runner.go:130] > # hooks_dir = [
	I0401 18:56:09.800369   43137 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0401 18:56:09.800372   43137 command_runner.go:130] > # ]
	I0401 18:56:09.800378   43137 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0401 18:56:09.800387   43137 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0401 18:56:09.800398   43137 command_runner.go:130] > # its default mounts from the following two files:
	I0401 18:56:09.800404   43137 command_runner.go:130] > #
	I0401 18:56:09.800410   43137 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0401 18:56:09.800419   43137 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0401 18:56:09.800424   43137 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0401 18:56:09.800429   43137 command_runner.go:130] > #
	I0401 18:56:09.800435   43137 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0401 18:56:09.800443   43137 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0401 18:56:09.800452   43137 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0401 18:56:09.800461   43137 command_runner.go:130] > #      only add mounts it finds in this file.
	I0401 18:56:09.800465   43137 command_runner.go:130] > #
	I0401 18:56:09.800470   43137 command_runner.go:130] > # default_mounts_file = ""
	I0401 18:56:09.800477   43137 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0401 18:56:09.800483   43137 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0401 18:56:09.800489   43137 command_runner.go:130] > pids_limit = 1024
	I0401 18:56:09.800495   43137 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0401 18:56:09.800503   43137 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0401 18:56:09.800509   43137 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0401 18:56:09.800519   43137 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0401 18:56:09.800526   43137 command_runner.go:130] > # log_size_max = -1
	I0401 18:56:09.800536   43137 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0401 18:56:09.800545   43137 command_runner.go:130] > # log_to_journald = false
	I0401 18:56:09.800558   43137 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0401 18:56:09.800568   43137 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0401 18:56:09.800579   43137 command_runner.go:130] > # Path to directory for container attach sockets.
	I0401 18:56:09.800590   43137 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0401 18:56:09.800600   43137 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0401 18:56:09.800609   43137 command_runner.go:130] > # bind_mount_prefix = ""
	I0401 18:56:09.800621   43137 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0401 18:56:09.800638   43137 command_runner.go:130] > # read_only = false
	I0401 18:56:09.800654   43137 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0401 18:56:09.800667   43137 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0401 18:56:09.800676   43137 command_runner.go:130] > # live configuration reload.
	I0401 18:56:09.800682   43137 command_runner.go:130] > # log_level = "info"
	I0401 18:56:09.800690   43137 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0401 18:56:09.800701   43137 command_runner.go:130] > # This option supports live configuration reload.
	I0401 18:56:09.800720   43137 command_runner.go:130] > # log_filter = ""
	I0401 18:56:09.800732   43137 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0401 18:56:09.800744   43137 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0401 18:56:09.800751   43137 command_runner.go:130] > # separated by comma.
	I0401 18:56:09.800762   43137 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0401 18:56:09.800772   43137 command_runner.go:130] > # uid_mappings = ""
	I0401 18:56:09.800781   43137 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0401 18:56:09.800793   43137 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0401 18:56:09.800802   43137 command_runner.go:130] > # separated by comma.
	I0401 18:56:09.800816   43137 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0401 18:56:09.800830   43137 command_runner.go:130] > # gid_mappings = ""
	I0401 18:56:09.800843   43137 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0401 18:56:09.800860   43137 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0401 18:56:09.800872   43137 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0401 18:56:09.800886   43137 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0401 18:56:09.800895   43137 command_runner.go:130] > # minimum_mappable_uid = -1
	I0401 18:56:09.800908   43137 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0401 18:56:09.800920   43137 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0401 18:56:09.800933   43137 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0401 18:56:09.800947   43137 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0401 18:56:09.800957   43137 command_runner.go:130] > # minimum_mappable_gid = -1
	I0401 18:56:09.800969   43137 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0401 18:56:09.800982   43137 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0401 18:56:09.800993   43137 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0401 18:56:09.801002   43137 command_runner.go:130] > # ctr_stop_timeout = 30
	I0401 18:56:09.801013   43137 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0401 18:56:09.801025   43137 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0401 18:56:09.801036   43137 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0401 18:56:09.801048   43137 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0401 18:56:09.801057   43137 command_runner.go:130] > drop_infra_ctr = false
	I0401 18:56:09.801070   43137 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0401 18:56:09.801081   43137 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0401 18:56:09.801095   43137 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0401 18:56:09.801105   43137 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0401 18:56:09.801115   43137 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0401 18:56:09.801127   43137 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0401 18:56:09.801146   43137 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0401 18:56:09.801157   43137 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0401 18:56:09.801167   43137 command_runner.go:130] > # shared_cpuset = ""
	I0401 18:56:09.801176   43137 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0401 18:56:09.801187   43137 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0401 18:56:09.801193   43137 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0401 18:56:09.801204   43137 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0401 18:56:09.801213   43137 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0401 18:56:09.801221   43137 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0401 18:56:09.801237   43137 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0401 18:56:09.801247   43137 command_runner.go:130] > # enable_criu_support = false
	I0401 18:56:09.801255   43137 command_runner.go:130] > # Enable/disable the generation of the container,
	I0401 18:56:09.801267   43137 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0401 18:56:09.801277   43137 command_runner.go:130] > # enable_pod_events = false
	I0401 18:56:09.801285   43137 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0401 18:56:09.801297   43137 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0401 18:56:09.801308   43137 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0401 18:56:09.801316   43137 command_runner.go:130] > # default_runtime = "runc"
	I0401 18:56:09.801326   43137 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0401 18:56:09.801338   43137 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0401 18:56:09.801354   43137 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0401 18:56:09.801365   43137 command_runner.go:130] > # creation as a file is not desired either.
	I0401 18:56:09.801376   43137 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0401 18:56:09.801388   43137 command_runner.go:130] > # the hostname is being managed dynamically.
	I0401 18:56:09.801400   43137 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0401 18:56:09.801408   43137 command_runner.go:130] > # ]
	I0401 18:56:09.801420   43137 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0401 18:56:09.801432   43137 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0401 18:56:09.801443   43137 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0401 18:56:09.801454   43137 command_runner.go:130] > # Each entry in the table should follow the format:
	I0401 18:56:09.801458   43137 command_runner.go:130] > #
	I0401 18:56:09.801467   43137 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0401 18:56:09.801474   43137 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0401 18:56:09.801520   43137 command_runner.go:130] > # runtime_type = "oci"
	I0401 18:56:09.801534   43137 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0401 18:56:09.801539   43137 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0401 18:56:09.801548   43137 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0401 18:56:09.801553   43137 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0401 18:56:09.801559   43137 command_runner.go:130] > # monitor_env = []
	I0401 18:56:09.801564   43137 command_runner.go:130] > # privileged_without_host_devices = false
	I0401 18:56:09.801570   43137 command_runner.go:130] > # allowed_annotations = []
	I0401 18:56:09.801575   43137 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0401 18:56:09.801581   43137 command_runner.go:130] > # Where:
	I0401 18:56:09.801586   43137 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0401 18:56:09.801594   43137 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0401 18:56:09.801603   43137 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0401 18:56:09.801612   43137 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0401 18:56:09.801624   43137 command_runner.go:130] > #   in $PATH.
	I0401 18:56:09.801636   43137 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0401 18:56:09.801660   43137 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0401 18:56:09.801673   43137 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0401 18:56:09.801682   43137 command_runner.go:130] > #   state.
	I0401 18:56:09.801694   43137 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0401 18:56:09.801705   43137 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0401 18:56:09.801718   43137 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0401 18:56:09.801729   43137 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0401 18:56:09.801741   43137 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0401 18:56:09.801754   43137 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0401 18:56:09.801764   43137 command_runner.go:130] > #   The currently recognized values are:
	I0401 18:56:09.801777   43137 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0401 18:56:09.801791   43137 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0401 18:56:09.801800   43137 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0401 18:56:09.801808   43137 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0401 18:56:09.801818   43137 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0401 18:56:09.801827   43137 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0401 18:56:09.801835   43137 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0401 18:56:09.801844   43137 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0401 18:56:09.801850   43137 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0401 18:56:09.801862   43137 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0401 18:56:09.801868   43137 command_runner.go:130] > #   deprecated option "conmon".
	I0401 18:56:09.801875   43137 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0401 18:56:09.801882   43137 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0401 18:56:09.801894   43137 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0401 18:56:09.801902   43137 command_runner.go:130] > #   should be moved to the container's cgroup
	I0401 18:56:09.801911   43137 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0401 18:56:09.801918   43137 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0401 18:56:09.801924   43137 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0401 18:56:09.801932   43137 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0401 18:56:09.801937   43137 command_runner.go:130] > #
	I0401 18:56:09.801941   43137 command_runner.go:130] > # Using the seccomp notifier feature:
	I0401 18:56:09.801949   43137 command_runner.go:130] > #
	I0401 18:56:09.801955   43137 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0401 18:56:09.801963   43137 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0401 18:56:09.801966   43137 command_runner.go:130] > #
	I0401 18:56:09.801975   43137 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0401 18:56:09.801983   43137 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0401 18:56:09.801986   43137 command_runner.go:130] > #
	I0401 18:56:09.801995   43137 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0401 18:56:09.802001   43137 command_runner.go:130] > # feature.
	I0401 18:56:09.802004   43137 command_runner.go:130] > #
	I0401 18:56:09.802012   43137 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0401 18:56:09.802020   43137 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0401 18:56:09.802029   43137 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0401 18:56:09.802037   43137 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0401 18:56:09.802043   43137 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0401 18:56:09.802048   43137 command_runner.go:130] > #
	I0401 18:56:09.802053   43137 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0401 18:56:09.802059   43137 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0401 18:56:09.802064   43137 command_runner.go:130] > #
	I0401 18:56:09.802070   43137 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0401 18:56:09.802083   43137 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0401 18:56:09.802088   43137 command_runner.go:130] > #
	I0401 18:56:09.802096   43137 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0401 18:56:09.802105   43137 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0401 18:56:09.802110   43137 command_runner.go:130] > # limitation.
	I0401 18:56:09.802116   43137 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0401 18:56:09.802122   43137 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0401 18:56:09.802126   43137 command_runner.go:130] > runtime_type = "oci"
	I0401 18:56:09.802137   43137 command_runner.go:130] > runtime_root = "/run/runc"
	I0401 18:56:09.802141   43137 command_runner.go:130] > runtime_config_path = ""
	I0401 18:56:09.802149   43137 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0401 18:56:09.802153   43137 command_runner.go:130] > monitor_cgroup = "pod"
	I0401 18:56:09.802159   43137 command_runner.go:130] > monitor_exec_cgroup = ""
	I0401 18:56:09.802163   43137 command_runner.go:130] > monitor_env = [
	I0401 18:56:09.802171   43137 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0401 18:56:09.802176   43137 command_runner.go:130] > ]
	I0401 18:56:09.802181   43137 command_runner.go:130] > privileged_without_host_devices = false
	I0401 18:56:09.802189   43137 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0401 18:56:09.802196   43137 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0401 18:56:09.802202   43137 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0401 18:56:09.802211   43137 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0401 18:56:09.802223   43137 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0401 18:56:09.802230   43137 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0401 18:56:09.802241   43137 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0401 18:56:09.802250   43137 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0401 18:56:09.802258   43137 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0401 18:56:09.802265   43137 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0401 18:56:09.802271   43137 command_runner.go:130] > # Example:
	I0401 18:56:09.802276   43137 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0401 18:56:09.802283   43137 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0401 18:56:09.802288   43137 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0401 18:56:09.802295   43137 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0401 18:56:09.802299   43137 command_runner.go:130] > # cpuset = 0
	I0401 18:56:09.802303   43137 command_runner.go:130] > # cpushares = "0-1"
	I0401 18:56:09.802306   43137 command_runner.go:130] > # Where:
	I0401 18:56:09.802311   43137 command_runner.go:130] > # The workload name is workload-type.
	I0401 18:56:09.802320   43137 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0401 18:56:09.802327   43137 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0401 18:56:09.802336   43137 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0401 18:56:09.802346   43137 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0401 18:56:09.802354   43137 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0401 18:56:09.802359   43137 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0401 18:56:09.802366   43137 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0401 18:56:09.802373   43137 command_runner.go:130] > # Default value is set to true
	I0401 18:56:09.802381   43137 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0401 18:56:09.802389   43137 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0401 18:56:09.802394   43137 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0401 18:56:09.802401   43137 command_runner.go:130] > # Default value is set to 'false'
	I0401 18:56:09.802405   43137 command_runner.go:130] > # disable_hostport_mapping = false
	I0401 18:56:09.802411   43137 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0401 18:56:09.802414   43137 command_runner.go:130] > #
	I0401 18:56:09.802420   43137 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0401 18:56:09.802425   43137 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0401 18:56:09.802431   43137 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0401 18:56:09.802436   43137 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0401 18:56:09.802443   43137 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0401 18:56:09.802446   43137 command_runner.go:130] > [crio.image]
	I0401 18:56:09.802451   43137 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0401 18:56:09.802455   43137 command_runner.go:130] > # default_transport = "docker://"
	I0401 18:56:09.802460   43137 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0401 18:56:09.802466   43137 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0401 18:56:09.802470   43137 command_runner.go:130] > # global_auth_file = ""
	I0401 18:56:09.802474   43137 command_runner.go:130] > # The image used to instantiate infra containers.
	I0401 18:56:09.802479   43137 command_runner.go:130] > # This option supports live configuration reload.
	I0401 18:56:09.802483   43137 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0401 18:56:09.802488   43137 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0401 18:56:09.802494   43137 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0401 18:56:09.802498   43137 command_runner.go:130] > # This option supports live configuration reload.
	I0401 18:56:09.802502   43137 command_runner.go:130] > # pause_image_auth_file = ""
	I0401 18:56:09.802507   43137 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0401 18:56:09.802512   43137 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0401 18:56:09.802518   43137 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0401 18:56:09.802523   43137 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0401 18:56:09.802527   43137 command_runner.go:130] > # pause_command = "/pause"
	I0401 18:56:09.802532   43137 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0401 18:56:09.802537   43137 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0401 18:56:09.802543   43137 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0401 18:56:09.802549   43137 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0401 18:56:09.802554   43137 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0401 18:56:09.802560   43137 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0401 18:56:09.802569   43137 command_runner.go:130] > # pinned_images = [
	I0401 18:56:09.802572   43137 command_runner.go:130] > # ]
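	A hedged sketch of pinning images via a CRI-O drop-in, using the exact and glob patterns the comments above describe; the file name and the second (glob) entry are made up, and /etc/crio/crio.conf.d is assumed to be the drop-in directory in use:

	    # Pin the pause image exactly, and everything under a prefix via a glob.
	    sudo tee /etc/crio/crio.conf.d/10-pinned-images.conf <<'EOF'
	    [crio.image]
	    pinned_images = [
	      "registry.k8s.io/pause:3.9",
	      "registry.k8s.io/kube-*",
	    ]
	    EOF
	    # Restart (or reload) CRI-O so the drop-in is picked up.
	    sudo systemctl restart crio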
	I0401 18:56:09.802578   43137 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0401 18:56:09.802583   43137 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0401 18:56:09.802589   43137 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0401 18:56:09.802597   43137 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0401 18:56:09.802603   43137 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0401 18:56:09.802610   43137 command_runner.go:130] > # signature_policy = ""
	I0401 18:56:09.802616   43137 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0401 18:56:09.802629   43137 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0401 18:56:09.802641   43137 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0401 18:56:09.802656   43137 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0401 18:56:09.802668   43137 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0401 18:56:09.802679   43137 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0401 18:56:09.802691   43137 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0401 18:56:09.802703   43137 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0401 18:56:09.802712   43137 command_runner.go:130] > # changing them here.
	I0401 18:56:09.802716   43137 command_runner.go:130] > # insecure_registries = [
	I0401 18:56:09.802722   43137 command_runner.go:130] > # ]
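	As the comments recommend, insecure registries are better declared in the system-wide containers-registries.conf(5) than in crio.conf. A small sketch using its drop-in directory; the registry address is made up:

	    # Mark a hypothetical private registry as insecure for all container tools.
	    sudo tee /etc/containers/registries.conf.d/50-local-registry.conf <<'EOF'
	    [[registry]]
	    location = "registry.local:5000"
	    insecure = true
	    EOF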
	I0401 18:56:09.802728   43137 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0401 18:56:09.802736   43137 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0401 18:56:09.802743   43137 command_runner.go:130] > # image_volumes = "mkdir"
	I0401 18:56:09.802748   43137 command_runner.go:130] > # Temporary directory to use for storing big files
	I0401 18:56:09.802754   43137 command_runner.go:130] > # big_files_temporary_dir = ""
	I0401 18:56:09.802760   43137 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0401 18:56:09.802766   43137 command_runner.go:130] > # CNI plugins.
	I0401 18:56:09.802770   43137 command_runner.go:130] > [crio.network]
	I0401 18:56:09.802778   43137 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0401 18:56:09.802784   43137 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0401 18:56:09.802795   43137 command_runner.go:130] > # cni_default_network = ""
	I0401 18:56:09.802804   43137 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0401 18:56:09.802808   43137 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0401 18:56:09.802815   43137 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0401 18:56:09.802819   43137 command_runner.go:130] > # plugin_dirs = [
	I0401 18:56:09.802825   43137 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0401 18:56:09.802828   43137 command_runner.go:130] > # ]
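	Given the defaults above (network_dir /etc/cni/net.d/, plugin_dirs /opt/cni/bin/), the CNI setup CRI-O would pick up can be spot-checked directly on the node; the *.conflist glob is an assumption, since some setups ship plain *.conf files instead:

	    # CRI-O selects the first network found in network_dir.
	    ls /etc/cni/net.d/
	    sudo cat /etc/cni/net.d/*.conflist
	    # Plugin binaries are expected under the configured plugin_dirs.
	    ls /opt/cni/bin/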
	I0401 18:56:09.802840   43137 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0401 18:56:09.802847   43137 command_runner.go:130] > [crio.metrics]
	I0401 18:56:09.802851   43137 command_runner.go:130] > # Globally enable or disable metrics support.
	I0401 18:56:09.802859   43137 command_runner.go:130] > enable_metrics = true
	I0401 18:56:09.802865   43137 command_runner.go:130] > # Specify enabled metrics collectors.
	I0401 18:56:09.802870   43137 command_runner.go:130] > # Per default all metrics are enabled.
	I0401 18:56:09.802878   43137 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0401 18:56:09.802886   43137 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0401 18:56:09.802892   43137 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0401 18:56:09.802898   43137 command_runner.go:130] > # metrics_collectors = [
	I0401 18:56:09.802902   43137 command_runner.go:130] > # 	"operations",
	I0401 18:56:09.802910   43137 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0401 18:56:09.802917   43137 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0401 18:56:09.802921   43137 command_runner.go:130] > # 	"operations_errors",
	I0401 18:56:09.802927   43137 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0401 18:56:09.802931   43137 command_runner.go:130] > # 	"image_pulls_by_name",
	I0401 18:56:09.802937   43137 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0401 18:56:09.802943   43137 command_runner.go:130] > # 	"image_pulls_failures",
	I0401 18:56:09.802949   43137 command_runner.go:130] > # 	"image_pulls_successes",
	I0401 18:56:09.802953   43137 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0401 18:56:09.802960   43137 command_runner.go:130] > # 	"image_layer_reuse",
	I0401 18:56:09.802964   43137 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0401 18:56:09.802970   43137 command_runner.go:130] > # 	"containers_oom_total",
	I0401 18:56:09.802974   43137 command_runner.go:130] > # 	"containers_oom",
	I0401 18:56:09.802978   43137 command_runner.go:130] > # 	"processes_defunct",
	I0401 18:56:09.802984   43137 command_runner.go:130] > # 	"operations_total",
	I0401 18:56:09.802988   43137 command_runner.go:130] > # 	"operations_latency_seconds",
	I0401 18:56:09.802994   43137 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0401 18:56:09.802999   43137 command_runner.go:130] > # 	"operations_errors_total",
	I0401 18:56:09.803005   43137 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0401 18:56:09.803010   43137 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0401 18:56:09.803014   43137 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0401 18:56:09.803021   43137 command_runner.go:130] > # 	"image_pulls_success_total",
	I0401 18:56:09.803025   43137 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0401 18:56:09.803032   43137 command_runner.go:130] > # 	"containers_oom_count_total",
	I0401 18:56:09.803037   43137 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0401 18:56:09.803049   43137 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0401 18:56:09.803055   43137 command_runner.go:130] > # ]
	I0401 18:56:09.803060   43137 command_runner.go:130] > # The port on which the metrics server will listen.
	I0401 18:56:09.803064   43137 command_runner.go:130] > # metrics_port = 9090
	I0401 18:56:09.803072   43137 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0401 18:56:09.803077   43137 command_runner.go:130] > # metrics_socket = ""
	I0401 18:56:09.803082   43137 command_runner.go:130] > # The certificate for the secure metrics server.
	I0401 18:56:09.803090   43137 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0401 18:56:09.803099   43137 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0401 18:56:09.803106   43137 command_runner.go:130] > # certificate on any modification event.
	I0401 18:56:09.803110   43137 command_runner.go:130] > # metrics_cert = ""
	I0401 18:56:09.803116   43137 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0401 18:56:09.803123   43137 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0401 18:56:09.803127   43137 command_runner.go:130] > # metrics_key = ""
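	Since enable_metrics is set to true and metrics_port is left at its 9090 default above, the Prometheus endpoint can be spot-checked from the node itself (assuming metrics_cert/metrics_key are unset, i.e. plain HTTP):

	    # Sample a few CRI-O metrics from the local metrics server.
	    curl -s http://127.0.0.1:9090/metrics | grep crio | head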
	I0401 18:56:09.803135   43137 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0401 18:56:09.803141   43137 command_runner.go:130] > [crio.tracing]
	I0401 18:56:09.803146   43137 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0401 18:56:09.803150   43137 command_runner.go:130] > # enable_tracing = false
	I0401 18:56:09.803157   43137 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0401 18:56:09.803162   43137 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0401 18:56:09.803171   43137 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0401 18:56:09.803178   43137 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
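	A sketch of turning on the tracing options listed above via a drop-in, assuming an OTLP/gRPC collector is actually listening on the default endpoint; a rate of 1000000 samples per million means "always sample", per the comment:

	    sudo tee /etc/crio/crio.conf.d/20-tracing.conf <<'EOF'
	    [crio.tracing]
	    enable_tracing = true
	    tracing_endpoint = "0.0.0.0:4317"
	    tracing_sampling_rate_per_million = 1000000
	    EOF
	    sudo systemctl restart crio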
	I0401 18:56:09.803182   43137 command_runner.go:130] > # CRI-O NRI configuration.
	I0401 18:56:09.803187   43137 command_runner.go:130] > [crio.nri]
	I0401 18:56:09.803192   43137 command_runner.go:130] > # Globally enable or disable NRI.
	I0401 18:56:09.803199   43137 command_runner.go:130] > # enable_nri = false
	I0401 18:56:09.803205   43137 command_runner.go:130] > # NRI socket to listen on.
	I0401 18:56:09.803212   43137 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0401 18:56:09.803216   43137 command_runner.go:130] > # NRI plugin directory to use.
	I0401 18:56:09.803227   43137 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0401 18:56:09.803234   43137 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0401 18:56:09.803239   43137 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0401 18:56:09.803246   43137 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0401 18:56:09.803253   43137 command_runner.go:130] > # nri_disable_connections = false
	I0401 18:56:09.803258   43137 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0401 18:56:09.803265   43137 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0401 18:56:09.803274   43137 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0401 18:56:09.803282   43137 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0401 18:56:09.803287   43137 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0401 18:56:09.803293   43137 command_runner.go:130] > [crio.stats]
	I0401 18:56:09.803299   43137 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0401 18:56:09.803306   43137 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0401 18:56:09.803310   43137 command_runner.go:130] > # stats_collection_period = 0
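	The block above is the commented CRI-O configuration as echoed into the minikube log. To produce a comparable dump directly on the node, the crio binary can print the configuration it would run with (exact output format varies somewhat between CRI-O versions):

	    sudo crio config | head -n 40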
	I0401 18:56:09.803442   43137 cni.go:84] Creating CNI manager for ""
	I0401 18:56:09.803454   43137 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0401 18:56:09.803463   43137 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 18:56:09.803481   43137 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.161 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-853477 NodeName:multinode-853477 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 18:56:09.803621   43137 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-853477"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
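	The rendered kubeadm/kubelet/kube-proxy configuration above is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A hedged way to sanity-check such a file against the bundled binaries (kubeadm gained a `config validate` subcommand in recent releases, which the v1.29 series used here includes):

	    sudo /var/lib/minikube/binaries/v1.29.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new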
	
	I0401 18:56:09.803696   43137 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 18:56:09.815494   43137 command_runner.go:130] > kubeadm
	I0401 18:56:09.815510   43137 command_runner.go:130] > kubectl
	I0401 18:56:09.815514   43137 command_runner.go:130] > kubelet
	I0401 18:56:09.815532   43137 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 18:56:09.815580   43137 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 18:56:09.827098   43137 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0401 18:56:09.845512   43137 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 18:56:09.863536   43137 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0401 18:56:09.882379   43137 ssh_runner.go:195] Run: grep 192.168.39.161	control-plane.minikube.internal$ /etc/hosts
	I0401 18:56:09.886443   43137 command_runner.go:130] > 192.168.39.161	control-plane.minikube.internal
	I0401 18:56:09.886701   43137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:56:10.029311   43137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 18:56:10.048364   43137 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477 for IP: 192.168.39.161
	I0401 18:56:10.048384   43137 certs.go:194] generating shared ca certs ...
	I0401 18:56:10.048403   43137 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:56:10.048543   43137 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 18:56:10.048584   43137 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 18:56:10.048593   43137 certs.go:256] generating profile certs ...
	I0401 18:56:10.048690   43137 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477/client.key
	I0401 18:56:10.048746   43137 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477/apiserver.key.fc9b9454
	I0401 18:56:10.048778   43137 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477/proxy-client.key
	I0401 18:56:10.048788   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0401 18:56:10.048803   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0401 18:56:10.048815   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0401 18:56:10.048834   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0401 18:56:10.048852   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0401 18:56:10.048868   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0401 18:56:10.048881   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0401 18:56:10.048892   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0401 18:56:10.048935   43137 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 18:56:10.048963   43137 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 18:56:10.048973   43137 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 18:56:10.048997   43137 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 18:56:10.049025   43137 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 18:56:10.049045   43137 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 18:56:10.049085   43137 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 18:56:10.049109   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem -> /usr/share/ca-certificates/17751.pem
	I0401 18:56:10.049122   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> /usr/share/ca-certificates/177512.pem
	I0401 18:56:10.049134   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:56:10.049658   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 18:56:10.078349   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 18:56:10.104974   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 18:56:10.140833   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 18:56:10.173801   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 18:56:10.199484   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 18:56:10.225370   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 18:56:10.252393   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 18:56:10.278526   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 18:56:10.314120   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 18:56:10.352012   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 18:56:10.377969   43137 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 18:56:10.395047   43137 ssh_runner.go:195] Run: openssl version
	I0401 18:56:10.401245   43137 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0401 18:56:10.401449   43137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 18:56:10.412805   43137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:56:10.417375   43137 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:56:10.417527   43137 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:56:10.417570   43137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:56:10.423886   43137 command_runner.go:130] > b5213941
	I0401 18:56:10.424190   43137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 18:56:10.434283   43137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 18:56:10.445286   43137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 18:56:10.450072   43137 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 18:56:10.450125   43137 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 18:56:10.450161   43137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 18:56:10.456110   43137 command_runner.go:130] > 51391683
	I0401 18:56:10.456283   43137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 18:56:10.465671   43137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 18:56:10.476658   43137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 18:56:10.481397   43137 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 18:56:10.481419   43137 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 18:56:10.481451   43137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 18:56:10.487151   43137 command_runner.go:130] > 3ec20f2e
	I0401 18:56:10.487205   43137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
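	The test -L / ln -fs commands above recreate OpenSSL's subject-hash symlinks by hand (for example /etc/ssl/certs/b5213941.0 pointing at minikubeCA.pem, using the hash printed by `openssl x509 -hash`). On OpenSSL 1.1.0 and later, such as the 1.1.1w reported earlier, the same effect can be had for a whole directory:

	    sudo openssl rehash /etc/ssl/certs
	    ls -l /etc/ssl/certs | grep -i minikube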
	I0401 18:56:10.498693   43137 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 18:56:10.503432   43137 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 18:56:10.503446   43137 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0401 18:56:10.503451   43137 command_runner.go:130] > Device: 253,1	Inode: 7339526     Links: 1
	I0401 18:56:10.503458   43137 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0401 18:56:10.503464   43137 command_runner.go:130] > Access: 2024-04-01 18:49:59.744156811 +0000
	I0401 18:56:10.503468   43137 command_runner.go:130] > Modify: 2024-04-01 18:49:59.744156811 +0000
	I0401 18:56:10.503473   43137 command_runner.go:130] > Change: 2024-04-01 18:49:59.744156811 +0000
	I0401 18:56:10.503481   43137 command_runner.go:130] >  Birth: 2024-04-01 18:49:59.744156811 +0000
	I0401 18:56:10.503521   43137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 18:56:10.509209   43137 command_runner.go:130] > Certificate will not expire
	I0401 18:56:10.509401   43137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 18:56:10.515106   43137 command_runner.go:130] > Certificate will not expire
	I0401 18:56:10.515171   43137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 18:56:10.520994   43137 command_runner.go:130] > Certificate will not expire
	I0401 18:56:10.521047   43137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 18:56:10.526518   43137 command_runner.go:130] > Certificate will not expire
	I0401 18:56:10.526683   43137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 18:56:10.532413   43137 command_runner.go:130] > Certificate will not expire
	I0401 18:56:10.532615   43137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 18:56:10.538350   43137 command_runner.go:130] > Certificate will not expire
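	Each `openssl x509 -checkend 86400` above exits successfully, printing "Certificate will not expire", when the certificate is still valid 24 hours from now. A rough equivalent summary for the whole set, assuming the bundled kubeadm and minikube's certificate directory:

	    sudo /var/lib/minikube/binaries/v1.29.3/kubeadm certs check-expiration --cert-dir /var/lib/minikube/certs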
	I0401 18:56:10.538403   43137 kubeadm.go:391] StartCluster: {Name:multinode-853477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:multinode-853477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.115 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:56:10.538517   43137 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 18:56:10.538566   43137 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 18:56:10.579309   43137 command_runner.go:130] > f363d25764f108d74d1c4cbdede73e53197698e8cfc9cef20d968a540108693c
	I0401 18:56:10.579331   43137 command_runner.go:130] > 5a98e75b61219ad53b41f90e4b2b7d39ae5d12800f8785637118d47269176df5
	I0401 18:56:10.579337   43137 command_runner.go:130] > ceef8d6cd3cb9dd8e9f3d0597ce26adfd43b02b4adba15febb6c5a429b172af6
	I0401 18:56:10.579347   43137 command_runner.go:130] > f6ce7b69665bbc01d73b598a2d86525641c7d4fbe714ef3997a3688d286471c4
	I0401 18:56:10.579352   43137 command_runner.go:130] > eb60348bd91879fc1995b558036ae53948482d80d31ba95e51d89b06b08a34ef
	I0401 18:56:10.579358   43137 command_runner.go:130] > 2c0ce953a27c267af6dc36c244ee162929b89360da2de35e1bcc350e83cd008c
	I0401 18:56:10.579363   43137 command_runner.go:130] > a358f83537522b0aa3022d82ed82c21a975b8c4647196a4c3761ee917e86e184
	I0401 18:56:10.579369   43137 command_runner.go:130] > 4004daf6fb9e1f819bba0832635a01a785b9e1cbaa7ceefb622a2956cfe7dac8
	I0401 18:56:10.579388   43137 cri.go:89] found id: "f363d25764f108d74d1c4cbdede73e53197698e8cfc9cef20d968a540108693c"
	I0401 18:56:10.579403   43137 cri.go:89] found id: "5a98e75b61219ad53b41f90e4b2b7d39ae5d12800f8785637118d47269176df5"
	I0401 18:56:10.579406   43137 cri.go:89] found id: "ceef8d6cd3cb9dd8e9f3d0597ce26adfd43b02b4adba15febb6c5a429b172af6"
	I0401 18:56:10.579409   43137 cri.go:89] found id: "f6ce7b69665bbc01d73b598a2d86525641c7d4fbe714ef3997a3688d286471c4"
	I0401 18:56:10.579412   43137 cri.go:89] found id: "eb60348bd91879fc1995b558036ae53948482d80d31ba95e51d89b06b08a34ef"
	I0401 18:56:10.579416   43137 cri.go:89] found id: "2c0ce953a27c267af6dc36c244ee162929b89360da2de35e1bcc350e83cd008c"
	I0401 18:56:10.579418   43137 cri.go:89] found id: "a358f83537522b0aa3022d82ed82c21a975b8c4647196a4c3761ee917e86e184"
	I0401 18:56:10.579421   43137 cri.go:89] found id: "4004daf6fb9e1f819bba0832635a01a785b9e1cbaa7ceefb622a2956cfe7dac8"
	I0401 18:56:10.579424   43137 cri.go:89] found id: ""
	I0401 18:56:10.579464   43137 ssh_runner.go:195] Run: sudo runc list -f json
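	The container IDs listed above come from the `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` invocation a few lines earlier; any of them can be examined further on the node, for example the first one:

	    sudo crictl inspect f363d25764f108d74d1c4cbdede73e53197698e8cfc9cef20d968a540108693c | head -n 20
	    sudo crictl logs f363d25764f108d74d1c4cbdede73e53197698e8cfc9cef20d968a540108693c | tail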
	
	
	==> CRI-O <==
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.013349699Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711997859013322679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd6c0257-4d2c-4f0c-aa2f-80ec5e8228d3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.014198438Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a7bba31-6f13-401c-9dd0-08c83da4bd8d name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.014297166Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a7bba31-6f13-401c-9dd0-08c83da4bd8d name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.014800095Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f3c596e43d9e8d3b31cf53143f5cc478bffad2a4248cca14dc5cd4776f909e,PodSandboxId:9e5ac50e79a76696340adc94dc3ddf7189f2222f44961e793bfb7b7c2b6cad78,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711997811674671722,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-pdvlk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db1681a5-1807-454a-9b1f-90edc80f2243,},Annotations:map[string]string{io.kubernetes.container.hash: fb426c0a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ecfdc0dbda808c8c4ead8c7452d6524da5a13144aba9f938c64d8fa5c5ed1f0,PodSandboxId:2f4f34ebfbf9f62db6d01afc251a6ba79230c0adfa2b5f8c84ab2b82d1a39e6d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711997778129800477,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9rlkp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfd3904-101a-4734-abf3-8cb24d0a5e04,},Annotations:map[string]string{io.kubernetes.container.hash: 5f44814b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849e9f37f84373779afaa5cd86916e8b33f29beca08bd4e6763958559d31542d,PodSandboxId:5f2a65f14a8f49bac67b099fd12587d500026829604f3efbace2fa5dc9bc174d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711997778109957480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lxn6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a5c765-3e3b-408f-8e4a-c53083b879f3,},Annotations:map[string]string{io.kubernetes.container.hash: 8592d515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbeb52ccbffef630a4b0f60491151c9767b53a67fac75e4820327919e1082ade,PodSandboxId:2dac7b61a67ebfb5ab27367bd26e0c0d12346b18ad2d2b93b9715c4d87139ee0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711997778088197053,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a8401b-2fe6-4724-99e0-63a1b6cb4367,},A
nnotations:map[string]string{io.kubernetes.container.hash: e4c65c26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:598be87df6b76ed97503616e0de22130de187736a15f217c9492fd3bcbe3165f,PodSandboxId:7fbecc3bc99f6ddbe93c98f59dc5b1eaa7824b2c8da6919c8f2465ed3b60037f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711997777989901184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkvlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c447a9-e35f-4cf8-95db-abfbb425cab3,},Annotations:map[string]string{io.k
ubernetes.container.hash: 9ee8a2b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02dd78d1553972c1e494a9d605cec48c8b6c014d95916dc796c582d670024b66,PodSandboxId:c5c68f96d10469734e2ce62551fa212bc0624069060475a6bf5d71b64913d27d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711997773097623562,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9674072226eb1dc8bbe6dde036b55a,},Annotations:map[string]string{io.kubernetes.container.hash: 53b97f00,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db106b7e04aca816f3897bd76a858c1184d517e4cb4e5a76c9b39ecfe288833,PodSandboxId:9294efd42dca6896c1c403d936dea125bb679a7b5eaf55740f4c0ec44f36c524,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711997773088154555,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfdaf127945ce28382d8f90ac75c026,},Annotations:map[string]string{io.kubernetes.container.hash: f92c5150,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10cd91b3bcb1d75ffab7fd0f2f49522fbc9f9df61971c55fb1e6debfe05b20ae,PodSandboxId:ef69bd499d25ddd6ab0ef6465a620bfaa940a872b7f0d5a5c6d4a2f1cfca4445,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711997773102314345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a933531bbfe72a832eefa37f2dd17c,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d3e943c35850d286cf09b65639b8d61930d5765722ef41eea300de98f1c435b,PodSandboxId:edfc68dba135ffbdc317d12ba5552a9bbe7dca7a9f7d6e8148d4f6cf1391a2ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711997772968553144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe554da581d909271f7689d472dd2373,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53017c2864ba3ca6f57fbfe5944fd8f393050a0bb68c442eb4f27bf434c640ef,PodSandboxId:62f47c59bbfa39e1bc767709d6843ee547ff6fcd0bca672229b763606063b3df,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711997470537506807,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-pdvlk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db1681a5-1807-454a-9b1f-90edc80f2243,},Annotations:map[string]string{io.kubernetes.container.hash: fb426c0a,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f363d25764f108d74d1c4cbdede73e53197698e8cfc9cef20d968a540108693c,PodSandboxId:7bee69ef0da5ad552234d2defa9c6e8528220a57a16b3ff76f744f5f7614f96a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711997426138483793,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a8401b-2fe6-4724-99e0-63a1b6cb4367,},Annotations:map[string]string{io.kubernetes.container.hash: e4c65c26,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a98e75b61219ad53b41f90e4b2b7d39ae5d12800f8785637118d47269176df5,PodSandboxId:75744372fad26de7b6e1b6c839fc23b2ff308507c86fdca8567273156dbda995,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711997425615045842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lxn6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a5c765-3e3b-408f-8e4a-c53083b879f3,},Annotations:map[string]string{io.kubernetes.container.hash: 8592d515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceef8d6cd3cb9dd8e9f3d0597ce26adfd43b02b4adba15febb6c5a429b172af6,PodSandboxId:77c08d4d25f512e9dfdf6ef3f9d48537f063a828df94fe22c0ae7d0dddb8cae0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711997423893313774,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9rlkp,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 1dfd3904-101a-4734-abf3-8cb24d0a5e04,},Annotations:map[string]string{io.kubernetes.container.hash: 5f44814b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6ce7b69665bbc01d73b598a2d86525641c7d4fbe714ef3997a3688d286471c4,PodSandboxId:128a8579b95a9768f81397b4d324b60498232e0267943a31e4c4c96dbefd2fd8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711997423695561198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkvlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c447a9-e35f-4cf8-95
db-abfbb425cab3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ee8a2b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c0ce953a27c267af6dc36c244ee162929b89360da2de35e1bcc350e83cd008c,PodSandboxId:90fafaa15e6fa316c3a0a0cee56c57f10126e6ee997df72f659139377156b3c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711997404250433158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9674072226eb1dc8bbe6dde036b55a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 53b97f00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb60348bd91879fc1995b558036ae53948482d80d31ba95e51d89b06b08a34ef,PodSandboxId:9c287c311be2a1629eaaa8319f309fec795e952b03175efcd304daa90298f755,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711997404275109580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe554da581d909271f7689d472dd2373,},Annotations:map
[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a358f83537522b0aa3022d82ed82c21a975b8c4647196a4c3761ee917e86e184,PodSandboxId:bf98c87f2f39aa7c3585e2ac2500f198edf1b142a9efb4ed874f5a29bfdcf084,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711997404188637125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfdaf127945ce28382d8f90ac75c026,},Annotations:map[string]string{
io.kubernetes.container.hash: f92c5150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4004daf6fb9e1f819bba0832635a01a785b9e1cbaa7ceefb622a2956cfe7dac8,PodSandboxId:559da33691144c5132081fb906e2cdeb7f31734801c64dd7ed7d29b9dd0145c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711997404136502948,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a933531bbfe72a832eefa37f2dd17c,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a7bba31-6f13-401c-9dd0-08c83da4bd8d name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.064665147Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=70f58db0-8f49-4251-98ca-e6d7fad3524d name=/runtime.v1.RuntimeService/Version
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.064813927Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=70f58db0-8f49-4251-98ca-e6d7fad3524d name=/runtime.v1.RuntimeService/Version
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.066261550Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=560633b9-ebb0-4251-8e9b-81c7630774b3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.066799260Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711997859066711940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=560633b9-ebb0-4251-8e9b-81c7630774b3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.067452335Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2049de8-5756-4ed9-a87d-ad1928ac9667 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.067511457Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2049de8-5756-4ed9-a87d-ad1928ac9667 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.067919204Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f3c596e43d9e8d3b31cf53143f5cc478bffad2a4248cca14dc5cd4776f909e,PodSandboxId:9e5ac50e79a76696340adc94dc3ddf7189f2222f44961e793bfb7b7c2b6cad78,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711997811674671722,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-pdvlk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db1681a5-1807-454a-9b1f-90edc80f2243,},Annotations:map[string]string{io.kubernetes.container.hash: fb426c0a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ecfdc0dbda808c8c4ead8c7452d6524da5a13144aba9f938c64d8fa5c5ed1f0,PodSandboxId:2f4f34ebfbf9f62db6d01afc251a6ba79230c0adfa2b5f8c84ab2b82d1a39e6d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711997778129800477,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9rlkp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfd3904-101a-4734-abf3-8cb24d0a5e04,},Annotations:map[string]string{io.kubernetes.container.hash: 5f44814b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849e9f37f84373779afaa5cd86916e8b33f29beca08bd4e6763958559d31542d,PodSandboxId:5f2a65f14a8f49bac67b099fd12587d500026829604f3efbace2fa5dc9bc174d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711997778109957480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lxn6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a5c765-3e3b-408f-8e4a-c53083b879f3,},Annotations:map[string]string{io.kubernetes.container.hash: 8592d515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbeb52ccbffef630a4b0f60491151c9767b53a67fac75e4820327919e1082ade,PodSandboxId:2dac7b61a67ebfb5ab27367bd26e0c0d12346b18ad2d2b93b9715c4d87139ee0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711997778088197053,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a8401b-2fe6-4724-99e0-63a1b6cb4367,},A
nnotations:map[string]string{io.kubernetes.container.hash: e4c65c26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:598be87df6b76ed97503616e0de22130de187736a15f217c9492fd3bcbe3165f,PodSandboxId:7fbecc3bc99f6ddbe93c98f59dc5b1eaa7824b2c8da6919c8f2465ed3b60037f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711997777989901184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkvlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c447a9-e35f-4cf8-95db-abfbb425cab3,},Annotations:map[string]string{io.k
ubernetes.container.hash: 9ee8a2b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02dd78d1553972c1e494a9d605cec48c8b6c014d95916dc796c582d670024b66,PodSandboxId:c5c68f96d10469734e2ce62551fa212bc0624069060475a6bf5d71b64913d27d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711997773097623562,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9674072226eb1dc8bbe6dde036b55a,},Annotations:map[string]string{io.kubernetes.container.hash: 53b97f00,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db106b7e04aca816f3897bd76a858c1184d517e4cb4e5a76c9b39ecfe288833,PodSandboxId:9294efd42dca6896c1c403d936dea125bb679a7b5eaf55740f4c0ec44f36c524,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711997773088154555,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfdaf127945ce28382d8f90ac75c026,},Annotations:map[string]string{io.kubernetes.container.hash: f92c5150,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10cd91b3bcb1d75ffab7fd0f2f49522fbc9f9df61971c55fb1e6debfe05b20ae,PodSandboxId:ef69bd499d25ddd6ab0ef6465a620bfaa940a872b7f0d5a5c6d4a2f1cfca4445,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711997773102314345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a933531bbfe72a832eefa37f2dd17c,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d3e943c35850d286cf09b65639b8d61930d5765722ef41eea300de98f1c435b,PodSandboxId:edfc68dba135ffbdc317d12ba5552a9bbe7dca7a9f7d6e8148d4f6cf1391a2ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711997772968553144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe554da581d909271f7689d472dd2373,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53017c2864ba3ca6f57fbfe5944fd8f393050a0bb68c442eb4f27bf434c640ef,PodSandboxId:62f47c59bbfa39e1bc767709d6843ee547ff6fcd0bca672229b763606063b3df,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711997470537506807,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-pdvlk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db1681a5-1807-454a-9b1f-90edc80f2243,},Annotations:map[string]string{io.kubernetes.container.hash: fb426c0a,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f363d25764f108d74d1c4cbdede73e53197698e8cfc9cef20d968a540108693c,PodSandboxId:7bee69ef0da5ad552234d2defa9c6e8528220a57a16b3ff76f744f5f7614f96a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711997426138483793,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a8401b-2fe6-4724-99e0-63a1b6cb4367,},Annotations:map[string]string{io.kubernetes.container.hash: e4c65c26,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a98e75b61219ad53b41f90e4b2b7d39ae5d12800f8785637118d47269176df5,PodSandboxId:75744372fad26de7b6e1b6c839fc23b2ff308507c86fdca8567273156dbda995,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711997425615045842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lxn6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a5c765-3e3b-408f-8e4a-c53083b879f3,},Annotations:map[string]string{io.kubernetes.container.hash: 8592d515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceef8d6cd3cb9dd8e9f3d0597ce26adfd43b02b4adba15febb6c5a429b172af6,PodSandboxId:77c08d4d25f512e9dfdf6ef3f9d48537f063a828df94fe22c0ae7d0dddb8cae0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711997423893313774,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9rlkp,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 1dfd3904-101a-4734-abf3-8cb24d0a5e04,},Annotations:map[string]string{io.kubernetes.container.hash: 5f44814b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6ce7b69665bbc01d73b598a2d86525641c7d4fbe714ef3997a3688d286471c4,PodSandboxId:128a8579b95a9768f81397b4d324b60498232e0267943a31e4c4c96dbefd2fd8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711997423695561198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkvlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c447a9-e35f-4cf8-95
db-abfbb425cab3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ee8a2b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c0ce953a27c267af6dc36c244ee162929b89360da2de35e1bcc350e83cd008c,PodSandboxId:90fafaa15e6fa316c3a0a0cee56c57f10126e6ee997df72f659139377156b3c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711997404250433158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9674072226eb1dc8bbe6dde036b55a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 53b97f00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb60348bd91879fc1995b558036ae53948482d80d31ba95e51d89b06b08a34ef,PodSandboxId:9c287c311be2a1629eaaa8319f309fec795e952b03175efcd304daa90298f755,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711997404275109580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe554da581d909271f7689d472dd2373,},Annotations:map
[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a358f83537522b0aa3022d82ed82c21a975b8c4647196a4c3761ee917e86e184,PodSandboxId:bf98c87f2f39aa7c3585e2ac2500f198edf1b142a9efb4ed874f5a29bfdcf084,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711997404188637125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfdaf127945ce28382d8f90ac75c026,},Annotations:map[string]string{
io.kubernetes.container.hash: f92c5150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4004daf6fb9e1f819bba0832635a01a785b9e1cbaa7ceefb622a2956cfe7dac8,PodSandboxId:559da33691144c5132081fb906e2cdeb7f31734801c64dd7ed7d29b9dd0145c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711997404136502948,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a933531bbfe72a832eefa37f2dd17c,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d2049de8-5756-4ed9-a87d-ad1928ac9667 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.111961058Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98169def-eb6b-4bc0-91f7-2f36d2a15511 name=/runtime.v1.RuntimeService/Version
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.112036967Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98169def-eb6b-4bc0-91f7-2f36d2a15511 name=/runtime.v1.RuntimeService/Version
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.114147502Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cbad2ee0-92de-4250-b614-92695386bcc5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.114905499Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711997859114878749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cbad2ee0-92de-4250-b614-92695386bcc5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.115441935Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4cd8499-24f6-41e5-bce6-5f5510ed6337 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.115678359Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4cd8499-24f6-41e5-bce6-5f5510ed6337 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.116080745Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f3c596e43d9e8d3b31cf53143f5cc478bffad2a4248cca14dc5cd4776f909e,PodSandboxId:9e5ac50e79a76696340adc94dc3ddf7189f2222f44961e793bfb7b7c2b6cad78,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711997811674671722,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-pdvlk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db1681a5-1807-454a-9b1f-90edc80f2243,},Annotations:map[string]string{io.kubernetes.container.hash: fb426c0a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ecfdc0dbda808c8c4ead8c7452d6524da5a13144aba9f938c64d8fa5c5ed1f0,PodSandboxId:2f4f34ebfbf9f62db6d01afc251a6ba79230c0adfa2b5f8c84ab2b82d1a39e6d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711997778129800477,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9rlkp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfd3904-101a-4734-abf3-8cb24d0a5e04,},Annotations:map[string]string{io.kubernetes.container.hash: 5f44814b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849e9f37f84373779afaa5cd86916e8b33f29beca08bd4e6763958559d31542d,PodSandboxId:5f2a65f14a8f49bac67b099fd12587d500026829604f3efbace2fa5dc9bc174d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711997778109957480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lxn6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a5c765-3e3b-408f-8e4a-c53083b879f3,},Annotations:map[string]string{io.kubernetes.container.hash: 8592d515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbeb52ccbffef630a4b0f60491151c9767b53a67fac75e4820327919e1082ade,PodSandboxId:2dac7b61a67ebfb5ab27367bd26e0c0d12346b18ad2d2b93b9715c4d87139ee0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711997778088197053,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a8401b-2fe6-4724-99e0-63a1b6cb4367,},A
nnotations:map[string]string{io.kubernetes.container.hash: e4c65c26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:598be87df6b76ed97503616e0de22130de187736a15f217c9492fd3bcbe3165f,PodSandboxId:7fbecc3bc99f6ddbe93c98f59dc5b1eaa7824b2c8da6919c8f2465ed3b60037f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711997777989901184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkvlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c447a9-e35f-4cf8-95db-abfbb425cab3,},Annotations:map[string]string{io.k
ubernetes.container.hash: 9ee8a2b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02dd78d1553972c1e494a9d605cec48c8b6c014d95916dc796c582d670024b66,PodSandboxId:c5c68f96d10469734e2ce62551fa212bc0624069060475a6bf5d71b64913d27d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711997773097623562,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9674072226eb1dc8bbe6dde036b55a,},Annotations:map[string]string{io.kubernetes.container.hash: 53b97f00,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db106b7e04aca816f3897bd76a858c1184d517e4cb4e5a76c9b39ecfe288833,PodSandboxId:9294efd42dca6896c1c403d936dea125bb679a7b5eaf55740f4c0ec44f36c524,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711997773088154555,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfdaf127945ce28382d8f90ac75c026,},Annotations:map[string]string{io.kubernetes.container.hash: f92c5150,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10cd91b3bcb1d75ffab7fd0f2f49522fbc9f9df61971c55fb1e6debfe05b20ae,PodSandboxId:ef69bd499d25ddd6ab0ef6465a620bfaa940a872b7f0d5a5c6d4a2f1cfca4445,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711997773102314345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a933531bbfe72a832eefa37f2dd17c,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d3e943c35850d286cf09b65639b8d61930d5765722ef41eea300de98f1c435b,PodSandboxId:edfc68dba135ffbdc317d12ba5552a9bbe7dca7a9f7d6e8148d4f6cf1391a2ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711997772968553144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe554da581d909271f7689d472dd2373,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53017c2864ba3ca6f57fbfe5944fd8f393050a0bb68c442eb4f27bf434c640ef,PodSandboxId:62f47c59bbfa39e1bc767709d6843ee547ff6fcd0bca672229b763606063b3df,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711997470537506807,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-pdvlk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db1681a5-1807-454a-9b1f-90edc80f2243,},Annotations:map[string]string{io.kubernetes.container.hash: fb426c0a,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f363d25764f108d74d1c4cbdede73e53197698e8cfc9cef20d968a540108693c,PodSandboxId:7bee69ef0da5ad552234d2defa9c6e8528220a57a16b3ff76f744f5f7614f96a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711997426138483793,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a8401b-2fe6-4724-99e0-63a1b6cb4367,},Annotations:map[string]string{io.kubernetes.container.hash: e4c65c26,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a98e75b61219ad53b41f90e4b2b7d39ae5d12800f8785637118d47269176df5,PodSandboxId:75744372fad26de7b6e1b6c839fc23b2ff308507c86fdca8567273156dbda995,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711997425615045842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lxn6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a5c765-3e3b-408f-8e4a-c53083b879f3,},Annotations:map[string]string{io.kubernetes.container.hash: 8592d515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceef8d6cd3cb9dd8e9f3d0597ce26adfd43b02b4adba15febb6c5a429b172af6,PodSandboxId:77c08d4d25f512e9dfdf6ef3f9d48537f063a828df94fe22c0ae7d0dddb8cae0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711997423893313774,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9rlkp,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 1dfd3904-101a-4734-abf3-8cb24d0a5e04,},Annotations:map[string]string{io.kubernetes.container.hash: 5f44814b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6ce7b69665bbc01d73b598a2d86525641c7d4fbe714ef3997a3688d286471c4,PodSandboxId:128a8579b95a9768f81397b4d324b60498232e0267943a31e4c4c96dbefd2fd8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711997423695561198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkvlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c447a9-e35f-4cf8-95
db-abfbb425cab3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ee8a2b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c0ce953a27c267af6dc36c244ee162929b89360da2de35e1bcc350e83cd008c,PodSandboxId:90fafaa15e6fa316c3a0a0cee56c57f10126e6ee997df72f659139377156b3c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711997404250433158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9674072226eb1dc8bbe6dde036b55a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 53b97f00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb60348bd91879fc1995b558036ae53948482d80d31ba95e51d89b06b08a34ef,PodSandboxId:9c287c311be2a1629eaaa8319f309fec795e952b03175efcd304daa90298f755,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711997404275109580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe554da581d909271f7689d472dd2373,},Annotations:map
[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a358f83537522b0aa3022d82ed82c21a975b8c4647196a4c3761ee917e86e184,PodSandboxId:bf98c87f2f39aa7c3585e2ac2500f198edf1b142a9efb4ed874f5a29bfdcf084,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711997404188637125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfdaf127945ce28382d8f90ac75c026,},Annotations:map[string]string{
io.kubernetes.container.hash: f92c5150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4004daf6fb9e1f819bba0832635a01a785b9e1cbaa7ceefb622a2956cfe7dac8,PodSandboxId:559da33691144c5132081fb906e2cdeb7f31734801c64dd7ed7d29b9dd0145c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711997404136502948,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a933531bbfe72a832eefa37f2dd17c,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4cd8499-24f6-41e5-bce6-5f5510ed6337 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.165505953Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f23c96b0-b891-4994-b50f-2e2dbc4431aa name=/runtime.v1.RuntimeService/Version
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.165614874Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f23c96b0-b891-4994-b50f-2e2dbc4431aa name=/runtime.v1.RuntimeService/Version
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.167004428Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d350c0d-bf36-4808-b887-1fef679ea85a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.167393222Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711997859167370257,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d350c0d-bf36-4808-b887-1fef679ea85a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.168022824Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94cf375d-3733-4124-923b-2c3ca2bd5c51 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.168105219Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94cf375d-3733-4124-923b-2c3ca2bd5c51 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 18:57:39 multinode-853477 crio[2869]: time="2024-04-01 18:57:39.168658166Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f3c596e43d9e8d3b31cf53143f5cc478bffad2a4248cca14dc5cd4776f909e,PodSandboxId:9e5ac50e79a76696340adc94dc3ddf7189f2222f44961e793bfb7b7c2b6cad78,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711997811674671722,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-pdvlk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db1681a5-1807-454a-9b1f-90edc80f2243,},Annotations:map[string]string{io.kubernetes.container.hash: fb426c0a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ecfdc0dbda808c8c4ead8c7452d6524da5a13144aba9f938c64d8fa5c5ed1f0,PodSandboxId:2f4f34ebfbf9f62db6d01afc251a6ba79230c0adfa2b5f8c84ab2b82d1a39e6d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711997778129800477,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9rlkp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfd3904-101a-4734-abf3-8cb24d0a5e04,},Annotations:map[string]string{io.kubernetes.container.hash: 5f44814b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849e9f37f84373779afaa5cd86916e8b33f29beca08bd4e6763958559d31542d,PodSandboxId:5f2a65f14a8f49bac67b099fd12587d500026829604f3efbace2fa5dc9bc174d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711997778109957480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lxn6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a5c765-3e3b-408f-8e4a-c53083b879f3,},Annotations:map[string]string{io.kubernetes.container.hash: 8592d515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbeb52ccbffef630a4b0f60491151c9767b53a67fac75e4820327919e1082ade,PodSandboxId:2dac7b61a67ebfb5ab27367bd26e0c0d12346b18ad2d2b93b9715c4d87139ee0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711997778088197053,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a8401b-2fe6-4724-99e0-63a1b6cb4367,},A
nnotations:map[string]string{io.kubernetes.container.hash: e4c65c26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:598be87df6b76ed97503616e0de22130de187736a15f217c9492fd3bcbe3165f,PodSandboxId:7fbecc3bc99f6ddbe93c98f59dc5b1eaa7824b2c8da6919c8f2465ed3b60037f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711997777989901184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkvlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c447a9-e35f-4cf8-95db-abfbb425cab3,},Annotations:map[string]string{io.k
ubernetes.container.hash: 9ee8a2b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02dd78d1553972c1e494a9d605cec48c8b6c014d95916dc796c582d670024b66,PodSandboxId:c5c68f96d10469734e2ce62551fa212bc0624069060475a6bf5d71b64913d27d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711997773097623562,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9674072226eb1dc8bbe6dde036b55a,},Annotations:map[string]string{io.kubernetes.container.hash: 53b97f00,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db106b7e04aca816f3897bd76a858c1184d517e4cb4e5a76c9b39ecfe288833,PodSandboxId:9294efd42dca6896c1c403d936dea125bb679a7b5eaf55740f4c0ec44f36c524,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711997773088154555,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfdaf127945ce28382d8f90ac75c026,},Annotations:map[string]string{io.kubernetes.container.hash: f92c5150,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10cd91b3bcb1d75ffab7fd0f2f49522fbc9f9df61971c55fb1e6debfe05b20ae,PodSandboxId:ef69bd499d25ddd6ab0ef6465a620bfaa940a872b7f0d5a5c6d4a2f1cfca4445,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711997773102314345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a933531bbfe72a832eefa37f2dd17c,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d3e943c35850d286cf09b65639b8d61930d5765722ef41eea300de98f1c435b,PodSandboxId:edfc68dba135ffbdc317d12ba5552a9bbe7dca7a9f7d6e8148d4f6cf1391a2ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711997772968553144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe554da581d909271f7689d472dd2373,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53017c2864ba3ca6f57fbfe5944fd8f393050a0bb68c442eb4f27bf434c640ef,PodSandboxId:62f47c59bbfa39e1bc767709d6843ee547ff6fcd0bca672229b763606063b3df,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711997470537506807,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-pdvlk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db1681a5-1807-454a-9b1f-90edc80f2243,},Annotations:map[string]string{io.kubernetes.container.hash: fb426c0a,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f363d25764f108d74d1c4cbdede73e53197698e8cfc9cef20d968a540108693c,PodSandboxId:7bee69ef0da5ad552234d2defa9c6e8528220a57a16b3ff76f744f5f7614f96a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711997426138483793,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a8401b-2fe6-4724-99e0-63a1b6cb4367,},Annotations:map[string]string{io.kubernetes.container.hash: e4c65c26,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a98e75b61219ad53b41f90e4b2b7d39ae5d12800f8785637118d47269176df5,PodSandboxId:75744372fad26de7b6e1b6c839fc23b2ff308507c86fdca8567273156dbda995,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711997425615045842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lxn6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a5c765-3e3b-408f-8e4a-c53083b879f3,},Annotations:map[string]string{io.kubernetes.container.hash: 8592d515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceef8d6cd3cb9dd8e9f3d0597ce26adfd43b02b4adba15febb6c5a429b172af6,PodSandboxId:77c08d4d25f512e9dfdf6ef3f9d48537f063a828df94fe22c0ae7d0dddb8cae0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711997423893313774,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9rlkp,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 1dfd3904-101a-4734-abf3-8cb24d0a5e04,},Annotations:map[string]string{io.kubernetes.container.hash: 5f44814b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6ce7b69665bbc01d73b598a2d86525641c7d4fbe714ef3997a3688d286471c4,PodSandboxId:128a8579b95a9768f81397b4d324b60498232e0267943a31e4c4c96dbefd2fd8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711997423695561198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkvlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c447a9-e35f-4cf8-95
db-abfbb425cab3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ee8a2b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c0ce953a27c267af6dc36c244ee162929b89360da2de35e1bcc350e83cd008c,PodSandboxId:90fafaa15e6fa316c3a0a0cee56c57f10126e6ee997df72f659139377156b3c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711997404250433158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9674072226eb1dc8bbe6dde036b55a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 53b97f00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb60348bd91879fc1995b558036ae53948482d80d31ba95e51d89b06b08a34ef,PodSandboxId:9c287c311be2a1629eaaa8319f309fec795e952b03175efcd304daa90298f755,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711997404275109580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe554da581d909271f7689d472dd2373,},Annotations:map
[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a358f83537522b0aa3022d82ed82c21a975b8c4647196a4c3761ee917e86e184,PodSandboxId:bf98c87f2f39aa7c3585e2ac2500f198edf1b142a9efb4ed874f5a29bfdcf084,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711997404188637125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfdaf127945ce28382d8f90ac75c026,},Annotations:map[string]string{
io.kubernetes.container.hash: f92c5150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4004daf6fb9e1f819bba0832635a01a785b9e1cbaa7ceefb622a2956cfe7dac8,PodSandboxId:559da33691144c5132081fb906e2cdeb7f31734801c64dd7ed7d29b9dd0145c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711997404136502948,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a933531bbfe72a832eefa37f2dd17c,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94cf375d-3733-4124-923b-2c3ca2bd5c51 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	70f3c596e43d9       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      47 seconds ago       Running             busybox                   1                   9e5ac50e79a76       busybox-7fdf7869d9-pdvlk
	2ecfdc0dbda80       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   2f4f34ebfbf9f       kindnet-9rlkp
	849e9f37f8437       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   5f2a65f14a8f4       coredns-76f75df574-lxn6t
	bbeb52ccbffef       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   2dac7b61a67eb       storage-provisioner
	598be87df6b76       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      About a minute ago   Running             kube-proxy                1                   7fbecc3bc99f6       kube-proxy-jkvlp
	10cd91b3bcb1d       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      About a minute ago   Running             kube-scheduler            1                   ef69bd499d25d       kube-scheduler-multinode-853477
	02dd78d155397       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   c5c68f96d1046       etcd-multinode-853477
	9db106b7e04ac       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      About a minute ago   Running             kube-apiserver            1                   9294efd42dca6       kube-apiserver-multinode-853477
	2d3e943c35850       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      About a minute ago   Running             kube-controller-manager   1                   edfc68dba135f       kube-controller-manager-multinode-853477
	53017c2864ba3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   62f47c59bbfa3       busybox-7fdf7869d9-pdvlk
	f363d25764f10       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   7bee69ef0da5a       storage-provisioner
	5a98e75b61219       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   75744372fad26       coredns-76f75df574-lxn6t
	ceef8d6cd3cb9       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago        Exited              kindnet-cni               0                   77c08d4d25f51       kindnet-9rlkp
	f6ce7b69665bb       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      7 minutes ago        Exited              kube-proxy                0                   128a8579b95a9       kube-proxy-jkvlp
	eb60348bd9187       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      7 minutes ago        Exited              kube-controller-manager   0                   9c287c311be2a       kube-controller-manager-multinode-853477
	2c0ce953a27c2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   90fafaa15e6fa       etcd-multinode-853477
	a358f83537522       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      7 minutes ago        Exited              kube-apiserver            0                   bf98c87f2f39a       kube-apiserver-multinode-853477
	4004daf6fb9e1       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      7 minutes ago        Exited              kube-scheduler            0                   559da33691144       kube-scheduler-multinode-853477
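	
	The container status table above is the node's CRI-O view of every container, including the exited first-boot set. Assuming the multinode-853477 profile is still running, a roughly equivalent listing can be pulled by hand with crictl on the node (shown here as an illustration, not something the test itself runs):
	
	  out/minikube-linux-amd64 -p multinode-853477 ssh "sudo crictl ps -a"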
	
	
	==> coredns [5a98e75b61219ad53b41f90e4b2b7d39ae5d12800f8785637118d47269176df5] <==
	[INFO] 10.244.0.3:59681 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001883091s
	[INFO] 10.244.0.3:44099 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011196s
	[INFO] 10.244.0.3:48105 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150131s
	[INFO] 10.244.0.3:39069 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001203414s
	[INFO] 10.244.0.3:43290 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081841s
	[INFO] 10.244.0.3:53564 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132362s
	[INFO] 10.244.0.3:36682 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076503s
	[INFO] 10.244.1.2:35059 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134746s
	[INFO] 10.244.1.2:52173 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000129153s
	[INFO] 10.244.1.2:38193 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108149s
	[INFO] 10.244.1.2:35922 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079203s
	[INFO] 10.244.0.3:56492 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134167s
	[INFO] 10.244.0.3:38054 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006376s
	[INFO] 10.244.0.3:33632 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006817s
	[INFO] 10.244.0.3:39046 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000146214s
	[INFO] 10.244.1.2:56858 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000286319s
	[INFO] 10.244.1.2:57788 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000157646s
	[INFO] 10.244.1.2:36645 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118332s
	[INFO] 10.244.1.2:38920 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128982s
	[INFO] 10.244.0.3:46539 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170638s
	[INFO] 10.244.0.3:52506 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000104326s
	[INFO] 10.244.0.3:41906 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000043411s
	[INFO] 10.244.0.3:60508 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000067863s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
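	
	The kubernetes.default and host.minikube.internal queries in the coredns log above are the DNS checks the multinode test issues from the busybox pods. Assuming the busybox deployment is still present, a similar log entry can be produced by hand with:
	
	  kubectl --context multinode-853477 exec busybox-7fdf7869d9-pdvlk -- nslookup kubernetes.default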
	
	
	==> coredns [849e9f37f84373779afaa5cd86916e8b33f29beca08bd4e6763958559d31542d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47010 - 15101 "HINFO IN 2694119675735516823.2287192293218032795. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009421557s
	
	
	==> describe nodes <==
	Name:               multinode-853477
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-853477
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=multinode-853477
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_01T18_50_10_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 18:50:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-853477
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 18:57:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 18:56:16 +0000   Mon, 01 Apr 2024 18:50:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 18:56:16 +0000   Mon, 01 Apr 2024 18:50:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 18:56:16 +0000   Mon, 01 Apr 2024 18:50:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 18:56:16 +0000   Mon, 01 Apr 2024 18:50:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    multinode-853477
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 60f04aaa253948babc505cf6ed118280
	  System UUID:                60f04aaa-2539-48ba-bc50-5cf6ed118280
	  Boot ID:                    765a3751-9a73-4256-bcc9-9917d17d9943
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-pdvlk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 coredns-76f75df574-lxn6t                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m16s
	  kube-system                 etcd-multinode-853477                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m29s
	  kube-system                 kindnet-9rlkp                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m17s
	  kube-system                 kube-apiserver-multinode-853477             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 kube-controller-manager-multinode-853477    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 kube-proxy-jkvlp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 kube-scheduler-multinode-853477             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m15s                  kube-proxy       
	  Normal  Starting                 80s                    kube-proxy       
	  Normal  Starting                 7m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m36s (x8 over 7m36s)  kubelet          Node multinode-853477 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m36s (x8 over 7m36s)  kubelet          Node multinode-853477 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m36s (x7 over 7m36s)  kubelet          Node multinode-853477 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     7m29s                  kubelet          Node multinode-853477 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m29s                  kubelet          Node multinode-853477 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m29s                  kubelet          Node multinode-853477 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  7m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m29s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m17s                  node-controller  Node multinode-853477 event: Registered Node multinode-853477 in Controller
	  Normal  NodeReady                7m14s                  kubelet          Node multinode-853477 status is now: NodeReady
	  Normal  Starting                 87s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  87s (x8 over 87s)      kubelet          Node multinode-853477 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s (x8 over 87s)      kubelet          Node multinode-853477 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s (x7 over 87s)      kubelet          Node multinode-853477 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  87s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           70s                    node-controller  Node multinode-853477 event: Registered Node multinode-853477 in Controller
	
	
	Name:               multinode-853477-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-853477-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=multinode-853477
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_01T18_56_58_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 18:56:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-853477-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 18:57:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 18:57:27 +0000   Mon, 01 Apr 2024 18:56:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 18:57:27 +0000   Mon, 01 Apr 2024 18:56:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 18:57:27 +0000   Mon, 01 Apr 2024 18:56:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 18:57:27 +0000   Mon, 01 Apr 2024 18:57:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.239
	  Hostname:    multinode-853477-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 922f3b6f48264cea829c2c8bc673d4e2
	  System UUID:                922f3b6f-4826-4cea-829c-2c8bc673d4e2
	  Boot ID:                    666fb760-6b9f-4c6a-93a3-1da4b31c77b2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-zh9pz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kindnet-6wvv4               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m41s
	  kube-system                 kube-proxy-mthcv            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 37s                    kube-proxy  
	  Normal  Starting                 6m35s                  kube-proxy  
	  Normal  NodeHasNoDiskPressure    6m42s (x2 over 6m42s)  kubelet     Node multinode-853477-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m42s (x2 over 6m42s)  kubelet     Node multinode-853477-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m42s (x2 over 6m42s)  kubelet     Node multinode-853477-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  6m41s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m33s                  kubelet     Node multinode-853477-m02 status is now: NodeReady
	  Normal  Starting                 43s                    kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    43s (x2 over 43s)      kubelet     Node multinode-853477-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x2 over 43s)      kubelet     Node multinode-853477-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  43s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  43s (x2 over 43s)      kubelet     Node multinode-853477-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                35s                    kubelet     Node multinode-853477-m02 status is now: NodeReady
	
	
	Name:               multinode-853477-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-853477-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=multinode-853477
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_01T18_57_29_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 18:57:28 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-853477-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 18:57:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 18:57:35 +0000   Mon, 01 Apr 2024 18:57:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 18:57:35 +0000   Mon, 01 Apr 2024 18:57:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 18:57:35 +0000   Mon, 01 Apr 2024 18:57:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 18:57:35 +0000   Mon, 01 Apr 2024 18:57:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.115
	  Hostname:    multinode-853477-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 244a9317052e4299b88cb24abdd1dc8c
	  System UUID:                244a9317-052e-4299-b88c-b24abdd1dc8c
	  Boot ID:                    7ff43bef-a440-47c4-88b3-9e987ed464b5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-tjr6s       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m58s
	  kube-system                 kube-proxy-hc9f2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m52s                  kube-proxy  
	  Normal  Starting                 6s                     kube-proxy  
	  Normal  Starting                 5m13s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  5m58s (x2 over 5m58s)  kubelet     Node multinode-853477-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m58s (x2 over 5m58s)  kubelet     Node multinode-853477-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m58s (x2 over 5m58s)  kubelet     Node multinode-853477-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m58s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m49s                  kubelet     Node multinode-853477-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m18s (x2 over 5m18s)  kubelet     Node multinode-853477-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m18s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m18s (x2 over 5m18s)  kubelet     Node multinode-853477-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m18s (x2 over 5m18s)  kubelet     Node multinode-853477-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m10s                  kubelet     Node multinode-853477-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  11s (x2 over 11s)      kubelet     Node multinode-853477-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s (x2 over 11s)      kubelet     Node multinode-853477-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s (x2 over 11s)      kubelet     Node multinode-853477-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                     kubelet     Node multinode-853477-m03 status is now: NodeReady
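	
	The three node descriptions above are a snapshot of the cluster at capture time; they can be regenerated against the same cluster with:
	
	  kubectl --context multinode-853477 describe nodes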
	
	
	==> dmesg <==
	[  +0.069736] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.209711] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.119953] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.299284] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.950887] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +0.066302] kauditd_printk_skb: 130 callbacks suppressed
	[Apr 1 18:50] systemd-fstab-generator[955]: Ignoring "noauto" option for root device
	[  +0.059213] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.729437] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.079584] kauditd_printk_skb: 69 callbacks suppressed
	[ +13.083501] systemd-fstab-generator[1490]: Ignoring "noauto" option for root device
	[  +0.149801] kauditd_printk_skb: 21 callbacks suppressed
	[Apr 1 18:51] kauditd_printk_skb: 84 callbacks suppressed
	[Apr 1 18:56] systemd-fstab-generator[2787]: Ignoring "noauto" option for root device
	[  +0.166476] systemd-fstab-generator[2799]: Ignoring "noauto" option for root device
	[  +0.198405] systemd-fstab-generator[2813]: Ignoring "noauto" option for root device
	[  +0.153504] systemd-fstab-generator[2825]: Ignoring "noauto" option for root device
	[  +0.303313] systemd-fstab-generator[2853]: Ignoring "noauto" option for root device
	[  +1.595630] systemd-fstab-generator[2955]: Ignoring "noauto" option for root device
	[  +2.117616] systemd-fstab-generator[3083]: Ignoring "noauto" option for root device
	[  +0.856237] kauditd_printk_skb: 144 callbacks suppressed
	[  +5.026442] kauditd_printk_skb: 45 callbacks suppressed
	[ +11.988650] kauditd_printk_skb: 17 callbacks suppressed
	[  +1.368296] systemd-fstab-generator[3895]: Ignoring "noauto" option for root device
	[ +20.302467] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [02dd78d1553972c1e494a9d605cec48c8b6c014d95916dc796c582d670024b66] <==
	{"level":"info","ts":"2024-04-01T18:56:13.856356Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-01T18:56:13.858817Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-01T18:56:13.859567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 switched to configuration voters=(6473055670413760608)"}
	{"level":"info","ts":"2024-04-01T18:56:13.862886Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"641f62d988bc06c1","local-member-id":"59d4e9d626571860","added-peer-id":"59d4e9d626571860","added-peer-peer-urls":["https://192.168.39.161:2380"]}
	{"level":"info","ts":"2024-04-01T18:56:13.863396Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"641f62d988bc06c1","local-member-id":"59d4e9d626571860","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T18:56:13.864466Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-01T18:56:13.866119Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"59d4e9d626571860","initial-advertise-peer-urls":["https://192.168.39.161:2380"],"listen-peer-urls":["https://192.168.39.161:2380"],"advertise-client-urls":["https://192.168.39.161:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.161:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-01T18:56:13.866178Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-01T18:56:13.864496Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.161:2380"}
	{"level":"info","ts":"2024-04-01T18:56:13.866253Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.161:2380"}
	{"level":"info","ts":"2024-04-01T18:56:13.86584Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T18:56:15.226784Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-01T18:56:15.22685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-01T18:56:15.226894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 received MsgPreVoteResp from 59d4e9d626571860 at term 2"}
	{"level":"info","ts":"2024-04-01T18:56:15.226908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 became candidate at term 3"}
	{"level":"info","ts":"2024-04-01T18:56:15.226922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 received MsgVoteResp from 59d4e9d626571860 at term 3"}
	{"level":"info","ts":"2024-04-01T18:56:15.22693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 became leader at term 3"}
	{"level":"info","ts":"2024-04-01T18:56:15.226941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 59d4e9d626571860 elected leader 59d4e9d626571860 at term 3"}
	{"level":"info","ts":"2024-04-01T18:56:15.232032Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"59d4e9d626571860","local-member-attributes":"{Name:multinode-853477 ClientURLs:[https://192.168.39.161:2379]}","request-path":"/0/members/59d4e9d626571860/attributes","cluster-id":"641f62d988bc06c1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-01T18:56:15.232051Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T18:56:15.232071Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T18:56:15.233153Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-01T18:56:15.2332Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-01T18:56:15.237047Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.161:2379"}
	{"level":"info","ts":"2024-04-01T18:56:15.237133Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [2c0ce953a27c267af6dc36c244ee162929b89360da2de35e1bcc350e83cd008c] <==
	{"level":"info","ts":"2024-04-01T18:50:05.515041Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T18:50:05.515654Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"641f62d988bc06c1","local-member-id":"59d4e9d626571860","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T18:50:05.515879Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T18:50:05.515903Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T18:50:05.516002Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T18:50:05.51607Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-01T18:50:05.516145Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-01T18:50:05.517652Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-01T18:50:05.518861Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.161:2379"}
	{"level":"info","ts":"2024-04-01T18:50:58.175867Z","caller":"traceutil/trace.go:171","msg":"trace[1963012792] linearizableReadLoop","detail":"{readStateIndex:493; appliedIndex:492; }","duration":"174.434957ms","start":"2024-04-01T18:50:58.001395Z","end":"2024-04-01T18:50:58.17583Z","steps":["trace[1963012792] 'read index received'  (duration: 174.163562ms)","trace[1963012792] 'applied index is now lower than readState.Index'  (duration: 270.849µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-01T18:50:58.176038Z","caller":"traceutil/trace.go:171","msg":"trace[1605147475] transaction","detail":"{read_only:false; response_revision:477; number_of_response:1; }","duration":"174.756687ms","start":"2024-04-01T18:50:58.001269Z","end":"2024-04-01T18:50:58.176026Z","steps":["trace[1605147475] 'process raft request'  (duration: 174.323025ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T18:50:58.176224Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.790134ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/multinode-853477-m02.17c23c9e0e6f7542\" ","response":"range_response_count:1 size:741"}
	{"level":"info","ts":"2024-04-01T18:50:58.176296Z","caller":"traceutil/trace.go:171","msg":"trace[645952635] range","detail":"{range_begin:/registry/events/default/multinode-853477-m02.17c23c9e0e6f7542; range_end:; response_count:1; response_revision:477; }","duration":"174.908865ms","start":"2024-04-01T18:50:58.001374Z","end":"2024-04-01T18:50:58.176283Z","steps":["trace[645952635] 'agreement among raft nodes before linearized reading'  (duration: 174.764264ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T18:51:41.780665Z","caller":"traceutil/trace.go:171","msg":"trace[1837425999] transaction","detail":"{read_only:false; response_revision:599; number_of_response:1; }","duration":"227.376851ms","start":"2024-04-01T18:51:41.55326Z","end":"2024-04-01T18:51:41.780636Z","steps":["trace[1837425999] 'process raft request'  (duration: 131.305062ms)","trace[1837425999] 'compare'  (duration: 95.92531ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-01T18:51:41.781048Z","caller":"traceutil/trace.go:171","msg":"trace[368992295] transaction","detail":"{read_only:false; response_revision:600; number_of_response:1; }","duration":"186.72827ms","start":"2024-04-01T18:51:41.594271Z","end":"2024-04-01T18:51:41.780999Z","steps":["trace[368992295] 'process raft request'  (duration: 186.319705ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T18:54:36.054373Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-01T18:54:36.055638Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-853477","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.161:2380"],"advertise-client-urls":["https://192.168.39.161:2379"]}
	{"level":"warn","ts":"2024-04-01T18:54:36.055957Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-01T18:54:36.056058Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-01T18:54:36.137303Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.161:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-01T18:54:36.137618Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.161:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-01T18:54:36.138809Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"59d4e9d626571860","current-leader-member-id":"59d4e9d626571860"}
	{"level":"info","ts":"2024-04-01T18:54:36.141367Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.161:2380"}
	{"level":"info","ts":"2024-04-01T18:54:36.141579Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.161:2380"}
	{"level":"info","ts":"2024-04-01T18:54:36.141631Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-853477","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.161:2380"],"advertise-client-urls":["https://192.168.39.161:2379"]}
	
	
	==> kernel <==
	 18:57:39 up 8 min,  0 users,  load average: 0.25, 0.27, 0.14
	Linux multinode-853477 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2ecfdc0dbda808c8c4ead8c7452d6524da5a13144aba9f938c64d8fa5c5ed1f0] <==
	I0401 18:56:59.175355       1 main.go:250] Node multinode-853477-m03 has CIDR [10.244.3.0/24] 
	I0401 18:57:09.186813       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:57:09.186987       1 main.go:227] handling current node
	I0401 18:57:09.187016       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0401 18:57:09.187050       1 main.go:250] Node multinode-853477-m02 has CIDR [10.244.1.0/24] 
	I0401 18:57:09.187159       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0401 18:57:09.187178       1 main.go:250] Node multinode-853477-m03 has CIDR [10.244.3.0/24] 
	I0401 18:57:19.192231       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:57:19.192425       1 main.go:227] handling current node
	I0401 18:57:19.192476       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0401 18:57:19.192527       1 main.go:250] Node multinode-853477-m02 has CIDR [10.244.1.0/24] 
	I0401 18:57:19.193044       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0401 18:57:19.193133       1 main.go:250] Node multinode-853477-m03 has CIDR [10.244.3.0/24] 
	I0401 18:57:29.210929       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:57:29.211005       1 main.go:227] handling current node
	I0401 18:57:29.211027       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0401 18:57:29.211037       1 main.go:250] Node multinode-853477-m02 has CIDR [10.244.1.0/24] 
	I0401 18:57:29.211212       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0401 18:57:29.211246       1 main.go:250] Node multinode-853477-m03 has CIDR [10.244.2.0/24] 
	I0401 18:57:39.220889       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:57:39.220921       1 main.go:227] handling current node
	I0401 18:57:39.220930       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0401 18:57:39.220935       1 main.go:250] Node multinode-853477-m02 has CIDR [10.244.1.0/24] 
	I0401 18:57:39.221037       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0401 18:57:39.221042       1 main.go:250] Node multinode-853477-m03 has CIDR [10.244.2.0/24] 
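	
	kindnet's role here is to keep a route on each node for every remote pod CIDR, pointing at that node's IP (for example 10.244.1.0/24 via 192.168.39.239 for multinode-853477-m02). A possible spot check from the primary node, not something the test runs, is:
	
	  out/minikube-linux-amd64 -p multinode-853477 ssh "ip route show"
	
	which should list one 10.244.x.0/24 route per remote node.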
	
	
	==> kindnet [ceef8d6cd3cb9dd8e9f3d0597ce26adfd43b02b4adba15febb6c5a429b172af6] <==
	I0401 18:53:54.995564       1 main.go:250] Node multinode-853477-m03 has CIDR [10.244.3.0/24] 
	I0401 18:54:05.007359       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:54:05.007451       1 main.go:227] handling current node
	I0401 18:54:05.007475       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0401 18:54:05.007493       1 main.go:250] Node multinode-853477-m02 has CIDR [10.244.1.0/24] 
	I0401 18:54:05.007657       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0401 18:54:05.007678       1 main.go:250] Node multinode-853477-m03 has CIDR [10.244.3.0/24] 
	I0401 18:54:15.021376       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:54:15.021426       1 main.go:227] handling current node
	I0401 18:54:15.021436       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0401 18:54:15.021441       1 main.go:250] Node multinode-853477-m02 has CIDR [10.244.1.0/24] 
	I0401 18:54:15.021538       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0401 18:54:15.021543       1 main.go:250] Node multinode-853477-m03 has CIDR [10.244.3.0/24] 
	I0401 18:54:25.035075       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:54:25.035121       1 main.go:227] handling current node
	I0401 18:54:25.035135       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0401 18:54:25.035141       1 main.go:250] Node multinode-853477-m02 has CIDR [10.244.1.0/24] 
	I0401 18:54:25.035373       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0401 18:54:25.035407       1 main.go:250] Node multinode-853477-m03 has CIDR [10.244.3.0/24] 
	I0401 18:54:35.050078       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:54:35.050152       1 main.go:227] handling current node
	I0401 18:54:35.050168       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0401 18:54:35.050177       1 main.go:250] Node multinode-853477-m02 has CIDR [10.244.1.0/24] 
	I0401 18:54:35.050320       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0401 18:54:35.050373       1 main.go:250] Node multinode-853477-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [9db106b7e04aca816f3897bd76a858c1184d517e4cb4e5a76c9b39ecfe288833] <==
	I0401 18:56:16.573596       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0401 18:56:16.646086       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0401 18:56:16.646209       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0401 18:56:16.672329       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0401 18:56:16.673067       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0401 18:56:16.673664       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0401 18:56:16.673783       1 aggregator.go:165] initial CRD sync complete...
	I0401 18:56:16.673818       1 autoregister_controller.go:141] Starting autoregister controller
	I0401 18:56:16.673839       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0401 18:56:16.673862       1 cache.go:39] Caches are synced for autoregister controller
	I0401 18:56:16.674111       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0401 18:56:16.674156       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0401 18:56:16.674236       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0401 18:56:16.674315       1 shared_informer.go:318] Caches are synced for configmaps
	E0401 18:56:16.684114       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0401 18:56:16.709600       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0401 18:56:16.716968       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 18:56:17.578516       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 18:56:19.053352       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0401 18:56:19.218327       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0401 18:56:19.227327       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0401 18:56:19.292546       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 18:56:19.301521       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 18:56:29.867380       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 18:56:29.923426       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [a358f83537522b0aa3022d82ed82c21a975b8c4647196a4c3761ee917e86e184] <==
	I0401 18:50:08.482626       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0401 18:50:08.496303       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.161]
	I0401 18:50:08.497350       1 controller.go:624] quota admission added evaluator for: endpoints
	I0401 18:50:08.504340       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 18:50:08.832409       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0401 18:50:09.938154       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0401 18:50:09.955552       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0401 18:50:09.970205       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0401 18:50:22.691440       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0401 18:50:22.896428       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0401 18:54:36.050442       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0401 18:54:36.085487       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.086182       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.086263       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.086307       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.086378       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.086444       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.086517       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.086550       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.086620       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.086692       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.086806       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.086913       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.086982       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.087049       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [2d3e943c35850d286cf09b65639b8d61930d5765722ef41eea300de98f1c435b] <==
	I0401 18:56:52.720025       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="39.645586ms"
	I0401 18:56:52.720209       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="60.68µs"
	I0401 18:56:52.744975       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="24.421935ms"
	I0401 18:56:52.745191       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="96.404µs"
	I0401 18:56:56.976022       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-853477-m02\" does not exist"
	I0401 18:56:56.976975       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-g2mfr" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-g2mfr"
	I0401 18:56:56.987515       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-853477-m02" podCIDRs=["10.244.1.0/24"]
	I0401 18:56:58.865291       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="57.343µs"
	I0401 18:56:58.906899       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="53.164µs"
	I0401 18:56:58.916442       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="62.217µs"
	I0401 18:56:58.949361       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="62.005µs"
	I0401 18:56:58.955041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="45.567µs"
	I0401 18:56:58.958952       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="64.751µs"
	I0401 18:57:00.191506       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="43.305µs"
	I0401 18:57:04.600613       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853477-m02"
	I0401 18:57:04.623684       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="87.633µs"
	I0401 18:57:04.641039       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="48.078µs"
	I0401 18:57:04.907241       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-zh9pz" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-zh9pz"
	I0401 18:57:06.483447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="11.226648ms"
	I0401 18:57:06.486242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="113.865µs"
	I0401 18:57:27.275867       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853477-m02"
	I0401 18:57:28.282487       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-853477-m03\" does not exist"
	I0401 18:57:28.282552       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853477-m02"
	I0401 18:57:28.295232       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-853477-m03" podCIDRs=["10.244.2.0/24"]
	I0401 18:57:35.989998       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853477-m03"
	
	
	==> kube-controller-manager [eb60348bd91879fc1995b558036ae53948482d80d31ba95e51d89b06b08a34ef] <==
	I0401 18:51:11.431127       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="9.41657ms"
	I0401 18:51:11.431214       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="35.516µs"
	I0401 18:51:41.783692       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-853477-m03\" does not exist"
	I0401 18:51:41.784992       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853477-m02"
	I0401 18:51:41.808327       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-853477-m03" podCIDRs=["10.244.2.0/24"]
	I0401 18:51:41.822108       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tjr6s"
	I0401 18:51:41.834266       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hc9f2"
	I0401 18:51:42.113598       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-853477-m03"
	I0401 18:51:42.113849       1 event.go:376] "Event occurred" object="multinode-853477-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-853477-m03 event: Registered Node multinode-853477-m03 in Controller"
	I0401 18:51:50.254954       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853477-m03"
	I0401 18:52:20.742054       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853477-m02"
	I0401 18:52:22.036080       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853477-m02"
	I0401 18:52:22.041183       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-853477-m03\" does not exist"
	I0401 18:52:22.063869       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-853477-m03" podCIDRs=["10.244.3.0/24"]
	I0401 18:52:29.516523       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853477-m02"
	I0401 18:53:12.165801       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853477-m03"
	I0401 18:53:12.166968       1 event.go:376] "Event occurred" object="multinode-853477-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-853477-m02 status is now: NodeNotReady"
	I0401 18:53:12.182424       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-mthcv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:53:12.199013       1 event.go:376] "Event occurred" object="kube-system/kindnet-6wvv4" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:53:12.210174       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-g2mfr" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:53:12.222249       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="11.662869ms"
	I0401 18:53:12.222413       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="90.582µs"
	I0401 18:53:17.222230       1 event.go:376] "Event occurred" object="multinode-853477-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-853477-m03 status is now: NodeNotReady"
	I0401 18:53:17.237365       1 event.go:376] "Event occurred" object="kube-system/kindnet-tjr6s" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:53:17.252605       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-hc9f2" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-proxy [598be87df6b76ed97503616e0de22130de187736a15f217c9492fd3bcbe3165f] <==
	I0401 18:56:18.436447       1 server_others.go:72] "Using iptables proxy"
	I0401 18:56:18.481411       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.161"]
	I0401 18:56:18.589320       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0401 18:56:18.589454       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 18:56:18.589481       1 server_others.go:168] "Using iptables Proxier"
	I0401 18:56:18.616630       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 18:56:18.616975       1 server.go:865] "Version info" version="v1.29.3"
	I0401 18:56:18.616990       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 18:56:18.618864       1 config.go:188] "Starting service config controller"
	I0401 18:56:18.618899       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0401 18:56:18.618983       1 config.go:97] "Starting endpoint slice config controller"
	I0401 18:56:18.618990       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0401 18:56:18.619495       1 config.go:315] "Starting node config controller"
	I0401 18:56:18.619538       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0401 18:56:18.719954       1 shared_informer.go:318] Caches are synced for node config
	I0401 18:56:18.721317       1 shared_informer.go:318] Caches are synced for service config
	I0401 18:56:18.721500       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [f6ce7b69665bbc01d73b598a2d86525641c7d4fbe714ef3997a3688d286471c4] <==
	I0401 18:50:24.037579       1 server_others.go:72] "Using iptables proxy"
	I0401 18:50:24.076468       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.161"]
	I0401 18:50:24.216571       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0401 18:50:24.217203       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 18:50:24.217388       1 server_others.go:168] "Using iptables Proxier"
	I0401 18:50:24.261898       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 18:50:24.262170       1 server.go:865] "Version info" version="v1.29.3"
	I0401 18:50:24.262221       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 18:50:24.264146       1 config.go:188] "Starting service config controller"
	I0401 18:50:24.264201       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0401 18:50:24.264234       1 config.go:97] "Starting endpoint slice config controller"
	I0401 18:50:24.264251       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0401 18:50:24.265972       1 config.go:315] "Starting node config controller"
	I0401 18:50:24.266014       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0401 18:50:24.364568       1 shared_informer.go:318] Caches are synced for service config
	I0401 18:50:24.364531       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0401 18:50:24.366109       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [10cd91b3bcb1d75ffab7fd0f2f49522fbc9f9df61971c55fb1e6debfe05b20ae] <==
	I0401 18:56:14.465439       1 serving.go:380] Generated self-signed cert in-memory
	W0401 18:56:16.657185       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0401 18:56:16.657584       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W0401 18:56:16.657645       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0401 18:56:16.657672       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0401 18:56:16.679672       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0401 18:56:16.679842       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 18:56:16.684433       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0401 18:56:16.685343       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0401 18:56:16.685420       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 18:56:16.685566       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0401 18:56:16.786057       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [4004daf6fb9e1f819bba0832635a01a785b9e1cbaa7ceefb622a2956cfe7dac8] <==
	W0401 18:50:06.939056       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 18:50:06.939340       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0401 18:50:07.868555       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 18:50:07.868711       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0401 18:50:07.921818       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 18:50:07.921875       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0401 18:50:07.946287       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 18:50:07.946349       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0401 18:50:07.972653       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 18:50:07.972679       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0401 18:50:07.997032       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 18:50:07.997085       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0401 18:50:08.000808       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0401 18:50:08.000855       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0401 18:50:08.009108       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 18:50:08.009179       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0401 18:50:08.027527       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 18:50:08.027574       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0401 18:50:08.326018       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 18:50:08.326416       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0401 18:50:10.221298       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 18:54:36.049495       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0401 18:54:36.049661       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0401 18:54:36.050041       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0401 18:54:36.050248       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 01 18:56:17 multinode-853477 kubelet[3090]: I0401 18:56:17.291528    3090 topology_manager.go:215] "Topology Admit Handler" podUID="48a5c765-3e3b-408f-8e4a-c53083b879f3" podNamespace="kube-system" podName="coredns-76f75df574-lxn6t"
	Apr 01 18:56:17 multinode-853477 kubelet[3090]: I0401 18:56:17.292800    3090 topology_manager.go:215] "Topology Admit Handler" podUID="a5a8401b-2fe6-4724-99e0-63a1b6cb4367" podNamespace="kube-system" podName="storage-provisioner"
	Apr 01 18:56:17 multinode-853477 kubelet[3090]: I0401 18:56:17.292952    3090 topology_manager.go:215] "Topology Admit Handler" podUID="db1681a5-1807-454a-9b1f-90edc80f2243" podNamespace="default" podName="busybox-7fdf7869d9-pdvlk"
	Apr 01 18:56:17 multinode-853477 kubelet[3090]: I0401 18:56:17.320683    3090 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Apr 01 18:56:17 multinode-853477 kubelet[3090]: I0401 18:56:17.338965    3090 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a5a8401b-2fe6-4724-99e0-63a1b6cb4367-tmp\") pod \"storage-provisioner\" (UID: \"a5a8401b-2fe6-4724-99e0-63a1b6cb4367\") " pod="kube-system/storage-provisioner"
	Apr 01 18:56:17 multinode-853477 kubelet[3090]: I0401 18:56:17.339092    3090 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1dfd3904-101a-4734-abf3-8cb24d0a5e04-cni-cfg\") pod \"kindnet-9rlkp\" (UID: \"1dfd3904-101a-4734-abf3-8cb24d0a5e04\") " pod="kube-system/kindnet-9rlkp"
	Apr 01 18:56:17 multinode-853477 kubelet[3090]: I0401 18:56:17.339140    3090 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1dfd3904-101a-4734-abf3-8cb24d0a5e04-xtables-lock\") pod \"kindnet-9rlkp\" (UID: \"1dfd3904-101a-4734-abf3-8cb24d0a5e04\") " pod="kube-system/kindnet-9rlkp"
	Apr 01 18:56:17 multinode-853477 kubelet[3090]: I0401 18:56:17.339257    3090 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3c447a9-e35f-4cf8-95db-abfbb425cab3-lib-modules\") pod \"kube-proxy-jkvlp\" (UID: \"c3c447a9-e35f-4cf8-95db-abfbb425cab3\") " pod="kube-system/kube-proxy-jkvlp"
	Apr 01 18:56:17 multinode-853477 kubelet[3090]: I0401 18:56:17.339324    3090 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1dfd3904-101a-4734-abf3-8cb24d0a5e04-lib-modules\") pod \"kindnet-9rlkp\" (UID: \"1dfd3904-101a-4734-abf3-8cb24d0a5e04\") " pod="kube-system/kindnet-9rlkp"
	Apr 01 18:56:17 multinode-853477 kubelet[3090]: I0401 18:56:17.339416    3090 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c3c447a9-e35f-4cf8-95db-abfbb425cab3-xtables-lock\") pod \"kube-proxy-jkvlp\" (UID: \"c3c447a9-e35f-4cf8-95db-abfbb425cab3\") " pod="kube-system/kube-proxy-jkvlp"
	Apr 01 18:56:25 multinode-853477 kubelet[3090]: I0401 18:56:25.035912    3090 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 01 18:57:12 multinode-853477 kubelet[3090]: E0401 18:57:12.373080    3090 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 18:57:12 multinode-853477 kubelet[3090]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 18:57:12 multinode-853477 kubelet[3090]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 18:57:12 multinode-853477 kubelet[3090]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 18:57:12 multinode-853477 kubelet[3090]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 18:57:12 multinode-853477 kubelet[3090]: E0401 18:57:12.410133    3090 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod28a933531bbfe72a832eefa37f2dd17c/crio-559da33691144c5132081fb906e2cdeb7f31734801c64dd7ed7d29b9dd0145c5: Error finding container 559da33691144c5132081fb906e2cdeb7f31734801c64dd7ed7d29b9dd0145c5: Status 404 returned error can't find the container with id 559da33691144c5132081fb906e2cdeb7f31734801c64dd7ed7d29b9dd0145c5
	Apr 01 18:57:12 multinode-853477 kubelet[3090]: E0401 18:57:12.410621    3090 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod48a5c765-3e3b-408f-8e4a-c53083b879f3/crio-75744372fad26de7b6e1b6c839fc23b2ff308507c86fdca8567273156dbda995: Error finding container 75744372fad26de7b6e1b6c839fc23b2ff308507c86fdca8567273156dbda995: Status 404 returned error can't find the container with id 75744372fad26de7b6e1b6c839fc23b2ff308507c86fdca8567273156dbda995
	Apr 01 18:57:12 multinode-853477 kubelet[3090]: E0401 18:57:12.410981    3090 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podc3c447a9-e35f-4cf8-95db-abfbb425cab3/crio-128a8579b95a9768f81397b4d324b60498232e0267943a31e4c4c96dbefd2fd8: Error finding container 128a8579b95a9768f81397b4d324b60498232e0267943a31e4c4c96dbefd2fd8: Status 404 returned error can't find the container with id 128a8579b95a9768f81397b4d324b60498232e0267943a31e4c4c96dbefd2fd8
	Apr 01 18:57:12 multinode-853477 kubelet[3090]: E0401 18:57:12.411193    3090 manager.go:1116] Failed to create existing container: /kubepods/burstable/podecfdaf127945ce28382d8f90ac75c026/crio-bf98c87f2f39aa7c3585e2ac2500f198edf1b142a9efb4ed874f5a29bfdcf084: Error finding container bf98c87f2f39aa7c3585e2ac2500f198edf1b142a9efb4ed874f5a29bfdcf084: Status 404 returned error can't find the container with id bf98c87f2f39aa7c3585e2ac2500f198edf1b142a9efb4ed874f5a29bfdcf084
	Apr 01 18:57:12 multinode-853477 kubelet[3090]: E0401 18:57:12.411616    3090 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod2b9674072226eb1dc8bbe6dde036b55a/crio-90fafaa15e6fa316c3a0a0cee56c57f10126e6ee997df72f659139377156b3c8: Error finding container 90fafaa15e6fa316c3a0a0cee56c57f10126e6ee997df72f659139377156b3c8: Status 404 returned error can't find the container with id 90fafaa15e6fa316c3a0a0cee56c57f10126e6ee997df72f659139377156b3c8
	Apr 01 18:57:12 multinode-853477 kubelet[3090]: E0401 18:57:12.411981    3090 manager.go:1116] Failed to create existing container: /kubepods/pod1dfd3904-101a-4734-abf3-8cb24d0a5e04/crio-77c08d4d25f512e9dfdf6ef3f9d48537f063a828df94fe22c0ae7d0dddb8cae0: Error finding container 77c08d4d25f512e9dfdf6ef3f9d48537f063a828df94fe22c0ae7d0dddb8cae0: Status 404 returned error can't find the container with id 77c08d4d25f512e9dfdf6ef3f9d48537f063a828df94fe22c0ae7d0dddb8cae0
	Apr 01 18:57:12 multinode-853477 kubelet[3090]: E0401 18:57:12.412216    3090 manager.go:1116] Failed to create existing container: /kubepods/burstable/podfe554da581d909271f7689d472dd2373/crio-9c287c311be2a1629eaaa8319f309fec795e952b03175efcd304daa90298f755: Error finding container 9c287c311be2a1629eaaa8319f309fec795e952b03175efcd304daa90298f755: Status 404 returned error can't find the container with id 9c287c311be2a1629eaaa8319f309fec795e952b03175efcd304daa90298f755
	Apr 01 18:57:12 multinode-853477 kubelet[3090]: E0401 18:57:12.412576    3090 manager.go:1116] Failed to create existing container: /kubepods/besteffort/poddb1681a5-1807-454a-9b1f-90edc80f2243/crio-62f47c59bbfa39e1bc767709d6843ee547ff6fcd0bca672229b763606063b3df: Error finding container 62f47c59bbfa39e1bc767709d6843ee547ff6fcd0bca672229b763606063b3df: Status 404 returned error can't find the container with id 62f47c59bbfa39e1bc767709d6843ee547ff6fcd0bca672229b763606063b3df
	Apr 01 18:57:12 multinode-853477 kubelet[3090]: E0401 18:57:12.413108    3090 manager.go:1116] Failed to create existing container: /kubepods/besteffort/poda5a8401b-2fe6-4724-99e0-63a1b6cb4367/crio-7bee69ef0da5ad552234d2defa9c6e8528220a57a16b3ff76f744f5f7614f96a: Error finding container 7bee69ef0da5ad552234d2defa9c6e8528220a57a16b3ff76f744f5f7614f96a: Status 404 returned error can't find the container with id 7bee69ef0da5ad552234d2defa9c6e8528220a57a16b3ff76f744f5f7614f96a
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 18:57:38.699702   43983 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18233-10493/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
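The `bufio.Scanner: token too long` message in the stderr above is the stock error Go's `bufio.Scanner` returns when a single line exceeds its token limit (64 KiB by default), which is plausible here given the very long single-line cluster-config entries that appear later in this log. A minimal, self-contained sketch of the behaviour and the usual remedy (generic Go stdlib usage, not minikube's actual `logs.go`):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// A single "line" longer than bufio.MaxScanTokenSize (64 KiB) makes
	// Scanner.Scan() stop and Err() report "bufio.Scanner: token too long".
	long := strings.Repeat("x", bufio.MaxScanTokenSize+1)

	sc := bufio.NewScanner(strings.NewReader(long))
	for sc.Scan() {
	}
	fmt.Println("default buffer:", sc.Err()) // bufio.Scanner: token too long

	// Giving the scanner a larger buffer lets the same input through.
	sc = bufio.NewScanner(strings.NewReader(long))
	sc.Buffer(make([]byte, 0, 1024*1024), 2*1024*1024)
	for sc.Scan() {
	}
	fmt.Println("enlarged buffer:", sc.Err()) // <nil>
}
```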
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-853477 -n multinode-853477
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-853477 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (308.43s)
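The post-mortem helpers above select single fields from `minikube status` with Go template expressions such as `--format={{.APIServer}}` and `--format={{.Host}}`. A minimal sketch of how such a selector renders against a status value (generic `text/template` usage; the `Status` struct below is an assumption for illustration, not minikube's actual status type):

```go
package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for the kind of value a --format template is applied to.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running"}

	// Equivalent in spirit to `minikube status --format={{.Host}}`.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	_ = tmpl.Execute(os.Stdout, st) // prints: Running
}
```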

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 stop
E0401 18:58:52.856389   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 18:59:16.857035   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-853477 stop: exit status 82 (2m0.468270278s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-853477-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-853477 stop": exit status 82
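Exit status 82 above corresponds to minikube's GUEST_STOP_TIMEOUT: the stop command gave up after roughly two minutes while the m02 VM still reported state "Running". As a rough illustration of how a stop-with-deadline loop surfaces that kind of error (a sketch only; the helper and state names here are assumptions, not minikube's driver API):

```go
package main

import (
	"fmt"
	"time"
)

// stopWithDeadline asks a driver to stop a VM, then polls its state until it
// reports "Stopped" or the deadline passes. Hypothetical helper for
// illustration; not minikube's implementation.
func stopWithDeadline(stop func() error, state func() (string, error), deadline time.Duration) error {
	if err := stop(); err != nil {
		return err
	}
	timeout := time.After(deadline)
	tick := time.NewTicker(2 * time.Second)
	defer tick.Stop()
	for {
		select {
		case <-timeout:
			s, _ := state()
			// The shape of the failure in the log: the VM never left
			// the "Running" state before the deadline.
			return fmt.Errorf("unable to stop vm, current state %q", s)
		case <-tick.C:
			if s, err := state(); err == nil && s == "Stopped" {
				return nil
			}
		}
	}
}

func main() {
	// Simulated driver that accepts the stop request but never actually stops.
	err := stopWithDeadline(
		func() error { return nil },
		func() (string, error) { return "Running", nil },
		5*time.Second, // short deadline for the demo; the real test waited ~2m
	)
	fmt.Println(err)
}
```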
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-853477 status: exit status 3 (18.657852235s)

                                                
                                                
-- stdout --
	multinode-853477
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-853477-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 19:00:02.237941   44520 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host
	E0401 19:00:02.237974   44520 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.239:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-853477 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-853477 -n multinode-853477
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-853477 logs -n 25: (1.644271762s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| ssh     | multinode-853477 ssh -n                                                                 | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | multinode-853477-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-853477 cp multinode-853477-m02:/home/docker/cp-test.txt                       | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | multinode-853477:/home/docker/cp-test_multinode-853477-m02_multinode-853477.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-853477 ssh -n                                                                 | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | multinode-853477-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-853477 ssh -n multinode-853477 sudo cat                                       | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | /home/docker/cp-test_multinode-853477-m02_multinode-853477.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-853477 cp multinode-853477-m02:/home/docker/cp-test.txt                       | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | multinode-853477-m03:/home/docker/cp-test_multinode-853477-m02_multinode-853477-m03.txt |                  |         |                |                     |                     |
	| ssh     | multinode-853477 ssh -n                                                                 | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | multinode-853477-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-853477 ssh -n multinode-853477-m03 sudo cat                                   | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | /home/docker/cp-test_multinode-853477-m02_multinode-853477-m03.txt                      |                  |         |                |                     |                     |
	| cp      | multinode-853477 cp testdata/cp-test.txt                                                | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | multinode-853477-m03:/home/docker/cp-test.txt                                           |                  |         |                |                     |                     |
	| ssh     | multinode-853477 ssh -n                                                                 | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | multinode-853477-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-853477 cp multinode-853477-m03:/home/docker/cp-test.txt                       | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3069131938/001/cp-test_multinode-853477-m03.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-853477 ssh -n                                                                 | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | multinode-853477-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-853477 cp multinode-853477-m03:/home/docker/cp-test.txt                       | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | multinode-853477:/home/docker/cp-test_multinode-853477-m03_multinode-853477.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-853477 ssh -n                                                                 | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | multinode-853477-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-853477 ssh -n multinode-853477 sudo cat                                       | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:51 UTC | 01 Apr 24 18:51 UTC |
	|         | /home/docker/cp-test_multinode-853477-m03_multinode-853477.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-853477 cp multinode-853477-m03:/home/docker/cp-test.txt                       | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:52 UTC | 01 Apr 24 18:52 UTC |
	|         | multinode-853477-m02:/home/docker/cp-test_multinode-853477-m03_multinode-853477-m02.txt |                  |         |                |                     |                     |
	| ssh     | multinode-853477 ssh -n                                                                 | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:52 UTC | 01 Apr 24 18:52 UTC |
	|         | multinode-853477-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-853477 ssh -n multinode-853477-m02 sudo cat                                   | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:52 UTC | 01 Apr 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-853477-m03_multinode-853477-m02.txt                      |                  |         |                |                     |                     |
	| node    | multinode-853477 node stop m03                                                          | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:52 UTC | 01 Apr 24 18:52 UTC |
	| node    | multinode-853477 node start                                                             | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:52 UTC | 01 Apr 24 18:52 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |                |                     |                     |
	| node    | list -p multinode-853477                                                                | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:52 UTC |                     |
	| stop    | -p multinode-853477                                                                     | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:52 UTC |                     |
	| start   | -p multinode-853477                                                                     | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:54 UTC | 01 Apr 24 18:57 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |                |                     |                     |
	| node    | list -p multinode-853477                                                                | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:57 UTC |                     |
	| node    | multinode-853477 node delete                                                            | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:57 UTC | 01 Apr 24 18:57 UTC |
	|         | m03                                                                                     |                  |         |                |                     |                     |
	| stop    | multinode-853477 stop                                                                   | multinode-853477 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:57 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 18:54:35
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 18:54:35.112290   43137 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:54:35.112408   43137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:54:35.112416   43137 out.go:304] Setting ErrFile to fd 2...
	I0401 18:54:35.112420   43137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:54:35.112584   43137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:54:35.113115   43137 out.go:298] Setting JSON to false
	I0401 18:54:35.114013   43137 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5827,"bootTime":1711991848,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 18:54:35.114074   43137 start.go:139] virtualization: kvm guest
	I0401 18:54:35.116421   43137 out.go:177] * [multinode-853477] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 18:54:35.117859   43137 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 18:54:35.117865   43137 notify.go:220] Checking for updates...
	I0401 18:54:35.119437   43137 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 18:54:35.120870   43137 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 18:54:35.122667   43137 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:54:35.124294   43137 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 18:54:35.125553   43137 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 18:54:35.127011   43137 config.go:182] Loaded profile config "multinode-853477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:54:35.127092   43137 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 18:54:35.127510   43137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:54:35.127572   43137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:54:35.142256   43137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33125
	I0401 18:54:35.142690   43137 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:54:35.143168   43137 main.go:141] libmachine: Using API Version  1
	I0401 18:54:35.143190   43137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:54:35.143510   43137 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:54:35.143663   43137 main.go:141] libmachine: (multinode-853477) Calling .DriverName
	I0401 18:54:35.177565   43137 out.go:177] * Using the kvm2 driver based on existing profile
	I0401 18:54:35.178699   43137 start.go:297] selected driver: kvm2
	I0401 18:54:35.178713   43137 start.go:901] validating driver "kvm2" against &{Name:multinode-853477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-853477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.115 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:54:35.178875   43137 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 18:54:35.179313   43137 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 18:54:35.179401   43137 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 18:54:35.193694   43137 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 18:54:35.194371   43137 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 18:54:35.194433   43137 cni.go:84] Creating CNI manager for ""
	I0401 18:54:35.194445   43137 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0401 18:54:35.194501   43137 start.go:340] cluster config:
	{Name:multinode-853477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-853477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.115 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:54:35.194628   43137 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 18:54:35.196224   43137 out.go:177] * Starting "multinode-853477" primary control-plane node in "multinode-853477" cluster
	I0401 18:54:35.197364   43137 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 18:54:35.197386   43137 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0401 18:54:35.197392   43137 cache.go:56] Caching tarball of preloaded images
	I0401 18:54:35.197453   43137 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 18:54:35.197465   43137 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0401 18:54:35.197578   43137 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477/config.json ...
	I0401 18:54:35.197795   43137 start.go:360] acquireMachinesLock for multinode-853477: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 18:54:35.197834   43137 start.go:364] duration metric: took 22.189µs to acquireMachinesLock for "multinode-853477"
	I0401 18:54:35.197848   43137 start.go:96] Skipping create...Using existing machine configuration
	I0401 18:54:35.197860   43137 fix.go:54] fixHost starting: 
	I0401 18:54:35.198098   43137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:54:35.198127   43137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:54:35.211547   43137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36887
	I0401 18:54:35.211883   43137 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:54:35.212336   43137 main.go:141] libmachine: Using API Version  1
	I0401 18:54:35.212356   43137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:54:35.212683   43137 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:54:35.212879   43137 main.go:141] libmachine: (multinode-853477) Calling .DriverName
	I0401 18:54:35.213030   43137 main.go:141] libmachine: (multinode-853477) Calling .GetState
	I0401 18:54:35.214546   43137 fix.go:112] recreateIfNeeded on multinode-853477: state=Running err=<nil>
	W0401 18:54:35.214572   43137 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 18:54:35.216486   43137 out.go:177] * Updating the running kvm2 "multinode-853477" VM ...
	I0401 18:54:35.217625   43137 machine.go:94] provisionDockerMachine start ...
	I0401 18:54:35.217661   43137 main.go:141] libmachine: (multinode-853477) Calling .DriverName
	I0401 18:54:35.217863   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHHostname
	I0401 18:54:35.220207   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.220646   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:54:35.220671   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.220797   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHPort
	I0401 18:54:35.220942   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:54:35.221055   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:54:35.221177   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHUsername
	I0401 18:54:35.221318   43137 main.go:141] libmachine: Using SSH client type: native
	I0401 18:54:35.221524   43137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0401 18:54:35.221536   43137 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 18:54:35.339251   43137 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-853477
	
	I0401 18:54:35.339273   43137 main.go:141] libmachine: (multinode-853477) Calling .GetMachineName
	I0401 18:54:35.339481   43137 buildroot.go:166] provisioning hostname "multinode-853477"
	I0401 18:54:35.339501   43137 main.go:141] libmachine: (multinode-853477) Calling .GetMachineName
	I0401 18:54:35.339675   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHHostname
	I0401 18:54:35.342322   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.342755   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:54:35.342782   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.342983   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHPort
	I0401 18:54:35.343150   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:54:35.343292   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:54:35.343422   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHUsername
	I0401 18:54:35.343561   43137 main.go:141] libmachine: Using SSH client type: native
	I0401 18:54:35.343709   43137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0401 18:54:35.343721   43137 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-853477 && echo "multinode-853477" | sudo tee /etc/hostname
	I0401 18:54:35.471352   43137 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-853477
	
	I0401 18:54:35.471393   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHHostname
	I0401 18:54:35.473941   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.474300   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:54:35.474341   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.474482   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHPort
	I0401 18:54:35.474644   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:54:35.474786   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:54:35.474899   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHUsername
	I0401 18:54:35.475037   43137 main.go:141] libmachine: Using SSH client type: native
	I0401 18:54:35.475233   43137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0401 18:54:35.475257   43137 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-853477' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-853477/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-853477' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 18:54:35.591329   43137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 18:54:35.591363   43137 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 18:54:35.591389   43137 buildroot.go:174] setting up certificates
	I0401 18:54:35.591398   43137 provision.go:84] configureAuth start
	I0401 18:54:35.591407   43137 main.go:141] libmachine: (multinode-853477) Calling .GetMachineName
	I0401 18:54:35.591660   43137 main.go:141] libmachine: (multinode-853477) Calling .GetIP
	I0401 18:54:35.594092   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.594446   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:54:35.594481   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.594548   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHHostname
	I0401 18:54:35.596526   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.596810   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:54:35.596840   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.596979   43137 provision.go:143] copyHostCerts
	I0401 18:54:35.597010   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 18:54:35.597043   43137 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 18:54:35.597051   43137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 18:54:35.597129   43137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 18:54:35.597215   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 18:54:35.597233   43137 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 18:54:35.597238   43137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 18:54:35.597261   43137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 18:54:35.597315   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 18:54:35.597337   43137 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 18:54:35.597344   43137 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 18:54:35.597364   43137 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 18:54:35.597419   43137 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.multinode-853477 san=[127.0.0.1 192.168.39.161 localhost minikube multinode-853477]
	I0401 18:54:35.731835   43137 provision.go:177] copyRemoteCerts
	I0401 18:54:35.731901   43137 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 18:54:35.731924   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHHostname
	I0401 18:54:35.735061   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.735406   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:54:35.735447   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.735622   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHPort
	I0401 18:54:35.735825   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:54:35.735998   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHUsername
	I0401 18:54:35.736140   43137 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/multinode-853477/id_rsa Username:docker}
	I0401 18:54:35.825857   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0401 18:54:35.825958   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 18:54:35.855911   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0401 18:54:35.855959   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0401 18:54:35.882303   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0401 18:54:35.882359   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 18:54:35.911941   43137 provision.go:87] duration metric: took 320.533658ms to configureAuth
	I0401 18:54:35.911961   43137 buildroot.go:189] setting minikube options for container-runtime
	I0401 18:54:35.912163   43137 config.go:182] Loaded profile config "multinode-853477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:54:35.912236   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHHostname
	I0401 18:54:35.914674   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.914987   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:54:35.915019   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:54:35.915166   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHPort
	I0401 18:54:35.915390   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:54:35.915547   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:54:35.915682   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHUsername
	I0401 18:54:35.915835   43137 main.go:141] libmachine: Using SSH client type: native
	I0401 18:54:35.915988   43137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0401 18:54:35.916004   43137 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 18:56:06.859140   43137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 18:56:06.859174   43137 machine.go:97] duration metric: took 1m31.641534818s to provisionDockerMachine
	I0401 18:56:06.859190   43137 start.go:293] postStartSetup for "multinode-853477" (driver="kvm2")
	I0401 18:56:06.859206   43137 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 18:56:06.859225   43137 main.go:141] libmachine: (multinode-853477) Calling .DriverName
	I0401 18:56:06.859572   43137 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 18:56:06.859598   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHHostname
	I0401 18:56:06.862599   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:56:06.863063   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:56:06.863088   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:56:06.863205   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHPort
	I0401 18:56:06.863372   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:56:06.863523   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHUsername
	I0401 18:56:06.863690   43137 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/multinode-853477/id_rsa Username:docker}
	I0401 18:56:06.949537   43137 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 18:56:06.954249   43137 command_runner.go:130] > NAME=Buildroot
	I0401 18:56:06.954267   43137 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0401 18:56:06.954271   43137 command_runner.go:130] > ID=buildroot
	I0401 18:56:06.954276   43137 command_runner.go:130] > VERSION_ID=2023.02.9
	I0401 18:56:06.954290   43137 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0401 18:56:06.954326   43137 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 18:56:06.954352   43137 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 18:56:06.954412   43137 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 18:56:06.954508   43137 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 18:56:06.954518   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> /etc/ssl/certs/177512.pem
	I0401 18:56:06.954616   43137 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 18:56:06.964741   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 18:56:06.991158   43137 start.go:296] duration metric: took 131.953919ms for postStartSetup
	I0401 18:56:06.991192   43137 fix.go:56] duration metric: took 1m31.793331155s for fixHost
	I0401 18:56:06.991215   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHHostname
	I0401 18:56:06.993807   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:56:06.994093   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:56:06.994113   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:56:06.994263   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHPort
	I0401 18:56:06.994444   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:56:06.994604   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:56:06.994725   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHUsername
	I0401 18:56:06.994854   43137 main.go:141] libmachine: Using SSH client type: native
	I0401 18:56:06.995002   43137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.161 22 <nil> <nil>}
	I0401 18:56:06.995012   43137 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 18:56:07.106523   43137 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711997767.083421289
	
	I0401 18:56:07.106540   43137 fix.go:216] guest clock: 1711997767.083421289
	I0401 18:56:07.106546   43137 fix.go:229] Guest: 2024-04-01 18:56:07.083421289 +0000 UTC Remote: 2024-04-01 18:56:06.991196377 +0000 UTC m=+91.927467503 (delta=92.224912ms)
	I0401 18:56:07.106563   43137 fix.go:200] guest clock delta is within tolerance: 92.224912ms
	I0401 18:56:07.106569   43137 start.go:83] releasing machines lock for "multinode-853477", held for 1m31.908725389s
	I0401 18:56:07.106585   43137 main.go:141] libmachine: (multinode-853477) Calling .DriverName
	I0401 18:56:07.106802   43137 main.go:141] libmachine: (multinode-853477) Calling .GetIP
	I0401 18:56:07.109429   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:56:07.109879   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:56:07.109919   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:56:07.110034   43137 main.go:141] libmachine: (multinode-853477) Calling .DriverName
	I0401 18:56:07.110525   43137 main.go:141] libmachine: (multinode-853477) Calling .DriverName
	I0401 18:56:07.110699   43137 main.go:141] libmachine: (multinode-853477) Calling .DriverName
	I0401 18:56:07.110814   43137 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 18:56:07.110853   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHHostname
	I0401 18:56:07.110893   43137 ssh_runner.go:195] Run: cat /version.json
	I0401 18:56:07.110915   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHHostname
	I0401 18:56:07.113510   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:56:07.113701   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:56:07.113967   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:56:07.114009   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:56:07.114095   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHPort
	I0401 18:56:07.114236   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:56:07.114253   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:56:07.114269   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:56:07.114388   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHUsername
	I0401 18:56:07.114437   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHPort
	I0401 18:56:07.114533   43137 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/multinode-853477/id_rsa Username:docker}
	I0401 18:56:07.114615   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:56:07.114730   43137 main.go:141] libmachine: (multinode-853477) Calling .GetSSHUsername
	I0401 18:56:07.114885   43137 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/multinode-853477/id_rsa Username:docker}
	I0401 18:56:07.194236   43137 command_runner.go:130] > {"iso_version": "v1.33.0-1711559712-18485", "kicbase_version": "v0.0.43-beta.0", "minikube_version": "v1.33.0-beta.0", "commit": "db97f5257476488cfa11a4cd2d95d2aa6fbd9d33"}
	I0401 18:56:07.194517   43137 ssh_runner.go:195] Run: systemctl --version
	I0401 18:56:07.221043   43137 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0401 18:56:07.221103   43137 command_runner.go:130] > systemd 252 (252)
	I0401 18:56:07.221127   43137 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0401 18:56:07.221175   43137 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 18:56:07.381795   43137 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 18:56:07.390145   43137 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0401 18:56:07.390582   43137 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 18:56:07.390644   43137 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 18:56:07.400788   43137 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 18:56:07.400807   43137 start.go:494] detecting cgroup driver to use...
	I0401 18:56:07.400868   43137 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 18:56:07.418360   43137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 18:56:07.433118   43137 docker.go:217] disabling cri-docker service (if available) ...
	I0401 18:56:07.433176   43137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 18:56:07.447362   43137 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 18:56:07.461611   43137 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 18:56:07.615784   43137 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 18:56:07.779232   43137 docker.go:233] disabling docker service ...
	I0401 18:56:07.779307   43137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 18:56:07.800556   43137 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 18:56:07.816488   43137 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 18:56:07.973654   43137 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 18:56:08.130631   43137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 18:56:08.146670   43137 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 18:56:08.166362   43137 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0401 18:56:08.166647   43137 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 18:56:08.166697   43137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:56:08.179499   43137 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 18:56:08.179562   43137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:56:08.191264   43137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:56:08.203054   43137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:56:08.215458   43137 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 18:56:08.227590   43137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:56:08.240432   43137 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 18:56:08.252349   43137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
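	[Editor's note] The sequence of sed commands above rewrites the CRI-O drop-in config to use the registry.k8s.io/pause:3.9 pause image, the cgroupfs cgroup manager, a "pod" conmon cgroup, and the unprivileged-port sysctl. As a rough illustration (assuming CRI-O's standard TOML layout with [crio.image] and [crio.runtime] sections; this sketch is not captured from the test VM, and the actual /etc/crio/crio.conf.d/02-crio.conf shipped in the minikube ISO may contain additional keys), the resulting drop-in would look approximately like:
	# illustrative sketch only -- not output from this run
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]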
	I0401 18:56:08.264258   43137 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 18:56:08.274767   43137 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0401 18:56:08.274957   43137 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 18:56:08.285638   43137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:56:08.430186   43137 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 18:56:09.520561   43137 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.090344963s)
	I0401 18:56:09.520588   43137 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 18:56:09.520637   43137 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 18:56:09.526097   43137 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0401 18:56:09.526112   43137 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0401 18:56:09.526119   43137 command_runner.go:130] > Device: 0,22	Inode: 1339        Links: 1
	I0401 18:56:09.526136   43137 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0401 18:56:09.526142   43137 command_runner.go:130] > Access: 2024-04-01 18:56:09.447321892 +0000
	I0401 18:56:09.526156   43137 command_runner.go:130] > Modify: 2024-04-01 18:56:09.386320402 +0000
	I0401 18:56:09.526171   43137 command_runner.go:130] > Change: 2024-04-01 18:56:09.386320402 +0000
	I0401 18:56:09.526180   43137 command_runner.go:130] >  Birth: -
	I0401 18:56:09.526262   43137 start.go:562] Will wait 60s for crictl version
	I0401 18:56:09.526316   43137 ssh_runner.go:195] Run: which crictl
	I0401 18:56:09.530812   43137 command_runner.go:130] > /usr/bin/crictl
	I0401 18:56:09.530860   43137 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 18:56:09.570886   43137 command_runner.go:130] > Version:  0.1.0
	I0401 18:56:09.570939   43137 command_runner.go:130] > RuntimeName:  cri-o
	I0401 18:56:09.570970   43137 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0401 18:56:09.570982   43137 command_runner.go:130] > RuntimeApiVersion:  v1
	I0401 18:56:09.572364   43137 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 18:56:09.572439   43137 ssh_runner.go:195] Run: crio --version
	I0401 18:56:09.604038   43137 command_runner.go:130] > crio version 1.29.1
	I0401 18:56:09.604056   43137 command_runner.go:130] > Version:        1.29.1
	I0401 18:56:09.604061   43137 command_runner.go:130] > GitCommit:      unknown
	I0401 18:56:09.604067   43137 command_runner.go:130] > GitCommitDate:  unknown
	I0401 18:56:09.604073   43137 command_runner.go:130] > GitTreeState:   clean
	I0401 18:56:09.604086   43137 command_runner.go:130] > BuildDate:      2024-03-27T22:46:22Z
	I0401 18:56:09.604100   43137 command_runner.go:130] > GoVersion:      go1.21.6
	I0401 18:56:09.604106   43137 command_runner.go:130] > Compiler:       gc
	I0401 18:56:09.604112   43137 command_runner.go:130] > Platform:       linux/amd64
	I0401 18:56:09.604120   43137 command_runner.go:130] > Linkmode:       dynamic
	I0401 18:56:09.604127   43137 command_runner.go:130] > BuildTags:      
	I0401 18:56:09.604134   43137 command_runner.go:130] >   containers_image_ostree_stub
	I0401 18:56:09.604141   43137 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0401 18:56:09.604147   43137 command_runner.go:130] >   btrfs_noversion
	I0401 18:56:09.604154   43137 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0401 18:56:09.604161   43137 command_runner.go:130] >   libdm_no_deferred_remove
	I0401 18:56:09.604167   43137 command_runner.go:130] >   seccomp
	I0401 18:56:09.604174   43137 command_runner.go:130] > LDFlags:          unknown
	I0401 18:56:09.604180   43137 command_runner.go:130] > SeccompEnabled:   true
	I0401 18:56:09.604191   43137 command_runner.go:130] > AppArmorEnabled:  false
	I0401 18:56:09.604264   43137 ssh_runner.go:195] Run: crio --version
	I0401 18:56:09.637119   43137 command_runner.go:130] > crio version 1.29.1
	I0401 18:56:09.637149   43137 command_runner.go:130] > Version:        1.29.1
	I0401 18:56:09.637158   43137 command_runner.go:130] > GitCommit:      unknown
	I0401 18:56:09.637166   43137 command_runner.go:130] > GitCommitDate:  unknown
	I0401 18:56:09.637173   43137 command_runner.go:130] > GitTreeState:   clean
	I0401 18:56:09.637183   43137 command_runner.go:130] > BuildDate:      2024-03-27T22:46:22Z
	I0401 18:56:09.637191   43137 command_runner.go:130] > GoVersion:      go1.21.6
	I0401 18:56:09.637199   43137 command_runner.go:130] > Compiler:       gc
	I0401 18:56:09.637205   43137 command_runner.go:130] > Platform:       linux/amd64
	I0401 18:56:09.637211   43137 command_runner.go:130] > Linkmode:       dynamic
	I0401 18:56:09.637218   43137 command_runner.go:130] > BuildTags:      
	I0401 18:56:09.637223   43137 command_runner.go:130] >   containers_image_ostree_stub
	I0401 18:56:09.637227   43137 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0401 18:56:09.637232   43137 command_runner.go:130] >   btrfs_noversion
	I0401 18:56:09.637237   43137 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0401 18:56:09.637242   43137 command_runner.go:130] >   libdm_no_deferred_remove
	I0401 18:56:09.637246   43137 command_runner.go:130] >   seccomp
	I0401 18:56:09.637250   43137 command_runner.go:130] > LDFlags:          unknown
	I0401 18:56:09.637258   43137 command_runner.go:130] > SeccompEnabled:   true
	I0401 18:56:09.637266   43137 command_runner.go:130] > AppArmorEnabled:  false
	I0401 18:56:09.640198   43137 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 18:56:09.641861   43137 main.go:141] libmachine: (multinode-853477) Calling .GetIP
	I0401 18:56:09.644340   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:56:09.644692   43137 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:56:09.644721   43137 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:56:09.644869   43137 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 18:56:09.650038   43137 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0401 18:56:09.650310   43137 kubeadm.go:877] updating cluster {Name:multinode-853477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-853477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.115 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 18:56:09.650434   43137 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 18:56:09.650474   43137 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 18:56:09.699280   43137 command_runner.go:130] > {
	I0401 18:56:09.699305   43137 command_runner.go:130] >   "images": [
	I0401 18:56:09.699309   43137 command_runner.go:130] >     {
	I0401 18:56:09.699317   43137 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0401 18:56:09.699321   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.699327   43137 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0401 18:56:09.699330   43137 command_runner.go:130] >       ],
	I0401 18:56:09.699334   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.699341   43137 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0401 18:56:09.699349   43137 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0401 18:56:09.699359   43137 command_runner.go:130] >       ],
	I0401 18:56:09.699372   43137 command_runner.go:130] >       "size": "65291810",
	I0401 18:56:09.699376   43137 command_runner.go:130] >       "uid": null,
	I0401 18:56:09.699380   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.699387   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.699391   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.699395   43137 command_runner.go:130] >     },
	I0401 18:56:09.699398   43137 command_runner.go:130] >     {
	I0401 18:56:09.699404   43137 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0401 18:56:09.699408   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.699413   43137 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0401 18:56:09.699417   43137 command_runner.go:130] >       ],
	I0401 18:56:09.699421   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.699428   43137 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0401 18:56:09.699441   43137 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0401 18:56:09.699445   43137 command_runner.go:130] >       ],
	I0401 18:56:09.699485   43137 command_runner.go:130] >       "size": "1363676",
	I0401 18:56:09.699524   43137 command_runner.go:130] >       "uid": null,
	I0401 18:56:09.699546   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.699557   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.699564   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.699574   43137 command_runner.go:130] >     },
	I0401 18:56:09.699580   43137 command_runner.go:130] >     {
	I0401 18:56:09.699589   43137 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0401 18:56:09.699597   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.699602   43137 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0401 18:56:09.699608   43137 command_runner.go:130] >       ],
	I0401 18:56:09.699612   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.699619   43137 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0401 18:56:09.699627   43137 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0401 18:56:09.699632   43137 command_runner.go:130] >       ],
	I0401 18:56:09.699640   43137 command_runner.go:130] >       "size": "31470524",
	I0401 18:56:09.699646   43137 command_runner.go:130] >       "uid": null,
	I0401 18:56:09.699654   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.699660   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.699667   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.699683   43137 command_runner.go:130] >     },
	I0401 18:56:09.699690   43137 command_runner.go:130] >     {
	I0401 18:56:09.699698   43137 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0401 18:56:09.699705   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.699716   43137 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0401 18:56:09.699725   43137 command_runner.go:130] >       ],
	I0401 18:56:09.699732   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.699748   43137 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0401 18:56:09.699769   43137 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0401 18:56:09.699781   43137 command_runner.go:130] >       ],
	I0401 18:56:09.699787   43137 command_runner.go:130] >       "size": "61245718",
	I0401 18:56:09.699794   43137 command_runner.go:130] >       "uid": null,
	I0401 18:56:09.699801   43137 command_runner.go:130] >       "username": "nonroot",
	I0401 18:56:09.699811   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.699817   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.699822   43137 command_runner.go:130] >     },
	I0401 18:56:09.699825   43137 command_runner.go:130] >     {
	I0401 18:56:09.699835   43137 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0401 18:56:09.699846   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.699854   43137 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0401 18:56:09.699863   43137 command_runner.go:130] >       ],
	I0401 18:56:09.699870   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.699884   43137 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0401 18:56:09.699898   43137 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0401 18:56:09.699907   43137 command_runner.go:130] >       ],
	I0401 18:56:09.699914   43137 command_runner.go:130] >       "size": "150779692",
	I0401 18:56:09.699922   43137 command_runner.go:130] >       "uid": {
	I0401 18:56:09.699926   43137 command_runner.go:130] >         "value": "0"
	I0401 18:56:09.699931   43137 command_runner.go:130] >       },
	I0401 18:56:09.699938   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.699945   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.699953   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.699961   43137 command_runner.go:130] >     },
	I0401 18:56:09.699966   43137 command_runner.go:130] >     {
	I0401 18:56:09.699978   43137 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0401 18:56:09.699985   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.700000   43137 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0401 18:56:09.700009   43137 command_runner.go:130] >       ],
	I0401 18:56:09.700015   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.700025   43137 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0401 18:56:09.700040   43137 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0401 18:56:09.700047   43137 command_runner.go:130] >       ],
	I0401 18:56:09.700054   43137 command_runner.go:130] >       "size": "128508878",
	I0401 18:56:09.700063   43137 command_runner.go:130] >       "uid": {
	I0401 18:56:09.700070   43137 command_runner.go:130] >         "value": "0"
	I0401 18:56:09.700078   43137 command_runner.go:130] >       },
	I0401 18:56:09.700084   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.700094   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.700101   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.700107   43137 command_runner.go:130] >     },
	I0401 18:56:09.700111   43137 command_runner.go:130] >     {
	I0401 18:56:09.700120   43137 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0401 18:56:09.700130   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.700140   43137 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0401 18:56:09.700147   43137 command_runner.go:130] >       ],
	I0401 18:56:09.700154   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.700170   43137 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0401 18:56:09.700181   43137 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0401 18:56:09.700190   43137 command_runner.go:130] >       ],
	I0401 18:56:09.700197   43137 command_runner.go:130] >       "size": "123142962",
	I0401 18:56:09.700204   43137 command_runner.go:130] >       "uid": {
	I0401 18:56:09.700208   43137 command_runner.go:130] >         "value": "0"
	I0401 18:56:09.700213   43137 command_runner.go:130] >       },
	I0401 18:56:09.700223   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.700229   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.700239   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.700244   43137 command_runner.go:130] >     },
	I0401 18:56:09.700253   43137 command_runner.go:130] >     {
	I0401 18:56:09.700263   43137 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0401 18:56:09.700272   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.700323   43137 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0401 18:56:09.700396   43137 command_runner.go:130] >       ],
	I0401 18:56:09.700442   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.700477   43137 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0401 18:56:09.700493   43137 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0401 18:56:09.700500   43137 command_runner.go:130] >       ],
	I0401 18:56:09.700510   43137 command_runner.go:130] >       "size": "83634073",
	I0401 18:56:09.700515   43137 command_runner.go:130] >       "uid": null,
	I0401 18:56:09.700520   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.700524   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.700527   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.700531   43137 command_runner.go:130] >     },
	I0401 18:56:09.700534   43137 command_runner.go:130] >     {
	I0401 18:56:09.700539   43137 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0401 18:56:09.700546   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.700554   43137 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0401 18:56:09.700557   43137 command_runner.go:130] >       ],
	I0401 18:56:09.700561   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.700569   43137 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0401 18:56:09.700577   43137 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0401 18:56:09.700582   43137 command_runner.go:130] >       ],
	I0401 18:56:09.700588   43137 command_runner.go:130] >       "size": "60724018",
	I0401 18:56:09.700595   43137 command_runner.go:130] >       "uid": {
	I0401 18:56:09.700601   43137 command_runner.go:130] >         "value": "0"
	I0401 18:56:09.700604   43137 command_runner.go:130] >       },
	I0401 18:56:09.700608   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.700611   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.700614   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.700618   43137 command_runner.go:130] >     },
	I0401 18:56:09.700621   43137 command_runner.go:130] >     {
	I0401 18:56:09.700626   43137 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0401 18:56:09.700629   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.700634   43137 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0401 18:56:09.700637   43137 command_runner.go:130] >       ],
	I0401 18:56:09.700641   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.700647   43137 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0401 18:56:09.700655   43137 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0401 18:56:09.700662   43137 command_runner.go:130] >       ],
	I0401 18:56:09.700675   43137 command_runner.go:130] >       "size": "750414",
	I0401 18:56:09.700686   43137 command_runner.go:130] >       "uid": {
	I0401 18:56:09.700691   43137 command_runner.go:130] >         "value": "65535"
	I0401 18:56:09.700703   43137 command_runner.go:130] >       },
	I0401 18:56:09.700714   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.700729   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.700736   43137 command_runner.go:130] >       "pinned": true
	I0401 18:56:09.700740   43137 command_runner.go:130] >     }
	I0401 18:56:09.700743   43137 command_runner.go:130] >   ]
	I0401 18:56:09.700746   43137 command_runner.go:130] > }
	I0401 18:56:09.701609   43137 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 18:56:09.701623   43137 crio.go:433] Images already preloaded, skipping extraction
	I0401 18:56:09.701679   43137 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 18:56:09.739358   43137 command_runner.go:130] > {
	I0401 18:56:09.739386   43137 command_runner.go:130] >   "images": [
	I0401 18:56:09.739390   43137 command_runner.go:130] >     {
	I0401 18:56:09.739398   43137 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0401 18:56:09.739402   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.739408   43137 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0401 18:56:09.739412   43137 command_runner.go:130] >       ],
	I0401 18:56:09.739415   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.739429   43137 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0401 18:56:09.739436   43137 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0401 18:56:09.739440   43137 command_runner.go:130] >       ],
	I0401 18:56:09.739444   43137 command_runner.go:130] >       "size": "65291810",
	I0401 18:56:09.739448   43137 command_runner.go:130] >       "uid": null,
	I0401 18:56:09.739451   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.739458   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.739464   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.739468   43137 command_runner.go:130] >     },
	I0401 18:56:09.739472   43137 command_runner.go:130] >     {
	I0401 18:56:09.739478   43137 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0401 18:56:09.739484   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.739490   43137 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0401 18:56:09.739493   43137 command_runner.go:130] >       ],
	I0401 18:56:09.739498   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.739505   43137 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0401 18:56:09.739515   43137 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0401 18:56:09.739523   43137 command_runner.go:130] >       ],
	I0401 18:56:09.739539   43137 command_runner.go:130] >       "size": "1363676",
	I0401 18:56:09.739545   43137 command_runner.go:130] >       "uid": null,
	I0401 18:56:09.739556   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.739564   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.739570   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.739579   43137 command_runner.go:130] >     },
	I0401 18:56:09.739584   43137 command_runner.go:130] >     {
	I0401 18:56:09.739591   43137 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0401 18:56:09.739595   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.739600   43137 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0401 18:56:09.739603   43137 command_runner.go:130] >       ],
	I0401 18:56:09.739607   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.739615   43137 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0401 18:56:09.739623   43137 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0401 18:56:09.739630   43137 command_runner.go:130] >       ],
	I0401 18:56:09.739636   43137 command_runner.go:130] >       "size": "31470524",
	I0401 18:56:09.739647   43137 command_runner.go:130] >       "uid": null,
	I0401 18:56:09.739653   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.739659   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.739668   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.739673   43137 command_runner.go:130] >     },
	I0401 18:56:09.739678   43137 command_runner.go:130] >     {
	I0401 18:56:09.739690   43137 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0401 18:56:09.739694   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.739700   43137 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0401 18:56:09.739704   43137 command_runner.go:130] >       ],
	I0401 18:56:09.739707   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.739718   43137 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0401 18:56:09.739741   43137 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0401 18:56:09.739751   43137 command_runner.go:130] >       ],
	I0401 18:56:09.739758   43137 command_runner.go:130] >       "size": "61245718",
	I0401 18:56:09.739768   43137 command_runner.go:130] >       "uid": null,
	I0401 18:56:09.739778   43137 command_runner.go:130] >       "username": "nonroot",
	I0401 18:56:09.739788   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.739796   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.739812   43137 command_runner.go:130] >     },
	I0401 18:56:09.739827   43137 command_runner.go:130] >     {
	I0401 18:56:09.739837   43137 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0401 18:56:09.739843   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.739860   43137 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0401 18:56:09.739869   43137 command_runner.go:130] >       ],
	I0401 18:56:09.739875   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.739886   43137 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0401 18:56:09.739901   43137 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0401 18:56:09.739910   43137 command_runner.go:130] >       ],
	I0401 18:56:09.739920   43137 command_runner.go:130] >       "size": "150779692",
	I0401 18:56:09.739929   43137 command_runner.go:130] >       "uid": {
	I0401 18:56:09.739938   43137 command_runner.go:130] >         "value": "0"
	I0401 18:56:09.739947   43137 command_runner.go:130] >       },
	I0401 18:56:09.739954   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.739962   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.739969   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.739975   43137 command_runner.go:130] >     },
	I0401 18:56:09.739983   43137 command_runner.go:130] >     {
	I0401 18:56:09.739996   43137 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0401 18:56:09.740006   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.740014   43137 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0401 18:56:09.740022   43137 command_runner.go:130] >       ],
	I0401 18:56:09.740032   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.740045   43137 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0401 18:56:09.740056   43137 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0401 18:56:09.740064   43137 command_runner.go:130] >       ],
	I0401 18:56:09.740075   43137 command_runner.go:130] >       "size": "128508878",
	I0401 18:56:09.740084   43137 command_runner.go:130] >       "uid": {
	I0401 18:56:09.740094   43137 command_runner.go:130] >         "value": "0"
	I0401 18:56:09.740099   43137 command_runner.go:130] >       },
	I0401 18:56:09.740108   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.740116   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.740123   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.740128   43137 command_runner.go:130] >     },
	I0401 18:56:09.740133   43137 command_runner.go:130] >     {
	I0401 18:56:09.740147   43137 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0401 18:56:09.740158   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.740167   43137 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0401 18:56:09.740175   43137 command_runner.go:130] >       ],
	I0401 18:56:09.740182   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.740197   43137 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0401 18:56:09.740212   43137 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0401 18:56:09.740221   43137 command_runner.go:130] >       ],
	I0401 18:56:09.740226   43137 command_runner.go:130] >       "size": "123142962",
	I0401 18:56:09.740236   43137 command_runner.go:130] >       "uid": {
	I0401 18:56:09.740243   43137 command_runner.go:130] >         "value": "0"
	I0401 18:56:09.740252   43137 command_runner.go:130] >       },
	I0401 18:56:09.740259   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.740268   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.740296   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.740311   43137 command_runner.go:130] >     },
	I0401 18:56:09.740317   43137 command_runner.go:130] >     {
	I0401 18:56:09.740325   43137 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0401 18:56:09.740334   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.740346   43137 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0401 18:56:09.740355   43137 command_runner.go:130] >       ],
	I0401 18:56:09.740370   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.740401   43137 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0401 18:56:09.740416   43137 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0401 18:56:09.740424   43137 command_runner.go:130] >       ],
	I0401 18:56:09.740431   43137 command_runner.go:130] >       "size": "83634073",
	I0401 18:56:09.740439   43137 command_runner.go:130] >       "uid": null,
	I0401 18:56:09.740449   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.740457   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.740467   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.740476   43137 command_runner.go:130] >     },
	I0401 18:56:09.740484   43137 command_runner.go:130] >     {
	I0401 18:56:09.740493   43137 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0401 18:56:09.740509   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.740526   43137 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0401 18:56:09.740532   43137 command_runner.go:130] >       ],
	I0401 18:56:09.740544   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.740556   43137 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0401 18:56:09.740569   43137 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0401 18:56:09.740575   43137 command_runner.go:130] >       ],
	I0401 18:56:09.740582   43137 command_runner.go:130] >       "size": "60724018",
	I0401 18:56:09.740589   43137 command_runner.go:130] >       "uid": {
	I0401 18:56:09.740595   43137 command_runner.go:130] >         "value": "0"
	I0401 18:56:09.740601   43137 command_runner.go:130] >       },
	I0401 18:56:09.740606   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.740611   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.740615   43137 command_runner.go:130] >       "pinned": false
	I0401 18:56:09.740618   43137 command_runner.go:130] >     },
	I0401 18:56:09.740626   43137 command_runner.go:130] >     {
	I0401 18:56:09.740635   43137 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0401 18:56:09.740646   43137 command_runner.go:130] >       "repoTags": [
	I0401 18:56:09.740653   43137 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0401 18:56:09.740658   43137 command_runner.go:130] >       ],
	I0401 18:56:09.740666   43137 command_runner.go:130] >       "repoDigests": [
	I0401 18:56:09.740677   43137 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0401 18:56:09.740696   43137 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0401 18:56:09.740705   43137 command_runner.go:130] >       ],
	I0401 18:56:09.740711   43137 command_runner.go:130] >       "size": "750414",
	I0401 18:56:09.740718   43137 command_runner.go:130] >       "uid": {
	I0401 18:56:09.740722   43137 command_runner.go:130] >         "value": "65535"
	I0401 18:56:09.740728   43137 command_runner.go:130] >       },
	I0401 18:56:09.740733   43137 command_runner.go:130] >       "username": "",
	I0401 18:56:09.740739   43137 command_runner.go:130] >       "spec": null,
	I0401 18:56:09.740747   43137 command_runner.go:130] >       "pinned": true
	I0401 18:56:09.740752   43137 command_runner.go:130] >     }
	I0401 18:56:09.740757   43137 command_runner.go:130] >   ]
	I0401 18:56:09.740763   43137 command_runner.go:130] > }
	I0401 18:56:09.740915   43137 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 18:56:09.740927   43137 cache_images.go:84] Images are preloaded, skipping loading
	I0401 18:56:09.740933   43137 kubeadm.go:928] updating node { 192.168.39.161 8443 v1.29.3 crio true true} ...
	I0401 18:56:09.741056   43137 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-853477 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.161
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-853477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 18:56:09.741141   43137 ssh_runner.go:195] Run: crio config
	I0401 18:56:09.777666   43137 command_runner.go:130] ! time="2024-04-01 18:56:09.754434169Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0401 18:56:09.788522   43137 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0401 18:56:09.798885   43137 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0401 18:56:09.798910   43137 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0401 18:56:09.798921   43137 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0401 18:56:09.798926   43137 command_runner.go:130] > #
	I0401 18:56:09.798935   43137 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0401 18:56:09.798943   43137 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0401 18:56:09.798952   43137 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0401 18:56:09.798973   43137 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0401 18:56:09.798983   43137 command_runner.go:130] > # reload'.
	I0401 18:56:09.798994   43137 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0401 18:56:09.799006   43137 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0401 18:56:09.799016   43137 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0401 18:56:09.799029   43137 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0401 18:56:09.799039   43137 command_runner.go:130] > [crio]
	I0401 18:56:09.799049   43137 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0401 18:56:09.799056   43137 command_runner.go:130] > # containers images, in this directory.
	I0401 18:56:09.799063   43137 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0401 18:56:09.799075   43137 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0401 18:56:09.799084   43137 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0401 18:56:09.799097   43137 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0401 18:56:09.799103   43137 command_runner.go:130] > # imagestore = ""
	I0401 18:56:09.799109   43137 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0401 18:56:09.799118   43137 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0401 18:56:09.799125   43137 command_runner.go:130] > storage_driver = "overlay"
	I0401 18:56:09.799130   43137 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0401 18:56:09.799138   43137 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0401 18:56:09.799145   43137 command_runner.go:130] > storage_option = [
	I0401 18:56:09.799150   43137 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0401 18:56:09.799155   43137 command_runner.go:130] > ]
	I0401 18:56:09.799162   43137 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0401 18:56:09.799170   43137 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0401 18:56:09.799177   43137 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0401 18:56:09.799182   43137 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0401 18:56:09.799190   43137 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0401 18:56:09.799197   43137 command_runner.go:130] > # always happen on a node reboot
	I0401 18:56:09.799202   43137 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0401 18:56:09.799215   43137 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0401 18:56:09.799223   43137 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0401 18:56:09.799228   43137 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0401 18:56:09.799236   43137 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0401 18:56:09.799243   43137 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0401 18:56:09.799253   43137 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0401 18:56:09.799259   43137 command_runner.go:130] > # internal_wipe = true
	I0401 18:56:09.799267   43137 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0401 18:56:09.799276   43137 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0401 18:56:09.799282   43137 command_runner.go:130] > # internal_repair = false
	I0401 18:56:09.799288   43137 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0401 18:56:09.799301   43137 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0401 18:56:09.799308   43137 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0401 18:56:09.799319   43137 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0401 18:56:09.799330   43137 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0401 18:56:09.799336   43137 command_runner.go:130] > [crio.api]
	I0401 18:56:09.799342   43137 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0401 18:56:09.799349   43137 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0401 18:56:09.799354   43137 command_runner.go:130] > # IP address on which the stream server will listen.
	I0401 18:56:09.799366   43137 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0401 18:56:09.799375   43137 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0401 18:56:09.799382   43137 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0401 18:56:09.799386   43137 command_runner.go:130] > # stream_port = "0"
	I0401 18:56:09.799393   43137 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0401 18:56:09.799400   43137 command_runner.go:130] > # stream_enable_tls = false
	I0401 18:56:09.799406   43137 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0401 18:56:09.799413   43137 command_runner.go:130] > # stream_idle_timeout = ""
	I0401 18:56:09.799419   43137 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0401 18:56:09.799427   43137 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0401 18:56:09.799433   43137 command_runner.go:130] > # minutes.
	I0401 18:56:09.799437   43137 command_runner.go:130] > # stream_tls_cert = ""
	I0401 18:56:09.799446   43137 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0401 18:56:09.799454   43137 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0401 18:56:09.799459   43137 command_runner.go:130] > # stream_tls_key = ""
	I0401 18:56:09.799464   43137 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0401 18:56:09.799473   43137 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0401 18:56:09.799493   43137 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0401 18:56:09.799503   43137 command_runner.go:130] > # stream_tls_ca = ""
	I0401 18:56:09.799510   43137 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0401 18:56:09.799514   43137 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0401 18:56:09.799521   43137 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0401 18:56:09.799528   43137 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0401 18:56:09.799534   43137 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0401 18:56:09.799541   43137 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0401 18:56:09.799545   43137 command_runner.go:130] > [crio.runtime]
	I0401 18:56:09.799550   43137 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0401 18:56:09.799558   43137 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0401 18:56:09.799564   43137 command_runner.go:130] > # "nofile=1024:2048"
	I0401 18:56:09.799570   43137 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0401 18:56:09.799577   43137 command_runner.go:130] > # default_ulimits = [
	I0401 18:56:09.799580   43137 command_runner.go:130] > # ]
	I0401 18:56:09.799588   43137 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0401 18:56:09.799594   43137 command_runner.go:130] > # no_pivot = false
	I0401 18:56:09.799602   43137 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0401 18:56:09.799611   43137 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0401 18:56:09.799623   43137 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0401 18:56:09.799636   43137 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0401 18:56:09.799647   43137 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0401 18:56:09.799659   43137 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0401 18:56:09.799667   43137 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0401 18:56:09.799678   43137 command_runner.go:130] > # Cgroup setting for conmon
	I0401 18:56:09.799690   43137 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0401 18:56:09.799700   43137 command_runner.go:130] > conmon_cgroup = "pod"
	I0401 18:56:09.799709   43137 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0401 18:56:09.799719   43137 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0401 18:56:09.799729   43137 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0401 18:56:09.799737   43137 command_runner.go:130] > conmon_env = [
	I0401 18:56:09.799746   43137 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0401 18:56:09.799753   43137 command_runner.go:130] > ]
	I0401 18:56:09.799759   43137 command_runner.go:130] > # Additional environment variables to set for all the
	I0401 18:56:09.799764   43137 command_runner.go:130] > # containers. These are overridden if set in the
	I0401 18:56:09.799772   43137 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0401 18:56:09.799776   43137 command_runner.go:130] > # default_env = [
	I0401 18:56:09.799780   43137 command_runner.go:130] > # ]
	I0401 18:56:09.799787   43137 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0401 18:56:09.799794   43137 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0401 18:56:09.799800   43137 command_runner.go:130] > # selinux = false
	I0401 18:56:09.799806   43137 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0401 18:56:09.799814   43137 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0401 18:56:09.799822   43137 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0401 18:56:09.799828   43137 command_runner.go:130] > # seccomp_profile = ""
	I0401 18:56:09.799834   43137 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0401 18:56:09.799841   43137 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0401 18:56:09.799858   43137 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0401 18:56:09.799866   43137 command_runner.go:130] > # which might increase security.
	I0401 18:56:09.799870   43137 command_runner.go:130] > # This option is currently deprecated,
	I0401 18:56:09.799878   43137 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0401 18:56:09.799883   43137 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0401 18:56:09.799889   43137 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0401 18:56:09.799897   43137 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0401 18:56:09.799908   43137 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0401 18:56:09.799922   43137 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0401 18:56:09.799929   43137 command_runner.go:130] > # This option supports live configuration reload.
	I0401 18:56:09.799934   43137 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0401 18:56:09.799942   43137 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0401 18:56:09.799946   43137 command_runner.go:130] > # the cgroup blockio controller.
	I0401 18:56:09.799952   43137 command_runner.go:130] > # blockio_config_file = ""
	I0401 18:56:09.799959   43137 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0401 18:56:09.799965   43137 command_runner.go:130] > # blockio parameters.
	I0401 18:56:09.799969   43137 command_runner.go:130] > # blockio_reload = false
	I0401 18:56:09.799980   43137 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0401 18:56:09.799986   43137 command_runner.go:130] > # irqbalance daemon.
	I0401 18:56:09.799991   43137 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0401 18:56:09.800000   43137 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0401 18:56:09.800007   43137 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0401 18:56:09.800015   43137 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0401 18:56:09.800021   43137 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0401 18:56:09.800027   43137 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0401 18:56:09.800033   43137 command_runner.go:130] > # This option supports live configuration reload.
	I0401 18:56:09.800036   43137 command_runner.go:130] > # rdt_config_file = ""
	I0401 18:56:09.800044   43137 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0401 18:56:09.800049   43137 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0401 18:56:09.800080   43137 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0401 18:56:09.800088   43137 command_runner.go:130] > # separate_pull_cgroup = ""
	I0401 18:56:09.800093   43137 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0401 18:56:09.800099   43137 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0401 18:56:09.800105   43137 command_runner.go:130] > # will be added.
	I0401 18:56:09.800109   43137 command_runner.go:130] > # default_capabilities = [
	I0401 18:56:09.800115   43137 command_runner.go:130] > # 	"CHOWN",
	I0401 18:56:09.800119   43137 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0401 18:56:09.800124   43137 command_runner.go:130] > # 	"FSETID",
	I0401 18:56:09.800128   43137 command_runner.go:130] > # 	"FOWNER",
	I0401 18:56:09.800134   43137 command_runner.go:130] > # 	"SETGID",
	I0401 18:56:09.800138   43137 command_runner.go:130] > # 	"SETUID",
	I0401 18:56:09.800144   43137 command_runner.go:130] > # 	"SETPCAP",
	I0401 18:56:09.800148   43137 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0401 18:56:09.800154   43137 command_runner.go:130] > # 	"KILL",
	I0401 18:56:09.800167   43137 command_runner.go:130] > # ]
	I0401 18:56:09.800177   43137 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0401 18:56:09.800185   43137 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0401 18:56:09.800194   43137 command_runner.go:130] > # add_inheritable_capabilities = false
	I0401 18:56:09.800203   43137 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0401 18:56:09.800211   43137 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0401 18:56:09.800214   43137 command_runner.go:130] > default_sysctls = [
	I0401 18:56:09.800221   43137 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0401 18:56:09.800224   43137 command_runner.go:130] > ]
	I0401 18:56:09.800229   43137 command_runner.go:130] > # List of devices on the host that a
	I0401 18:56:09.800237   43137 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0401 18:56:09.800243   43137 command_runner.go:130] > # allowed_devices = [
	I0401 18:56:09.800247   43137 command_runner.go:130] > # 	"/dev/fuse",
	I0401 18:56:09.800252   43137 command_runner.go:130] > # ]
	I0401 18:56:09.800256   43137 command_runner.go:130] > # List of additional devices. specified as
	I0401 18:56:09.800264   43137 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0401 18:56:09.800271   43137 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0401 18:56:09.800276   43137 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0401 18:56:09.800283   43137 command_runner.go:130] > # additional_devices = [
	I0401 18:56:09.800286   43137 command_runner.go:130] > # ]
	I0401 18:56:09.800294   43137 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0401 18:56:09.800298   43137 command_runner.go:130] > # cdi_spec_dirs = [
	I0401 18:56:09.800302   43137 command_runner.go:130] > # 	"/etc/cdi",
	I0401 18:56:09.800305   43137 command_runner.go:130] > # 	"/var/run/cdi",
	I0401 18:56:09.800311   43137 command_runner.go:130] > # ]
	I0401 18:56:09.800317   43137 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0401 18:56:09.800326   43137 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0401 18:56:09.800332   43137 command_runner.go:130] > # Defaults to false.
	I0401 18:56:09.800337   43137 command_runner.go:130] > # device_ownership_from_security_context = false
	I0401 18:56:09.800345   43137 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0401 18:56:09.800353   43137 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0401 18:56:09.800362   43137 command_runner.go:130] > # hooks_dir = [
	I0401 18:56:09.800369   43137 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0401 18:56:09.800372   43137 command_runner.go:130] > # ]
	I0401 18:56:09.800378   43137 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0401 18:56:09.800387   43137 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0401 18:56:09.800398   43137 command_runner.go:130] > # its default mounts from the following two files:
	I0401 18:56:09.800404   43137 command_runner.go:130] > #
	I0401 18:56:09.800410   43137 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0401 18:56:09.800419   43137 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0401 18:56:09.800424   43137 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0401 18:56:09.800429   43137 command_runner.go:130] > #
	I0401 18:56:09.800435   43137 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0401 18:56:09.800443   43137 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0401 18:56:09.800452   43137 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0401 18:56:09.800461   43137 command_runner.go:130] > #      only add mounts it finds in this file.
	I0401 18:56:09.800465   43137 command_runner.go:130] > #
	I0401 18:56:09.800470   43137 command_runner.go:130] > # default_mounts_file = ""
	I0401 18:56:09.800477   43137 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0401 18:56:09.800483   43137 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0401 18:56:09.800489   43137 command_runner.go:130] > pids_limit = 1024
	I0401 18:56:09.800495   43137 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0401 18:56:09.800503   43137 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0401 18:56:09.800509   43137 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0401 18:56:09.800519   43137 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0401 18:56:09.800526   43137 command_runner.go:130] > # log_size_max = -1
	I0401 18:56:09.800536   43137 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0401 18:56:09.800545   43137 command_runner.go:130] > # log_to_journald = false
	I0401 18:56:09.800558   43137 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0401 18:56:09.800568   43137 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0401 18:56:09.800579   43137 command_runner.go:130] > # Path to directory for container attach sockets.
	I0401 18:56:09.800590   43137 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0401 18:56:09.800600   43137 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0401 18:56:09.800609   43137 command_runner.go:130] > # bind_mount_prefix = ""
	I0401 18:56:09.800621   43137 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0401 18:56:09.800638   43137 command_runner.go:130] > # read_only = false
	I0401 18:56:09.800654   43137 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0401 18:56:09.800667   43137 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0401 18:56:09.800676   43137 command_runner.go:130] > # live configuration reload.
	I0401 18:56:09.800682   43137 command_runner.go:130] > # log_level = "info"
	I0401 18:56:09.800690   43137 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0401 18:56:09.800701   43137 command_runner.go:130] > # This option supports live configuration reload.
	I0401 18:56:09.800720   43137 command_runner.go:130] > # log_filter = ""
	I0401 18:56:09.800732   43137 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0401 18:56:09.800744   43137 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0401 18:56:09.800751   43137 command_runner.go:130] > # separated by comma.
	I0401 18:56:09.800762   43137 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0401 18:56:09.800772   43137 command_runner.go:130] > # uid_mappings = ""
	I0401 18:56:09.800781   43137 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0401 18:56:09.800793   43137 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0401 18:56:09.800802   43137 command_runner.go:130] > # separated by comma.
	I0401 18:56:09.800816   43137 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0401 18:56:09.800830   43137 command_runner.go:130] > # gid_mappings = ""
	I0401 18:56:09.800843   43137 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0401 18:56:09.800860   43137 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0401 18:56:09.800872   43137 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0401 18:56:09.800886   43137 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0401 18:56:09.800895   43137 command_runner.go:130] > # minimum_mappable_uid = -1
	I0401 18:56:09.800908   43137 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0401 18:56:09.800920   43137 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0401 18:56:09.800933   43137 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0401 18:56:09.800947   43137 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0401 18:56:09.800957   43137 command_runner.go:130] > # minimum_mappable_gid = -1
	I0401 18:56:09.800969   43137 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0401 18:56:09.800982   43137 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0401 18:56:09.800993   43137 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0401 18:56:09.801002   43137 command_runner.go:130] > # ctr_stop_timeout = 30
	I0401 18:56:09.801013   43137 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0401 18:56:09.801025   43137 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0401 18:56:09.801036   43137 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0401 18:56:09.801048   43137 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0401 18:56:09.801057   43137 command_runner.go:130] > drop_infra_ctr = false
	I0401 18:56:09.801070   43137 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0401 18:56:09.801081   43137 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0401 18:56:09.801095   43137 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0401 18:56:09.801105   43137 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0401 18:56:09.801115   43137 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0401 18:56:09.801127   43137 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0401 18:56:09.801146   43137 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0401 18:56:09.801157   43137 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0401 18:56:09.801167   43137 command_runner.go:130] > # shared_cpuset = ""
	I0401 18:56:09.801176   43137 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0401 18:56:09.801187   43137 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0401 18:56:09.801193   43137 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0401 18:56:09.801204   43137 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0401 18:56:09.801213   43137 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0401 18:56:09.801221   43137 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0401 18:56:09.801237   43137 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0401 18:56:09.801247   43137 command_runner.go:130] > # enable_criu_support = false
	I0401 18:56:09.801255   43137 command_runner.go:130] > # Enable/disable the generation of the container,
	I0401 18:56:09.801267   43137 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0401 18:56:09.801277   43137 command_runner.go:130] > # enable_pod_events = false
	I0401 18:56:09.801285   43137 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0401 18:56:09.801297   43137 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0401 18:56:09.801308   43137 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0401 18:56:09.801316   43137 command_runner.go:130] > # default_runtime = "runc"
	I0401 18:56:09.801326   43137 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0401 18:56:09.801338   43137 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0401 18:56:09.801354   43137 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0401 18:56:09.801365   43137 command_runner.go:130] > # creation as a file is not desired either.
	I0401 18:56:09.801376   43137 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0401 18:56:09.801388   43137 command_runner.go:130] > # the hostname is being managed dynamically.
	I0401 18:56:09.801400   43137 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0401 18:56:09.801408   43137 command_runner.go:130] > # ]
	I0401 18:56:09.801420   43137 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0401 18:56:09.801432   43137 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0401 18:56:09.801443   43137 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0401 18:56:09.801454   43137 command_runner.go:130] > # Each entry in the table should follow the format:
	I0401 18:56:09.801458   43137 command_runner.go:130] > #
	I0401 18:56:09.801467   43137 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0401 18:56:09.801474   43137 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0401 18:56:09.801520   43137 command_runner.go:130] > # runtime_type = "oci"
	I0401 18:56:09.801534   43137 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0401 18:56:09.801539   43137 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0401 18:56:09.801548   43137 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0401 18:56:09.801553   43137 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0401 18:56:09.801559   43137 command_runner.go:130] > # monitor_env = []
	I0401 18:56:09.801564   43137 command_runner.go:130] > # privileged_without_host_devices = false
	I0401 18:56:09.801570   43137 command_runner.go:130] > # allowed_annotations = []
	I0401 18:56:09.801575   43137 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0401 18:56:09.801581   43137 command_runner.go:130] > # Where:
	I0401 18:56:09.801586   43137 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0401 18:56:09.801594   43137 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0401 18:56:09.801603   43137 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0401 18:56:09.801612   43137 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0401 18:56:09.801624   43137 command_runner.go:130] > #   in $PATH.
	I0401 18:56:09.801636   43137 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0401 18:56:09.801660   43137 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0401 18:56:09.801673   43137 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0401 18:56:09.801682   43137 command_runner.go:130] > #   state.
	I0401 18:56:09.801694   43137 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0401 18:56:09.801705   43137 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0401 18:56:09.801718   43137 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0401 18:56:09.801729   43137 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0401 18:56:09.801741   43137 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0401 18:56:09.801754   43137 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0401 18:56:09.801764   43137 command_runner.go:130] > #   The currently recognized values are:
	I0401 18:56:09.801777   43137 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0401 18:56:09.801791   43137 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0401 18:56:09.801800   43137 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0401 18:56:09.801808   43137 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0401 18:56:09.801818   43137 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0401 18:56:09.801827   43137 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0401 18:56:09.801835   43137 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0401 18:56:09.801844   43137 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0401 18:56:09.801850   43137 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0401 18:56:09.801862   43137 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0401 18:56:09.801868   43137 command_runner.go:130] > #   deprecated option "conmon".
	I0401 18:56:09.801875   43137 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0401 18:56:09.801882   43137 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0401 18:56:09.801894   43137 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0401 18:56:09.801902   43137 command_runner.go:130] > #   should be moved to the container's cgroup
	I0401 18:56:09.801911   43137 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0401 18:56:09.801918   43137 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0401 18:56:09.801924   43137 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0401 18:56:09.801932   43137 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0401 18:56:09.801937   43137 command_runner.go:130] > #
	I0401 18:56:09.801941   43137 command_runner.go:130] > # Using the seccomp notifier feature:
	I0401 18:56:09.801949   43137 command_runner.go:130] > #
	I0401 18:56:09.801955   43137 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0401 18:56:09.801963   43137 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0401 18:56:09.801966   43137 command_runner.go:130] > #
	I0401 18:56:09.801975   43137 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0401 18:56:09.801983   43137 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0401 18:56:09.801986   43137 command_runner.go:130] > #
	I0401 18:56:09.801995   43137 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0401 18:56:09.802001   43137 command_runner.go:130] > # feature.
	I0401 18:56:09.802004   43137 command_runner.go:130] > #
	I0401 18:56:09.802012   43137 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0401 18:56:09.802020   43137 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0401 18:56:09.802029   43137 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0401 18:56:09.802037   43137 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0401 18:56:09.802043   43137 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0401 18:56:09.802048   43137 command_runner.go:130] > #
	I0401 18:56:09.802053   43137 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0401 18:56:09.802059   43137 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0401 18:56:09.802064   43137 command_runner.go:130] > #
	I0401 18:56:09.802070   43137 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0401 18:56:09.802083   43137 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0401 18:56:09.802088   43137 command_runner.go:130] > #
	I0401 18:56:09.802096   43137 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0401 18:56:09.802105   43137 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0401 18:56:09.802110   43137 command_runner.go:130] > # limitation.
	I0401 18:56:09.802116   43137 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0401 18:56:09.802122   43137 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0401 18:56:09.802126   43137 command_runner.go:130] > runtime_type = "oci"
	I0401 18:56:09.802137   43137 command_runner.go:130] > runtime_root = "/run/runc"
	I0401 18:56:09.802141   43137 command_runner.go:130] > runtime_config_path = ""
	I0401 18:56:09.802149   43137 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0401 18:56:09.802153   43137 command_runner.go:130] > monitor_cgroup = "pod"
	I0401 18:56:09.802159   43137 command_runner.go:130] > monitor_exec_cgroup = ""
	I0401 18:56:09.802163   43137 command_runner.go:130] > monitor_env = [
	I0401 18:56:09.802171   43137 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0401 18:56:09.802176   43137 command_runner.go:130] > ]
	I0401 18:56:09.802181   43137 command_runner.go:130] > privileged_without_host_devices = false
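
	For illustration only (not part of the captured log): a minimal sketch of how an additional runtime handler that is allowed to process the seccomp notifier annotation described above might be declared. The handler name "crun-notify" and the crun path are assumptions, not values from this cluster.

	[crio.runtime.runtimes.crun-notify]
	runtime_path = "/usr/bin/crun"        # assumed install location, not taken from this host
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]

	A pod that should use this handler would then carry the annotation io.kubernetes.cri-o.seccompNotifierAction=stop and set restartPolicy: Never, as the comments above require.
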
	I0401 18:56:09.802189   43137 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0401 18:56:09.802196   43137 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0401 18:56:09.802202   43137 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0401 18:56:09.802211   43137 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0401 18:56:09.802223   43137 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0401 18:56:09.802230   43137 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0401 18:56:09.802241   43137 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0401 18:56:09.802250   43137 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0401 18:56:09.802258   43137 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0401 18:56:09.802265   43137 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0401 18:56:09.802271   43137 command_runner.go:130] > # Example:
	I0401 18:56:09.802276   43137 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0401 18:56:09.802283   43137 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0401 18:56:09.802288   43137 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0401 18:56:09.802295   43137 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0401 18:56:09.802299   43137 command_runner.go:130] > # cpuset = 0
	I0401 18:56:09.802303   43137 command_runner.go:130] > # cpushares = "0-1"
	I0401 18:56:09.802306   43137 command_runner.go:130] > # Where:
	I0401 18:56:09.802311   43137 command_runner.go:130] > # The workload name is workload-type.
	I0401 18:56:09.802320   43137 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0401 18:56:09.802327   43137 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0401 18:56:09.802336   43137 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0401 18:56:09.802346   43137 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0401 18:56:09.802354   43137 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
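
	For illustration only (not part of the captured log): a minimal sketch of the workloads table described above, using a hypothetical workload name "throttled"; the annotation keys and resource values are made-up examples.

	[crio.runtime.workloads.throttled]
	activation_annotation = "io.crio/throttled"
	annotation_prefix = "io.crio.throttled"
	[crio.runtime.workloads.throttled.resources]
	cpushares = "512"
	cpuset = "0-1"

	A pod opts in by carrying the io.crio/throttled annotation (key only, value ignored); per-container overrides use the $annotation_prefix.$resource/$ctrName form shown in the comments above.
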
	I0401 18:56:09.802359   43137 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0401 18:56:09.802366   43137 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0401 18:56:09.802373   43137 command_runner.go:130] > # Default value is set to true
	I0401 18:56:09.802381   43137 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0401 18:56:09.802389   43137 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0401 18:56:09.802394   43137 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0401 18:56:09.802401   43137 command_runner.go:130] > # Default value is set to 'false'
	I0401 18:56:09.802405   43137 command_runner.go:130] > # disable_hostport_mapping = false
	I0401 18:56:09.802411   43137 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0401 18:56:09.802414   43137 command_runner.go:130] > #
	I0401 18:56:09.802420   43137 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0401 18:56:09.802425   43137 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0401 18:56:09.802431   43137 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0401 18:56:09.802436   43137 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0401 18:56:09.802443   43137 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0401 18:56:09.802446   43137 command_runner.go:130] > [crio.image]
	I0401 18:56:09.802451   43137 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0401 18:56:09.802455   43137 command_runner.go:130] > # default_transport = "docker://"
	I0401 18:56:09.802460   43137 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0401 18:56:09.802466   43137 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0401 18:56:09.802470   43137 command_runner.go:130] > # global_auth_file = ""
	I0401 18:56:09.802474   43137 command_runner.go:130] > # The image used to instantiate infra containers.
	I0401 18:56:09.802479   43137 command_runner.go:130] > # This option supports live configuration reload.
	I0401 18:56:09.802483   43137 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0401 18:56:09.802488   43137 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0401 18:56:09.802494   43137 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0401 18:56:09.802498   43137 command_runner.go:130] > # This option supports live configuration reload.
	I0401 18:56:09.802502   43137 command_runner.go:130] > # pause_image_auth_file = ""
	I0401 18:56:09.802507   43137 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0401 18:56:09.802512   43137 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0401 18:56:09.802518   43137 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0401 18:56:09.802523   43137 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0401 18:56:09.802527   43137 command_runner.go:130] > # pause_command = "/pause"
	I0401 18:56:09.802532   43137 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0401 18:56:09.802537   43137 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0401 18:56:09.802543   43137 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0401 18:56:09.802549   43137 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0401 18:56:09.802554   43137 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0401 18:56:09.802560   43137 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0401 18:56:09.802569   43137 command_runner.go:130] > # pinned_images = [
	I0401 18:56:09.802572   43137 command_runner.go:130] > # ]
	I0401 18:56:09.802578   43137 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0401 18:56:09.802583   43137 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0401 18:56:09.802589   43137 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0401 18:56:09.802597   43137 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0401 18:56:09.802603   43137 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0401 18:56:09.802610   43137 command_runner.go:130] > # signature_policy = ""
	I0401 18:56:09.802616   43137 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0401 18:56:09.802629   43137 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0401 18:56:09.802641   43137 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0401 18:56:09.802656   43137 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0401 18:56:09.802668   43137 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0401 18:56:09.802679   43137 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0401 18:56:09.802691   43137 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0401 18:56:09.802703   43137 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0401 18:56:09.802712   43137 command_runner.go:130] > # changing them here.
	I0401 18:56:09.802716   43137 command_runner.go:130] > # insecure_registries = [
	I0401 18:56:09.802722   43137 command_runner.go:130] > # ]
	I0401 18:56:09.802728   43137 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0401 18:56:09.802736   43137 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0401 18:56:09.802743   43137 command_runner.go:130] > # image_volumes = "mkdir"
	I0401 18:56:09.802748   43137 command_runner.go:130] > # Temporary directory to use for storing big files
	I0401 18:56:09.802754   43137 command_runner.go:130] > # big_files_temporary_dir = ""
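
	For illustration only (not part of the captured log): a minimal sketch of the [crio.image] overrides discussed above; the registry host name is a made-up example and the pause image matches the default shown in the comments.

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	pinned_images = [
		"registry.k8s.io/pause:3.9",
	]
	insecure_registries = [
		"registry.local:5000",
	]
	image_volumes = "mkdir"
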
	I0401 18:56:09.802760   43137 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0401 18:56:09.802766   43137 command_runner.go:130] > # CNI plugins.
	I0401 18:56:09.802770   43137 command_runner.go:130] > [crio.network]
	I0401 18:56:09.802778   43137 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0401 18:56:09.802784   43137 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0401 18:56:09.802795   43137 command_runner.go:130] > # cni_default_network = ""
	I0401 18:56:09.802804   43137 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0401 18:56:09.802808   43137 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0401 18:56:09.802815   43137 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0401 18:56:09.802819   43137 command_runner.go:130] > # plugin_dirs = [
	I0401 18:56:09.802825   43137 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0401 18:56:09.802828   43137 command_runner.go:130] > # ]
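
	For illustration only (not part of the captured log): selecting a specific CNI network instead of the first one found in network_dir; the network name "kindnet" is an assumption, not read from this cluster.

	[crio.network]
	cni_default_network = "kindnet"
	network_dir = "/etc/cni/net.d/"
	plugin_dirs = [
		"/opt/cni/bin/",
	]
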
	I0401 18:56:09.802840   43137 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0401 18:56:09.802847   43137 command_runner.go:130] > [crio.metrics]
	I0401 18:56:09.802851   43137 command_runner.go:130] > # Globally enable or disable metrics support.
	I0401 18:56:09.802859   43137 command_runner.go:130] > enable_metrics = true
	I0401 18:56:09.802865   43137 command_runner.go:130] > # Specify enabled metrics collectors.
	I0401 18:56:09.802870   43137 command_runner.go:130] > # Per default all metrics are enabled.
	I0401 18:56:09.802878   43137 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0401 18:56:09.802886   43137 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0401 18:56:09.802892   43137 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0401 18:56:09.802898   43137 command_runner.go:130] > # metrics_collectors = [
	I0401 18:56:09.802902   43137 command_runner.go:130] > # 	"operations",
	I0401 18:56:09.802910   43137 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0401 18:56:09.802917   43137 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0401 18:56:09.802921   43137 command_runner.go:130] > # 	"operations_errors",
	I0401 18:56:09.802927   43137 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0401 18:56:09.802931   43137 command_runner.go:130] > # 	"image_pulls_by_name",
	I0401 18:56:09.802937   43137 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0401 18:56:09.802943   43137 command_runner.go:130] > # 	"image_pulls_failures",
	I0401 18:56:09.802949   43137 command_runner.go:130] > # 	"image_pulls_successes",
	I0401 18:56:09.802953   43137 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0401 18:56:09.802960   43137 command_runner.go:130] > # 	"image_layer_reuse",
	I0401 18:56:09.802964   43137 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0401 18:56:09.802970   43137 command_runner.go:130] > # 	"containers_oom_total",
	I0401 18:56:09.802974   43137 command_runner.go:130] > # 	"containers_oom",
	I0401 18:56:09.802978   43137 command_runner.go:130] > # 	"processes_defunct",
	I0401 18:56:09.802984   43137 command_runner.go:130] > # 	"operations_total",
	I0401 18:56:09.802988   43137 command_runner.go:130] > # 	"operations_latency_seconds",
	I0401 18:56:09.802994   43137 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0401 18:56:09.802999   43137 command_runner.go:130] > # 	"operations_errors_total",
	I0401 18:56:09.803005   43137 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0401 18:56:09.803010   43137 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0401 18:56:09.803014   43137 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0401 18:56:09.803021   43137 command_runner.go:130] > # 	"image_pulls_success_total",
	I0401 18:56:09.803025   43137 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0401 18:56:09.803032   43137 command_runner.go:130] > # 	"containers_oom_count_total",
	I0401 18:56:09.803037   43137 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0401 18:56:09.803049   43137 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0401 18:56:09.803055   43137 command_runner.go:130] > # ]
	I0401 18:56:09.803060   43137 command_runner.go:130] > # The port on which the metrics server will listen.
	I0401 18:56:09.803064   43137 command_runner.go:130] > # metrics_port = 9090
	I0401 18:56:09.803072   43137 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0401 18:56:09.803077   43137 command_runner.go:130] > # metrics_socket = ""
	I0401 18:56:09.803082   43137 command_runner.go:130] > # The certificate for the secure metrics server.
	I0401 18:56:09.803090   43137 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0401 18:56:09.803099   43137 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0401 18:56:09.803106   43137 command_runner.go:130] > # certificate on any modification event.
	I0401 18:56:09.803110   43137 command_runner.go:130] > # metrics_cert = ""
	I0401 18:56:09.803116   43137 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0401 18:56:09.803123   43137 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0401 18:56:09.803127   43137 command_runner.go:130] > # metrics_key = ""
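
	For illustration only (not part of the captured log): enabling only a subset of the metrics collectors listed above; the chosen collectors are an arbitrary example.

	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	metrics_collectors = [
		"operations_total",
		"image_pulls_failure_total",
		"containers_oom_count_total",
	]
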
	I0401 18:56:09.803135   43137 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0401 18:56:09.803141   43137 command_runner.go:130] > [crio.tracing]
	I0401 18:56:09.803146   43137 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0401 18:56:09.803150   43137 command_runner.go:130] > # enable_tracing = false
	I0401 18:56:09.803157   43137 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0401 18:56:09.803162   43137 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0401 18:56:09.803171   43137 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0401 18:56:09.803178   43137 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
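
	For illustration only (not part of the captured log): turning on trace export to a collector; the endpoint is the default shown above, and a sampling rate of 1000000 samples every span, per the comment above.

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "0.0.0.0:4317"
	tracing_sampling_rate_per_million = 1000000
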
	I0401 18:56:09.803182   43137 command_runner.go:130] > # CRI-O NRI configuration.
	I0401 18:56:09.803187   43137 command_runner.go:130] > [crio.nri]
	I0401 18:56:09.803192   43137 command_runner.go:130] > # Globally enable or disable NRI.
	I0401 18:56:09.803199   43137 command_runner.go:130] > # enable_nri = false
	I0401 18:56:09.803205   43137 command_runner.go:130] > # NRI socket to listen on.
	I0401 18:56:09.803212   43137 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0401 18:56:09.803216   43137 command_runner.go:130] > # NRI plugin directory to use.
	I0401 18:56:09.803227   43137 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0401 18:56:09.803234   43137 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0401 18:56:09.803239   43137 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0401 18:56:09.803246   43137 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0401 18:56:09.803253   43137 command_runner.go:130] > # nri_disable_connections = false
	I0401 18:56:09.803258   43137 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0401 18:56:09.803265   43137 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0401 18:56:09.803274   43137 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0401 18:56:09.803282   43137 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
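
	For illustration only (not part of the captured log): enabling NRI with the default socket and plugin paths listed above.

	[crio.nri]
	enable_nri = true
	nri_listen = "/var/run/nri/nri.sock"
	nri_plugin_dir = "/opt/nri/plugins"
	nri_plugin_config_dir = "/etc/nri/conf.d"
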
	I0401 18:56:09.803287   43137 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0401 18:56:09.803293   43137 command_runner.go:130] > [crio.stats]
	I0401 18:56:09.803299   43137 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0401 18:56:09.803306   43137 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0401 18:56:09.803310   43137 command_runner.go:130] > # stats_collection_period = 0
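
	For illustration only (not part of the captured log): collecting pod and container stats on a fixed period instead of on-demand; the 10-second value is an arbitrary example.

	[crio.stats]
	stats_collection_period = 10
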
	I0401 18:56:09.803442   43137 cni.go:84] Creating CNI manager for ""
	I0401 18:56:09.803454   43137 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0401 18:56:09.803463   43137 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 18:56:09.803481   43137 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.161 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-853477 NodeName:multinode-853477 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.161"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.161 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 18:56:09.803621   43137 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.161
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-853477"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.161
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.161"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 18:56:09.803696   43137 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 18:56:09.815494   43137 command_runner.go:130] > kubeadm
	I0401 18:56:09.815510   43137 command_runner.go:130] > kubectl
	I0401 18:56:09.815514   43137 command_runner.go:130] > kubelet
	I0401 18:56:09.815532   43137 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 18:56:09.815580   43137 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 18:56:09.827098   43137 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0401 18:56:09.845512   43137 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 18:56:09.863536   43137 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0401 18:56:09.882379   43137 ssh_runner.go:195] Run: grep 192.168.39.161	control-plane.minikube.internal$ /etc/hosts
	I0401 18:56:09.886443   43137 command_runner.go:130] > 192.168.39.161	control-plane.minikube.internal
	I0401 18:56:09.886701   43137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 18:56:10.029311   43137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 18:56:10.048364   43137 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477 for IP: 192.168.39.161
	I0401 18:56:10.048384   43137 certs.go:194] generating shared ca certs ...
	I0401 18:56:10.048403   43137 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:56:10.048543   43137 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 18:56:10.048584   43137 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 18:56:10.048593   43137 certs.go:256] generating profile certs ...
	I0401 18:56:10.048690   43137 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477/client.key
	I0401 18:56:10.048746   43137 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477/apiserver.key.fc9b9454
	I0401 18:56:10.048778   43137 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477/proxy-client.key
	I0401 18:56:10.048788   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0401 18:56:10.048803   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0401 18:56:10.048815   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0401 18:56:10.048834   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0401 18:56:10.048852   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0401 18:56:10.048868   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0401 18:56:10.048881   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0401 18:56:10.048892   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0401 18:56:10.048935   43137 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 18:56:10.048963   43137 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 18:56:10.048973   43137 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 18:56:10.048997   43137 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 18:56:10.049025   43137 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 18:56:10.049045   43137 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 18:56:10.049085   43137 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 18:56:10.049109   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem -> /usr/share/ca-certificates/17751.pem
	I0401 18:56:10.049122   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> /usr/share/ca-certificates/177512.pem
	I0401 18:56:10.049134   43137 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:56:10.049658   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 18:56:10.078349   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 18:56:10.104974   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 18:56:10.140833   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 18:56:10.173801   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 18:56:10.199484   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 18:56:10.225370   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 18:56:10.252393   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/multinode-853477/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 18:56:10.278526   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 18:56:10.314120   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 18:56:10.352012   43137 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 18:56:10.377969   43137 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 18:56:10.395047   43137 ssh_runner.go:195] Run: openssl version
	I0401 18:56:10.401245   43137 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0401 18:56:10.401449   43137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 18:56:10.412805   43137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:56:10.417375   43137 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:56:10.417527   43137 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:56:10.417570   43137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 18:56:10.423886   43137 command_runner.go:130] > b5213941
	I0401 18:56:10.424190   43137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 18:56:10.434283   43137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 18:56:10.445286   43137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 18:56:10.450072   43137 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 18:56:10.450125   43137 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 18:56:10.450161   43137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 18:56:10.456110   43137 command_runner.go:130] > 51391683
	I0401 18:56:10.456283   43137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 18:56:10.465671   43137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 18:56:10.476658   43137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 18:56:10.481397   43137 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 18:56:10.481419   43137 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 18:56:10.481451   43137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 18:56:10.487151   43137 command_runner.go:130] > 3ec20f2e
	I0401 18:56:10.487205   43137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 18:56:10.498693   43137 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 18:56:10.503432   43137 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 18:56:10.503446   43137 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0401 18:56:10.503451   43137 command_runner.go:130] > Device: 253,1	Inode: 7339526     Links: 1
	I0401 18:56:10.503458   43137 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0401 18:56:10.503464   43137 command_runner.go:130] > Access: 2024-04-01 18:49:59.744156811 +0000
	I0401 18:56:10.503468   43137 command_runner.go:130] > Modify: 2024-04-01 18:49:59.744156811 +0000
	I0401 18:56:10.503473   43137 command_runner.go:130] > Change: 2024-04-01 18:49:59.744156811 +0000
	I0401 18:56:10.503481   43137 command_runner.go:130] >  Birth: 2024-04-01 18:49:59.744156811 +0000
	I0401 18:56:10.503521   43137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 18:56:10.509209   43137 command_runner.go:130] > Certificate will not expire
	I0401 18:56:10.509401   43137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 18:56:10.515106   43137 command_runner.go:130] > Certificate will not expire
	I0401 18:56:10.515171   43137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 18:56:10.520994   43137 command_runner.go:130] > Certificate will not expire
	I0401 18:56:10.521047   43137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 18:56:10.526518   43137 command_runner.go:130] > Certificate will not expire
	I0401 18:56:10.526683   43137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 18:56:10.532413   43137 command_runner.go:130] > Certificate will not expire
	I0401 18:56:10.532615   43137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 18:56:10.538350   43137 command_runner.go:130] > Certificate will not expire
	I0401 18:56:10.538403   43137 kubeadm.go:391] StartCluster: {Name:multinode-853477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:multinode-853477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.161 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.115 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:56:10.538517   43137 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 18:56:10.538566   43137 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 18:56:10.579309   43137 command_runner.go:130] > f363d25764f108d74d1c4cbdede73e53197698e8cfc9cef20d968a540108693c
	I0401 18:56:10.579331   43137 command_runner.go:130] > 5a98e75b61219ad53b41f90e4b2b7d39ae5d12800f8785637118d47269176df5
	I0401 18:56:10.579337   43137 command_runner.go:130] > ceef8d6cd3cb9dd8e9f3d0597ce26adfd43b02b4adba15febb6c5a429b172af6
	I0401 18:56:10.579347   43137 command_runner.go:130] > f6ce7b69665bbc01d73b598a2d86525641c7d4fbe714ef3997a3688d286471c4
	I0401 18:56:10.579352   43137 command_runner.go:130] > eb60348bd91879fc1995b558036ae53948482d80d31ba95e51d89b06b08a34ef
	I0401 18:56:10.579358   43137 command_runner.go:130] > 2c0ce953a27c267af6dc36c244ee162929b89360da2de35e1bcc350e83cd008c
	I0401 18:56:10.579363   43137 command_runner.go:130] > a358f83537522b0aa3022d82ed82c21a975b8c4647196a4c3761ee917e86e184
	I0401 18:56:10.579369   43137 command_runner.go:130] > 4004daf6fb9e1f819bba0832635a01a785b9e1cbaa7ceefb622a2956cfe7dac8
	I0401 18:56:10.579388   43137 cri.go:89] found id: "f363d25764f108d74d1c4cbdede73e53197698e8cfc9cef20d968a540108693c"
	I0401 18:56:10.579403   43137 cri.go:89] found id: "5a98e75b61219ad53b41f90e4b2b7d39ae5d12800f8785637118d47269176df5"
	I0401 18:56:10.579406   43137 cri.go:89] found id: "ceef8d6cd3cb9dd8e9f3d0597ce26adfd43b02b4adba15febb6c5a429b172af6"
	I0401 18:56:10.579409   43137 cri.go:89] found id: "f6ce7b69665bbc01d73b598a2d86525641c7d4fbe714ef3997a3688d286471c4"
	I0401 18:56:10.579412   43137 cri.go:89] found id: "eb60348bd91879fc1995b558036ae53948482d80d31ba95e51d89b06b08a34ef"
	I0401 18:56:10.579416   43137 cri.go:89] found id: "2c0ce953a27c267af6dc36c244ee162929b89360da2de35e1bcc350e83cd008c"
	I0401 18:56:10.579418   43137 cri.go:89] found id: "a358f83537522b0aa3022d82ed82c21a975b8c4647196a4c3761ee917e86e184"
	I0401 18:56:10.579421   43137 cri.go:89] found id: "4004daf6fb9e1f819bba0832635a01a785b9e1cbaa7ceefb622a2956cfe7dac8"
	I0401 18:56:10.579424   43137 cri.go:89] found id: ""
	I0401 18:56:10.579464   43137 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 01 19:00:02 multinode-853477 crio[2869]: time="2024-04-01 19:00:02.971141830Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e5a2de3-5b0d-4130-9b15-230448e411b7 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:00:02 multinode-853477 crio[2869]: time="2024-04-01 19:00:02.972974570Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e53a6906-581e-4431-b427-3a1340cdacf8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:00:02 multinode-853477 crio[2869]: time="2024-04-01 19:00:02.973387710Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711998002973363050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e53a6906-581e-4431-b427-3a1340cdacf8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:00:02 multinode-853477 crio[2869]: time="2024-04-01 19:00:02.974549058Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ea0ea6d-2d72-453f-8ff7-aa83df702802 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:00:02 multinode-853477 crio[2869]: time="2024-04-01 19:00:02.974633613Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ea0ea6d-2d72-453f-8ff7-aa83df702802 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:00:02 multinode-853477 crio[2869]: time="2024-04-01 19:00:02.975089022Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f3c596e43d9e8d3b31cf53143f5cc478bffad2a4248cca14dc5cd4776f909e,PodSandboxId:9e5ac50e79a76696340adc94dc3ddf7189f2222f44961e793bfb7b7c2b6cad78,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711997811674671722,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-pdvlk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db1681a5-1807-454a-9b1f-90edc80f2243,},Annotations:map[string]string{io.kubernetes.container.hash: fb426c0a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ecfdc0dbda808c8c4ead8c7452d6524da5a13144aba9f938c64d8fa5c5ed1f0,PodSandboxId:2f4f34ebfbf9f62db6d01afc251a6ba79230c0adfa2b5f8c84ab2b82d1a39e6d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711997778129800477,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9rlkp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfd3904-101a-4734-abf3-8cb24d0a5e04,},Annotations:map[string]string{io.kubernetes.container.hash: 5f44814b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849e9f37f84373779afaa5cd86916e8b33f29beca08bd4e6763958559d31542d,PodSandboxId:5f2a65f14a8f49bac67b099fd12587d500026829604f3efbace2fa5dc9bc174d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711997778109957480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lxn6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a5c765-3e3b-408f-8e4a-c53083b879f3,},Annotations:map[string]string{io.kubernetes.container.hash: 8592d515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbeb52ccbffef630a4b0f60491151c9767b53a67fac75e4820327919e1082ade,PodSandboxId:2dac7b61a67ebfb5ab27367bd26e0c0d12346b18ad2d2b93b9715c4d87139ee0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711997778088197053,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a8401b-2fe6-4724-99e0-63a1b6cb4367,},A
nnotations:map[string]string{io.kubernetes.container.hash: e4c65c26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:598be87df6b76ed97503616e0de22130de187736a15f217c9492fd3bcbe3165f,PodSandboxId:7fbecc3bc99f6ddbe93c98f59dc5b1eaa7824b2c8da6919c8f2465ed3b60037f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711997777989901184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkvlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c447a9-e35f-4cf8-95db-abfbb425cab3,},Annotations:map[string]string{io.k
ubernetes.container.hash: 9ee8a2b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02dd78d1553972c1e494a9d605cec48c8b6c014d95916dc796c582d670024b66,PodSandboxId:c5c68f96d10469734e2ce62551fa212bc0624069060475a6bf5d71b64913d27d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711997773097623562,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9674072226eb1dc8bbe6dde036b55a,},Annotations:map[string]string{io.kubernetes.container.hash: 53b97f00,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db106b7e04aca816f3897bd76a858c1184d517e4cb4e5a76c9b39ecfe288833,PodSandboxId:9294efd42dca6896c1c403d936dea125bb679a7b5eaf55740f4c0ec44f36c524,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711997773088154555,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfdaf127945ce28382d8f90ac75c026,},Annotations:map[string]string{io.kubernetes.container.hash: f92c5150,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10cd91b3bcb1d75ffab7fd0f2f49522fbc9f9df61971c55fb1e6debfe05b20ae,PodSandboxId:ef69bd499d25ddd6ab0ef6465a620bfaa940a872b7f0d5a5c6d4a2f1cfca4445,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711997773102314345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a933531bbfe72a832eefa37f2dd17c,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d3e943c35850d286cf09b65639b8d61930d5765722ef41eea300de98f1c435b,PodSandboxId:edfc68dba135ffbdc317d12ba5552a9bbe7dca7a9f7d6e8148d4f6cf1391a2ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711997772968553144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe554da581d909271f7689d472dd2373,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53017c2864ba3ca6f57fbfe5944fd8f393050a0bb68c442eb4f27bf434c640ef,PodSandboxId:62f47c59bbfa39e1bc767709d6843ee547ff6fcd0bca672229b763606063b3df,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711997470537506807,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-pdvlk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db1681a5-1807-454a-9b1f-90edc80f2243,},Annotations:map[string]string{io.kubernetes.container.hash: fb426c0a,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f363d25764f108d74d1c4cbdede73e53197698e8cfc9cef20d968a540108693c,PodSandboxId:7bee69ef0da5ad552234d2defa9c6e8528220a57a16b3ff76f744f5f7614f96a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711997426138483793,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a8401b-2fe6-4724-99e0-63a1b6cb4367,},Annotations:map[string]string{io.kubernetes.container.hash: e4c65c26,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a98e75b61219ad53b41f90e4b2b7d39ae5d12800f8785637118d47269176df5,PodSandboxId:75744372fad26de7b6e1b6c839fc23b2ff308507c86fdca8567273156dbda995,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711997425615045842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lxn6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a5c765-3e3b-408f-8e4a-c53083b879f3,},Annotations:map[string]string{io.kubernetes.container.hash: 8592d515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceef8d6cd3cb9dd8e9f3d0597ce26adfd43b02b4adba15febb6c5a429b172af6,PodSandboxId:77c08d4d25f512e9dfdf6ef3f9d48537f063a828df94fe22c0ae7d0dddb8cae0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711997423893313774,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9rlkp,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 1dfd3904-101a-4734-abf3-8cb24d0a5e04,},Annotations:map[string]string{io.kubernetes.container.hash: 5f44814b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6ce7b69665bbc01d73b598a2d86525641c7d4fbe714ef3997a3688d286471c4,PodSandboxId:128a8579b95a9768f81397b4d324b60498232e0267943a31e4c4c96dbefd2fd8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711997423695561198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkvlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c447a9-e35f-4cf8-95
db-abfbb425cab3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ee8a2b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c0ce953a27c267af6dc36c244ee162929b89360da2de35e1bcc350e83cd008c,PodSandboxId:90fafaa15e6fa316c3a0a0cee56c57f10126e6ee997df72f659139377156b3c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711997404250433158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9674072226eb1dc8bbe6dde036b55a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 53b97f00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb60348bd91879fc1995b558036ae53948482d80d31ba95e51d89b06b08a34ef,PodSandboxId:9c287c311be2a1629eaaa8319f309fec795e952b03175efcd304daa90298f755,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711997404275109580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe554da581d909271f7689d472dd2373,},Annotations:map
[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a358f83537522b0aa3022d82ed82c21a975b8c4647196a4c3761ee917e86e184,PodSandboxId:bf98c87f2f39aa7c3585e2ac2500f198edf1b142a9efb4ed874f5a29bfdcf084,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711997404188637125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfdaf127945ce28382d8f90ac75c026,},Annotations:map[string]string{
io.kubernetes.container.hash: f92c5150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4004daf6fb9e1f819bba0832635a01a785b9e1cbaa7ceefb622a2956cfe7dac8,PodSandboxId:559da33691144c5132081fb906e2cdeb7f31734801c64dd7ed7d29b9dd0145c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711997404136502948,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a933531bbfe72a832eefa37f2dd17c,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ea0ea6d-2d72-453f-8ff7-aa83df702802 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:00:03 multinode-853477 crio[2869]: time="2024-04-01 19:00:03.023973715Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9518e677-aafd-4589-a12b-5a176aa77216 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:00:03 multinode-853477 crio[2869]: time="2024-04-01 19:00:03.024074243Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9518e677-aafd-4589-a12b-5a176aa77216 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:00:03 multinode-853477 crio[2869]: time="2024-04-01 19:00:03.025339955Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=43d3696f-0568-4af0-ab42-34748338e1f8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:00:03 multinode-853477 crio[2869]: time="2024-04-01 19:00:03.026107197Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711998003025928317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43d3696f-0568-4af0-ab42-34748338e1f8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:00:03 multinode-853477 crio[2869]: time="2024-04-01 19:00:03.026600203Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c7d57ac-eaaf-46dc-9dcd-b11666b689c6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:00:03 multinode-853477 crio[2869]: time="2024-04-01 19:00:03.026675642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c7d57ac-eaaf-46dc-9dcd-b11666b689c6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:00:03 multinode-853477 crio[2869]: time="2024-04-01 19:00:03.027309057Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f3c596e43d9e8d3b31cf53143f5cc478bffad2a4248cca14dc5cd4776f909e,PodSandboxId:9e5ac50e79a76696340adc94dc3ddf7189f2222f44961e793bfb7b7c2b6cad78,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711997811674671722,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-pdvlk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db1681a5-1807-454a-9b1f-90edc80f2243,},Annotations:map[string]string{io.kubernetes.container.hash: fb426c0a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ecfdc0dbda808c8c4ead8c7452d6524da5a13144aba9f938c64d8fa5c5ed1f0,PodSandboxId:2f4f34ebfbf9f62db6d01afc251a6ba79230c0adfa2b5f8c84ab2b82d1a39e6d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711997778129800477,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9rlkp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfd3904-101a-4734-abf3-8cb24d0a5e04,},Annotations:map[string]string{io.kubernetes.container.hash: 5f44814b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849e9f37f84373779afaa5cd86916e8b33f29beca08bd4e6763958559d31542d,PodSandboxId:5f2a65f14a8f49bac67b099fd12587d500026829604f3efbace2fa5dc9bc174d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711997778109957480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lxn6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a5c765-3e3b-408f-8e4a-c53083b879f3,},Annotations:map[string]string{io.kubernetes.container.hash: 8592d515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbeb52ccbffef630a4b0f60491151c9767b53a67fac75e4820327919e1082ade,PodSandboxId:2dac7b61a67ebfb5ab27367bd26e0c0d12346b18ad2d2b93b9715c4d87139ee0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711997778088197053,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a8401b-2fe6-4724-99e0-63a1b6cb4367,},A
nnotations:map[string]string{io.kubernetes.container.hash: e4c65c26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:598be87df6b76ed97503616e0de22130de187736a15f217c9492fd3bcbe3165f,PodSandboxId:7fbecc3bc99f6ddbe93c98f59dc5b1eaa7824b2c8da6919c8f2465ed3b60037f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711997777989901184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkvlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c447a9-e35f-4cf8-95db-abfbb425cab3,},Annotations:map[string]string{io.k
ubernetes.container.hash: 9ee8a2b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02dd78d1553972c1e494a9d605cec48c8b6c014d95916dc796c582d670024b66,PodSandboxId:c5c68f96d10469734e2ce62551fa212bc0624069060475a6bf5d71b64913d27d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711997773097623562,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9674072226eb1dc8bbe6dde036b55a,},Annotations:map[string]string{io.kubernetes.container.hash: 53b97f00,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db106b7e04aca816f3897bd76a858c1184d517e4cb4e5a76c9b39ecfe288833,PodSandboxId:9294efd42dca6896c1c403d936dea125bb679a7b5eaf55740f4c0ec44f36c524,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711997773088154555,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfdaf127945ce28382d8f90ac75c026,},Annotations:map[string]string{io.kubernetes.container.hash: f92c5150,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10cd91b3bcb1d75ffab7fd0f2f49522fbc9f9df61971c55fb1e6debfe05b20ae,PodSandboxId:ef69bd499d25ddd6ab0ef6465a620bfaa940a872b7f0d5a5c6d4a2f1cfca4445,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711997773102314345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a933531bbfe72a832eefa37f2dd17c,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d3e943c35850d286cf09b65639b8d61930d5765722ef41eea300de98f1c435b,PodSandboxId:edfc68dba135ffbdc317d12ba5552a9bbe7dca7a9f7d6e8148d4f6cf1391a2ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711997772968553144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe554da581d909271f7689d472dd2373,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53017c2864ba3ca6f57fbfe5944fd8f393050a0bb68c442eb4f27bf434c640ef,PodSandboxId:62f47c59bbfa39e1bc767709d6843ee547ff6fcd0bca672229b763606063b3df,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711997470537506807,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-pdvlk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db1681a5-1807-454a-9b1f-90edc80f2243,},Annotations:map[string]string{io.kubernetes.container.hash: fb426c0a,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f363d25764f108d74d1c4cbdede73e53197698e8cfc9cef20d968a540108693c,PodSandboxId:7bee69ef0da5ad552234d2defa9c6e8528220a57a16b3ff76f744f5f7614f96a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711997426138483793,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a8401b-2fe6-4724-99e0-63a1b6cb4367,},Annotations:map[string]string{io.kubernetes.container.hash: e4c65c26,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a98e75b61219ad53b41f90e4b2b7d39ae5d12800f8785637118d47269176df5,PodSandboxId:75744372fad26de7b6e1b6c839fc23b2ff308507c86fdca8567273156dbda995,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711997425615045842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lxn6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a5c765-3e3b-408f-8e4a-c53083b879f3,},Annotations:map[string]string{io.kubernetes.container.hash: 8592d515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceef8d6cd3cb9dd8e9f3d0597ce26adfd43b02b4adba15febb6c5a429b172af6,PodSandboxId:77c08d4d25f512e9dfdf6ef3f9d48537f063a828df94fe22c0ae7d0dddb8cae0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711997423893313774,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9rlkp,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 1dfd3904-101a-4734-abf3-8cb24d0a5e04,},Annotations:map[string]string{io.kubernetes.container.hash: 5f44814b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6ce7b69665bbc01d73b598a2d86525641c7d4fbe714ef3997a3688d286471c4,PodSandboxId:128a8579b95a9768f81397b4d324b60498232e0267943a31e4c4c96dbefd2fd8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711997423695561198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkvlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c447a9-e35f-4cf8-95
db-abfbb425cab3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ee8a2b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c0ce953a27c267af6dc36c244ee162929b89360da2de35e1bcc350e83cd008c,PodSandboxId:90fafaa15e6fa316c3a0a0cee56c57f10126e6ee997df72f659139377156b3c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711997404250433158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9674072226eb1dc8bbe6dde036b55a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 53b97f00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb60348bd91879fc1995b558036ae53948482d80d31ba95e51d89b06b08a34ef,PodSandboxId:9c287c311be2a1629eaaa8319f309fec795e952b03175efcd304daa90298f755,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711997404275109580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe554da581d909271f7689d472dd2373,},Annotations:map
[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a358f83537522b0aa3022d82ed82c21a975b8c4647196a4c3761ee917e86e184,PodSandboxId:bf98c87f2f39aa7c3585e2ac2500f198edf1b142a9efb4ed874f5a29bfdcf084,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711997404188637125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfdaf127945ce28382d8f90ac75c026,},Annotations:map[string]string{
io.kubernetes.container.hash: f92c5150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4004daf6fb9e1f819bba0832635a01a785b9e1cbaa7ceefb622a2956cfe7dac8,PodSandboxId:559da33691144c5132081fb906e2cdeb7f31734801c64dd7ed7d29b9dd0145c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711997404136502948,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a933531bbfe72a832eefa37f2dd17c,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c7d57ac-eaaf-46dc-9dcd-b11666b689c6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:00:03 multinode-853477 crio[2869]: time="2024-04-01 19:00:03.070816220Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=b33c8b02-2303-4a01-ac3b-2b044611dc21 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 01 19:00:03 multinode-853477 crio[2869]: time="2024-04-01 19:00:03.071200337Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9e5ac50e79a76696340adc94dc3ddf7189f2222f44961e793bfb7b7c2b6cad78,Metadata:&PodSandboxMetadata{Name:busybox-7fdf7869d9-pdvlk,Uid:db1681a5-1807-454a-9b1f-90edc80f2243,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711997811505799980,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7fdf7869d9-pdvlk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db1681a5-1807-454a-9b1f-90edc80f2243,pod-template-hash: 7fdf7869d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-01T18:56:17.291012048Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5f2a65f14a8f49bac67b099fd12587d500026829604f3efbace2fa5dc9bc174d,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-lxn6t,Uid:48a5c765-3e3b-408f-8e4a-c53083b879f3,Namespace:kube-system,Attempt:
1,},State:SANDBOX_READY,CreatedAt:1711997777752056943,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-lxn6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a5c765-3e3b-408f-8e4a-c53083b879f3,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-01T18:56:17.291001946Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7fbecc3bc99f6ddbe93c98f59dc5b1eaa7824b2c8da6919c8f2465ed3b60037f,Metadata:&PodSandboxMetadata{Name:kube-proxy-jkvlp,Uid:c3c447a9-e35f-4cf8-95db-abfbb425cab3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711997777674671795,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jkvlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c447a9-e35f-4cf8-95db-abfbb425cab3,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]stri
ng{kubernetes.io/config.seen: 2024-04-01T18:56:17.291016197Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2f4f34ebfbf9f62db6d01afc251a6ba79230c0adfa2b5f8c84ab2b82d1a39e6d,Metadata:&PodSandboxMetadata{Name:kindnet-9rlkp,Uid:1dfd3904-101a-4734-abf3-8cb24d0a5e04,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711997777642980744,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-9rlkp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfd3904-101a-4734-abf3-8cb24d0a5e04,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-01T18:56:17.291013164Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2dac7b61a67ebfb5ab27367bd26e0c0d12346b18ad2d2b93b9715c4d87139ee0,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a5a8401b-2fe6-4724-99e0-63a1b6cb4367,Namespace:kube-system,Attempt:1,},Sta
te:SANDBOX_READY,CreatedAt:1711997777637192976,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a8401b-2fe6-4724-99e0-63a1b6cb4367,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/t
mp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-01T18:56:17.291011104Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ef69bd499d25ddd6ab0ef6465a620bfaa940a872b7f0d5a5c6d4a2f1cfca4445,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-853477,Uid:28a933531bbfe72a832eefa37f2dd17c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711997772792578325,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a933531bbfe72a832eefa37f2dd17c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 28a933531bbfe72a832eefa37f2dd17c,kubernetes.io/config.seen: 2024-04-01T18:56:12.289325067Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9294efd42dca6896c1c403d936dea125bb679a7b5eaf55740f4c0ec44f36c524,Metadata:&PodSandboxMetadata{Name:kube-apiserver-mul
tinode-853477,Uid:ecfdaf127945ce28382d8f90ac75c026,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711997772788252438,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfdaf127945ce28382d8f90ac75c026,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.161:8443,kubernetes.io/config.hash: ecfdaf127945ce28382d8f90ac75c026,kubernetes.io/config.seen: 2024-04-01T18:56:12.289323296Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c5c68f96d10469734e2ce62551fa212bc0624069060475a6bf5d71b64913d27d,Metadata:&PodSandboxMetadata{Name:etcd-multinode-853477,Uid:2b9674072226eb1dc8bbe6dde036b55a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711997772787443618,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kuberne
tes.pod.name: etcd-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9674072226eb1dc8bbe6dde036b55a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.161:2379,kubernetes.io/config.hash: 2b9674072226eb1dc8bbe6dde036b55a,kubernetes.io/config.seen: 2024-04-01T18:56:12.289319436Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:edfc68dba135ffbdc317d12ba5552a9bbe7dca7a9f7d6e8148d4f6cf1391a2ee,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-853477,Uid:fe554da581d909271f7689d472dd2373,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1711997772782425330,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe554da581d909271f7689d472dd2373,tier: control-plane,},Annotations:map[string]string{kuber
netes.io/config.hash: fe554da581d909271f7689d472dd2373,kubernetes.io/config.seen: 2024-04-01T18:56:12.289324307Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:62f47c59bbfa39e1bc767709d6843ee547ff6fcd0bca672229b763606063b3df,Metadata:&PodSandboxMetadata{Name:busybox-7fdf7869d9-pdvlk,Uid:db1681a5-1807-454a-9b1f-90edc80f2243,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1711997469383091464,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7fdf7869d9-pdvlk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db1681a5-1807-454a-9b1f-90edc80f2243,pod-template-hash: 7fdf7869d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-01T18:51:09.074703181Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7bee69ef0da5ad552234d2defa9c6e8528220a57a16b3ff76f744f5f7614f96a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:a5a8401b-2fe6-4724-99e0-63a1b6cb4367,Namespace:kube-system,Attemp
t:0,},State:SANDBOX_NOTREADY,CreatedAt:1711997426049900731,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a8401b-2fe6-4724-99e0-63a1b6cb4367,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\
"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-01T18:50:25.743191938Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:75744372fad26de7b6e1b6c839fc23b2ff308507c86fdca8567273156dbda995,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-lxn6t,Uid:48a5c765-3e3b-408f-8e4a-c53083b879f3,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1711997425474151043,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-lxn6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a5c765-3e3b-408f-8e4a-c53083b879f3,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-01T18:50:25.167900907Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:128a8579b95a9768f81397b4d324b60498232e0267943a31e4c4c96dbefd2fd8,Metadata:&PodSandboxMetadata{Name:kube-proxy-jkvlp,Uid:c3c447a9-e35f-4cf8-95db-abfbb425cab3,Namespace:
kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1711997423314206120,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jkvlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c447a9-e35f-4cf8-95db-abfbb425cab3,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-01T18:50:22.987436760Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:77c08d4d25f512e9dfdf6ef3f9d48537f063a828df94fe22c0ae7d0dddb8cae0,Metadata:&PodSandboxMetadata{Name:kindnet-9rlkp,Uid:1dfd3904-101a-4734-abf3-8cb24d0a5e04,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1711997423309528832,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-9rlkp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfd3904-101a-4734-abf3-8cb24d0a5e04,k8s-app: kindnet
,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-01T18:50:22.982325066Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:559da33691144c5132081fb906e2cdeb7f31734801c64dd7ed7d29b9dd0145c5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-853477,Uid:28a933531bbfe72a832eefa37f2dd17c,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1711997403990068271,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a933531bbfe72a832eefa37f2dd17c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 28a933531bbfe72a832eefa37f2dd17c,kubernetes.io/config.seen: 2024-04-01T18:50:03.521162070Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9c287c311be2a1629eaaa8319f309fec795e952b03175efcd304daa90298f755,Metadata:&PodSandboxMetadata{N
ame:kube-controller-manager-multinode-853477,Uid:fe554da581d909271f7689d472dd2373,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1711997403981966427,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe554da581d909271f7689d472dd2373,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fe554da581d909271f7689d472dd2373,kubernetes.io/config.seen: 2024-04-01T18:50:03.521161259Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bf98c87f2f39aa7c3585e2ac2500f198edf1b142a9efb4ed874f5a29bfdcf084,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-853477,Uid:ecfdaf127945ce28382d8f90ac75c026,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1711997403978364122,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.na
me: kube-apiserver-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfdaf127945ce28382d8f90ac75c026,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.161:8443,kubernetes.io/config.hash: ecfdaf127945ce28382d8f90ac75c026,kubernetes.io/config.seen: 2024-04-01T18:50:03.521160122Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:90fafaa15e6fa316c3a0a0cee56c57f10126e6ee997df72f659139377156b3c8,Metadata:&PodSandboxMetadata{Name:etcd-multinode-853477,Uid:2b9674072226eb1dc8bbe6dde036b55a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1711997403976123081,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9674072226eb1dc8bbe6dde036b55a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: ht
tps://192.168.39.161:2379,kubernetes.io/config.hash: 2b9674072226eb1dc8bbe6dde036b55a,kubernetes.io/config.seen: 2024-04-01T18:50:03.521156532Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=b33c8b02-2303-4a01-ac3b-2b044611dc21 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 01 19:00:03 multinode-853477 crio[2869]: time="2024-04-01 19:00:03.072256416Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=38d9a658-3e09-4456-9d9e-ef2699720f79 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:00:03 multinode-853477 crio[2869]: time="2024-04-01 19:00:03.072316381Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=38d9a658-3e09-4456-9d9e-ef2699720f79 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:00:03 multinode-853477 crio[2869]: time="2024-04-01 19:00:03.072633694Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f3c596e43d9e8d3b31cf53143f5cc478bffad2a4248cca14dc5cd4776f909e,PodSandboxId:9e5ac50e79a76696340adc94dc3ddf7189f2222f44961e793bfb7b7c2b6cad78,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711997811674671722,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-pdvlk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db1681a5-1807-454a-9b1f-90edc80f2243,},Annotations:map[string]string{io.kubernetes.container.hash: fb426c0a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ecfdc0dbda808c8c4ead8c7452d6524da5a13144aba9f938c64d8fa5c5ed1f0,PodSandboxId:2f4f34ebfbf9f62db6d01afc251a6ba79230c0adfa2b5f8c84ab2b82d1a39e6d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711997778129800477,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9rlkp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfd3904-101a-4734-abf3-8cb24d0a5e04,},Annotations:map[string]string{io.kubernetes.container.hash: 5f44814b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849e9f37f84373779afaa5cd86916e8b33f29beca08bd4e6763958559d31542d,PodSandboxId:5f2a65f14a8f49bac67b099fd12587d500026829604f3efbace2fa5dc9bc174d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711997778109957480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lxn6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a5c765-3e3b-408f-8e4a-c53083b879f3,},Annotations:map[string]string{io.kubernetes.container.hash: 8592d515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbeb52ccbffef630a4b0f60491151c9767b53a67fac75e4820327919e1082ade,PodSandboxId:2dac7b61a67ebfb5ab27367bd26e0c0d12346b18ad2d2b93b9715c4d87139ee0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711997778088197053,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a8401b-2fe6-4724-99e0-63a1b6cb4367,},A
nnotations:map[string]string{io.kubernetes.container.hash: e4c65c26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:598be87df6b76ed97503616e0de22130de187736a15f217c9492fd3bcbe3165f,PodSandboxId:7fbecc3bc99f6ddbe93c98f59dc5b1eaa7824b2c8da6919c8f2465ed3b60037f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711997777989901184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkvlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c447a9-e35f-4cf8-95db-abfbb425cab3,},Annotations:map[string]string{io.k
ubernetes.container.hash: 9ee8a2b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02dd78d1553972c1e494a9d605cec48c8b6c014d95916dc796c582d670024b66,PodSandboxId:c5c68f96d10469734e2ce62551fa212bc0624069060475a6bf5d71b64913d27d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711997773097623562,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9674072226eb1dc8bbe6dde036b55a,},Annotations:map[string]string{io.kubernetes.container.hash: 53b97f00,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db106b7e04aca816f3897bd76a858c1184d517e4cb4e5a76c9b39ecfe288833,PodSandboxId:9294efd42dca6896c1c403d936dea125bb679a7b5eaf55740f4c0ec44f36c524,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711997773088154555,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfdaf127945ce28382d8f90ac75c026,},Annotations:map[string]string{io.kubernetes.container.hash: f92c5150,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10cd91b3bcb1d75ffab7fd0f2f49522fbc9f9df61971c55fb1e6debfe05b20ae,PodSandboxId:ef69bd499d25ddd6ab0ef6465a620bfaa940a872b7f0d5a5c6d4a2f1cfca4445,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711997773102314345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a933531bbfe72a832eefa37f2dd17c,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d3e943c35850d286cf09b65639b8d61930d5765722ef41eea300de98f1c435b,PodSandboxId:edfc68dba135ffbdc317d12ba5552a9bbe7dca7a9f7d6e8148d4f6cf1391a2ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711997772968553144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe554da581d909271f7689d472dd2373,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53017c2864ba3ca6f57fbfe5944fd8f393050a0bb68c442eb4f27bf434c640ef,PodSandboxId:62f47c59bbfa39e1bc767709d6843ee547ff6fcd0bca672229b763606063b3df,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711997470537506807,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-pdvlk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db1681a5-1807-454a-9b1f-90edc80f2243,},Annotations:map[string]string{io.kubernetes.container.hash: fb426c0a,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f363d25764f108d74d1c4cbdede73e53197698e8cfc9cef20d968a540108693c,PodSandboxId:7bee69ef0da5ad552234d2defa9c6e8528220a57a16b3ff76f744f5f7614f96a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711997426138483793,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a8401b-2fe6-4724-99e0-63a1b6cb4367,},Annotations:map[string]string{io.kubernetes.container.hash: e4c65c26,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a98e75b61219ad53b41f90e4b2b7d39ae5d12800f8785637118d47269176df5,PodSandboxId:75744372fad26de7b6e1b6c839fc23b2ff308507c86fdca8567273156dbda995,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711997425615045842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lxn6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a5c765-3e3b-408f-8e4a-c53083b879f3,},Annotations:map[string]string{io.kubernetes.container.hash: 8592d515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceef8d6cd3cb9dd8e9f3d0597ce26adfd43b02b4adba15febb6c5a429b172af6,PodSandboxId:77c08d4d25f512e9dfdf6ef3f9d48537f063a828df94fe22c0ae7d0dddb8cae0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711997423893313774,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9rlkp,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 1dfd3904-101a-4734-abf3-8cb24d0a5e04,},Annotations:map[string]string{io.kubernetes.container.hash: 5f44814b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6ce7b69665bbc01d73b598a2d86525641c7d4fbe714ef3997a3688d286471c4,PodSandboxId:128a8579b95a9768f81397b4d324b60498232e0267943a31e4c4c96dbefd2fd8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711997423695561198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkvlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c447a9-e35f-4cf8-95
db-abfbb425cab3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ee8a2b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c0ce953a27c267af6dc36c244ee162929b89360da2de35e1bcc350e83cd008c,PodSandboxId:90fafaa15e6fa316c3a0a0cee56c57f10126e6ee997df72f659139377156b3c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711997404250433158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9674072226eb1dc8bbe6dde036b55a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 53b97f00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb60348bd91879fc1995b558036ae53948482d80d31ba95e51d89b06b08a34ef,PodSandboxId:9c287c311be2a1629eaaa8319f309fec795e952b03175efcd304daa90298f755,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711997404275109580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe554da581d909271f7689d472dd2373,},Annotations:map
[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a358f83537522b0aa3022d82ed82c21a975b8c4647196a4c3761ee917e86e184,PodSandboxId:bf98c87f2f39aa7c3585e2ac2500f198edf1b142a9efb4ed874f5a29bfdcf084,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711997404188637125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfdaf127945ce28382d8f90ac75c026,},Annotations:map[string]string{
io.kubernetes.container.hash: f92c5150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4004daf6fb9e1f819bba0832635a01a785b9e1cbaa7ceefb622a2956cfe7dac8,PodSandboxId:559da33691144c5132081fb906e2cdeb7f31734801c64dd7ed7d29b9dd0145c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711997404136502948,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a933531bbfe72a832eefa37f2dd17c,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=38d9a658-3e09-4456-9d9e-ef2699720f79 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:00:03 multinode-853477 crio[2869]: time="2024-04-01 19:00:03.079105553Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f6474add-f001-4f9a-abe3-d36ed707f389 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:00:03 multinode-853477 crio[2869]: time="2024-04-01 19:00:03.079166531Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f6474add-f001-4f9a-abe3-d36ed707f389 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:00:03 multinode-853477 crio[2869]: time="2024-04-01 19:00:03.080368920Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=42fce046-38a4-491b-9128-caf861f0b330 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:00:03 multinode-853477 crio[2869]: time="2024-04-01 19:00:03.080825781Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711998003080805858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42fce046-38a4-491b-9128-caf861f0b330 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:00:03 multinode-853477 crio[2869]: time="2024-04-01 19:00:03.081685049Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83f335d4-997c-4e70-a88b-6145c6ca122a name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:00:03 multinode-853477 crio[2869]: time="2024-04-01 19:00:03.081831770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83f335d4-997c-4e70-a88b-6145c6ca122a name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:00:03 multinode-853477 crio[2869]: time="2024-04-01 19:00:03.082177244Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:70f3c596e43d9e8d3b31cf53143f5cc478bffad2a4248cca14dc5cd4776f909e,PodSandboxId:9e5ac50e79a76696340adc94dc3ddf7189f2222f44961e793bfb7b7c2b6cad78,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1711997811674671722,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-pdvlk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db1681a5-1807-454a-9b1f-90edc80f2243,},Annotations:map[string]string{io.kubernetes.container.hash: fb426c0a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ecfdc0dbda808c8c4ead8c7452d6524da5a13144aba9f938c64d8fa5c5ed1f0,PodSandboxId:2f4f34ebfbf9f62db6d01afc251a6ba79230c0adfa2b5f8c84ab2b82d1a39e6d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1711997778129800477,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9rlkp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dfd3904-101a-4734-abf3-8cb24d0a5e04,},Annotations:map[string]string{io.kubernetes.container.hash: 5f44814b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:849e9f37f84373779afaa5cd86916e8b33f29beca08bd4e6763958559d31542d,PodSandboxId:5f2a65f14a8f49bac67b099fd12587d500026829604f3efbace2fa5dc9bc174d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711997778109957480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lxn6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a5c765-3e3b-408f-8e4a-c53083b879f3,},Annotations:map[string]string{io.kubernetes.container.hash: 8592d515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbeb52ccbffef630a4b0f60491151c9767b53a67fac75e4820327919e1082ade,PodSandboxId:2dac7b61a67ebfb5ab27367bd26e0c0d12346b18ad2d2b93b9715c4d87139ee0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711997778088197053,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a8401b-2fe6-4724-99e0-63a1b6cb4367,},A
nnotations:map[string]string{io.kubernetes.container.hash: e4c65c26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:598be87df6b76ed97503616e0de22130de187736a15f217c9492fd3bcbe3165f,PodSandboxId:7fbecc3bc99f6ddbe93c98f59dc5b1eaa7824b2c8da6919c8f2465ed3b60037f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711997777989901184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkvlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c447a9-e35f-4cf8-95db-abfbb425cab3,},Annotations:map[string]string{io.k
ubernetes.container.hash: 9ee8a2b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02dd78d1553972c1e494a9d605cec48c8b6c014d95916dc796c582d670024b66,PodSandboxId:c5c68f96d10469734e2ce62551fa212bc0624069060475a6bf5d71b64913d27d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711997773097623562,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9674072226eb1dc8bbe6dde036b55a,},Annotations:map[string]string{io.kubernetes.container.hash: 53b97f00,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9db106b7e04aca816f3897bd76a858c1184d517e4cb4e5a76c9b39ecfe288833,PodSandboxId:9294efd42dca6896c1c403d936dea125bb679a7b5eaf55740f4c0ec44f36c524,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711997773088154555,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfdaf127945ce28382d8f90ac75c026,},Annotations:map[string]string{io.kubernetes.container.hash: f92c5150,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10cd91b3bcb1d75ffab7fd0f2f49522fbc9f9df61971c55fb1e6debfe05b20ae,PodSandboxId:ef69bd499d25ddd6ab0ef6465a620bfaa940a872b7f0d5a5c6d4a2f1cfca4445,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711997773102314345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a933531bbfe72a832eefa37f2dd17c,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d3e943c35850d286cf09b65639b8d61930d5765722ef41eea300de98f1c435b,PodSandboxId:edfc68dba135ffbdc317d12ba5552a9bbe7dca7a9f7d6e8148d4f6cf1391a2ee,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711997772968553144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe554da581d909271f7689d472dd2373,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53017c2864ba3ca6f57fbfe5944fd8f393050a0bb68c442eb4f27bf434c640ef,PodSandboxId:62f47c59bbfa39e1bc767709d6843ee547ff6fcd0bca672229b763606063b3df,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1711997470537506807,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-pdvlk,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db1681a5-1807-454a-9b1f-90edc80f2243,},Annotations:map[string]string{io.kubernetes.container.hash: fb426c0a,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f363d25764f108d74d1c4cbdede73e53197698e8cfc9cef20d968a540108693c,PodSandboxId:7bee69ef0da5ad552234d2defa9c6e8528220a57a16b3ff76f744f5f7614f96a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711997426138483793,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5a8401b-2fe6-4724-99e0-63a1b6cb4367,},Annotations:map[string]string{io.kubernetes.container.hash: e4c65c26,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a98e75b61219ad53b41f90e4b2b7d39ae5d12800f8785637118d47269176df5,PodSandboxId:75744372fad26de7b6e1b6c839fc23b2ff308507c86fdca8567273156dbda995,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711997425615045842,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lxn6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48a5c765-3e3b-408f-8e4a-c53083b879f3,},Annotations:map[string]string{io.kubernetes.container.hash: 8592d515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceef8d6cd3cb9dd8e9f3d0597ce26adfd43b02b4adba15febb6c5a429b172af6,PodSandboxId:77c08d4d25f512e9dfdf6ef3f9d48537f063a828df94fe22c0ae7d0dddb8cae0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1711997423893313774,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9rlkp,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 1dfd3904-101a-4734-abf3-8cb24d0a5e04,},Annotations:map[string]string{io.kubernetes.container.hash: 5f44814b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6ce7b69665bbc01d73b598a2d86525641c7d4fbe714ef3997a3688d286471c4,PodSandboxId:128a8579b95a9768f81397b4d324b60498232e0267943a31e4c4c96dbefd2fd8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711997423695561198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jkvlp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3c447a9-e35f-4cf8-95
db-abfbb425cab3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ee8a2b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c0ce953a27c267af6dc36c244ee162929b89360da2de35e1bcc350e83cd008c,PodSandboxId:90fafaa15e6fa316c3a0a0cee56c57f10126e6ee997df72f659139377156b3c8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711997404250433158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b9674072226eb1dc8bbe6dde036b55a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 53b97f00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb60348bd91879fc1995b558036ae53948482d80d31ba95e51d89b06b08a34ef,PodSandboxId:9c287c311be2a1629eaaa8319f309fec795e952b03175efcd304daa90298f755,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711997404275109580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe554da581d909271f7689d472dd2373,},Annotations:map
[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a358f83537522b0aa3022d82ed82c21a975b8c4647196a4c3761ee917e86e184,PodSandboxId:bf98c87f2f39aa7c3585e2ac2500f198edf1b142a9efb4ed874f5a29bfdcf084,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711997404188637125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfdaf127945ce28382d8f90ac75c026,},Annotations:map[string]string{
io.kubernetes.container.hash: f92c5150,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4004daf6fb9e1f819bba0832635a01a785b9e1cbaa7ceefb622a2956cfe7dac8,PodSandboxId:559da33691144c5132081fb906e2cdeb7f31734801c64dd7ed7d29b9dd0145c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711997404136502948,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-853477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a933531bbfe72a832eefa37f2dd17c,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83f335d4-997c-4e70-a88b-6145c6ca122a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	70f3c596e43d9       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   9e5ac50e79a76       busybox-7fdf7869d9-pdvlk
	2ecfdc0dbda80       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   2f4f34ebfbf9f       kindnet-9rlkp
	849e9f37f8437       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   5f2a65f14a8f4       coredns-76f75df574-lxn6t
	bbeb52ccbffef       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   2dac7b61a67eb       storage-provisioner
	598be87df6b76       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      3 minutes ago       Running             kube-proxy                1                   7fbecc3bc99f6       kube-proxy-jkvlp
	10cd91b3bcb1d       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      3 minutes ago       Running             kube-scheduler            1                   ef69bd499d25d       kube-scheduler-multinode-853477
	02dd78d155397       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   c5c68f96d1046       etcd-multinode-853477
	9db106b7e04ac       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      3 minutes ago       Running             kube-apiserver            1                   9294efd42dca6       kube-apiserver-multinode-853477
	2d3e943c35850       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      3 minutes ago       Running             kube-controller-manager   1                   edfc68dba135f       kube-controller-manager-multinode-853477
	53017c2864ba3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   62f47c59bbfa3       busybox-7fdf7869d9-pdvlk
	f363d25764f10       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   7bee69ef0da5a       storage-provisioner
	5a98e75b61219       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   75744372fad26       coredns-76f75df574-lxn6t
	ceef8d6cd3cb9       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      9 minutes ago       Exited              kindnet-cni               0                   77c08d4d25f51       kindnet-9rlkp
	f6ce7b69665bb       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      9 minutes ago       Exited              kube-proxy                0                   128a8579b95a9       kube-proxy-jkvlp
	eb60348bd9187       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      9 minutes ago       Exited              kube-controller-manager   0                   9c287c311be2a       kube-controller-manager-multinode-853477
	2c0ce953a27c2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      9 minutes ago       Exited              etcd                      0                   90fafaa15e6fa       etcd-multinode-853477
	a358f83537522       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      9 minutes ago       Exited              kube-apiserver            0                   bf98c87f2f39a       kube-apiserver-multinode-853477
	4004daf6fb9e1       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      9 minutes ago       Exited              kube-scheduler            0                   559da33691144       kube-scheduler-multinode-853477
	
	
	==> coredns [5a98e75b61219ad53b41f90e4b2b7d39ae5d12800f8785637118d47269176df5] <==
	[INFO] 10.244.0.3:59681 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001883091s
	[INFO] 10.244.0.3:44099 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011196s
	[INFO] 10.244.0.3:48105 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150131s
	[INFO] 10.244.0.3:39069 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001203414s
	[INFO] 10.244.0.3:43290 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081841s
	[INFO] 10.244.0.3:53564 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132362s
	[INFO] 10.244.0.3:36682 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076503s
	[INFO] 10.244.1.2:35059 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134746s
	[INFO] 10.244.1.2:52173 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000129153s
	[INFO] 10.244.1.2:38193 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108149s
	[INFO] 10.244.1.2:35922 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079203s
	[INFO] 10.244.0.3:56492 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134167s
	[INFO] 10.244.0.3:38054 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006376s
	[INFO] 10.244.0.3:33632 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006817s
	[INFO] 10.244.0.3:39046 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000146214s
	[INFO] 10.244.1.2:56858 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000286319s
	[INFO] 10.244.1.2:57788 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000157646s
	[INFO] 10.244.1.2:36645 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118332s
	[INFO] 10.244.1.2:38920 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128982s
	[INFO] 10.244.0.3:46539 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170638s
	[INFO] 10.244.0.3:52506 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000104326s
	[INFO] 10.244.0.3:41906 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000043411s
	[INFO] 10.244.0.3:60508 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000067863s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [849e9f37f84373779afaa5cd86916e8b33f29beca08bd4e6763958559d31542d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47010 - 15101 "HINFO IN 2694119675735516823.2287192293218032795. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009421557s
	
	
	==> describe nodes <==
	Name:               multinode-853477
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-853477
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=multinode-853477
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_01T18_50_10_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 18:50:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-853477
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 19:00:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 18:56:16 +0000   Mon, 01 Apr 2024 18:50:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 18:56:16 +0000   Mon, 01 Apr 2024 18:50:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 18:56:16 +0000   Mon, 01 Apr 2024 18:50:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 18:56:16 +0000   Mon, 01 Apr 2024 18:50:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    multinode-853477
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 60f04aaa253948babc505cf6ed118280
	  System UUID:                60f04aaa-2539-48ba-bc50-5cf6ed118280
	  Boot ID:                    765a3751-9a73-4256-bcc9-9917d17d9943
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-pdvlk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m54s
	  kube-system                 coredns-76f75df574-lxn6t                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m40s
	  kube-system                 etcd-multinode-853477                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m53s
	  kube-system                 kindnet-9rlkp                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m41s
	  kube-system                 kube-apiserver-multinode-853477             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 kube-controller-manager-multinode-853477    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 kube-proxy-jkvlp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	  kube-system                 kube-scheduler-multinode-853477             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m39s                  kube-proxy       
	  Normal  Starting                 3m44s                  kube-proxy       
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node multinode-853477 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node multinode-853477 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node multinode-853477 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     9m53s                  kubelet          Node multinode-853477 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  9m53s                  kubelet          Node multinode-853477 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m53s                  kubelet          Node multinode-853477 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  9m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m53s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m41s                  node-controller  Node multinode-853477 event: Registered Node multinode-853477 in Controller
	  Normal  NodeReady                9m38s                  kubelet          Node multinode-853477 status is now: NodeReady
	  Normal  Starting                 3m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m51s (x8 over 3m51s)  kubelet          Node multinode-853477 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s (x8 over 3m51s)  kubelet          Node multinode-853477 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s (x7 over 3m51s)  kubelet          Node multinode-853477 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m34s                  node-controller  Node multinode-853477 event: Registered Node multinode-853477 in Controller
	
	
	Name:               multinode-853477-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-853477-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=multinode-853477
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_01T18_56_58_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 18:56:56 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-853477-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 18:57:37 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 01 Apr 2024 18:57:27 +0000   Mon, 01 Apr 2024 18:58:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 01 Apr 2024 18:57:27 +0000   Mon, 01 Apr 2024 18:58:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 01 Apr 2024 18:57:27 +0000   Mon, 01 Apr 2024 18:58:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 01 Apr 2024 18:57:27 +0000   Mon, 01 Apr 2024 18:58:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.239
	  Hostname:    multinode-853477-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 922f3b6f48264cea829c2c8bc673d4e2
	  System UUID:                922f3b6f-4826-4cea-829c-2c8bc673d4e2
	  Boot ID:                    666fb760-6b9f-4c6a-93a3-1da4b31c77b2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-zh9pz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  kube-system                 kindnet-6wvv4               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m5s
	  kube-system                 kube-proxy-mthcv            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m1s                 kube-proxy       
	  Normal  Starting                 8m59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  9m6s (x2 over 9m6s)  kubelet          Node multinode-853477-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m6s (x2 over 9m6s)  kubelet          Node multinode-853477-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m6s (x2 over 9m6s)  kubelet          Node multinode-853477-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m57s                kubelet          Node multinode-853477-m02 status is now: NodeReady
	  Normal  Starting                 3m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m7s (x2 over 3m7s)  kubelet          Node multinode-853477-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m7s (x2 over 3m7s)  kubelet          Node multinode-853477-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m7s (x2 over 3m7s)  kubelet          Node multinode-853477-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m59s                kubelet          Node multinode-853477-m02 status is now: NodeReady
	  Normal  NodeNotReady             104s                 node-controller  Node multinode-853477-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.069736] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.209711] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.119953] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.299284] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.950887] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +0.066302] kauditd_printk_skb: 130 callbacks suppressed
	[Apr 1 18:50] systemd-fstab-generator[955]: Ignoring "noauto" option for root device
	[  +0.059213] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.729437] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.079584] kauditd_printk_skb: 69 callbacks suppressed
	[ +13.083501] systemd-fstab-generator[1490]: Ignoring "noauto" option for root device
	[  +0.149801] kauditd_printk_skb: 21 callbacks suppressed
	[Apr 1 18:51] kauditd_printk_skb: 84 callbacks suppressed
	[Apr 1 18:56] systemd-fstab-generator[2787]: Ignoring "noauto" option for root device
	[  +0.166476] systemd-fstab-generator[2799]: Ignoring "noauto" option for root device
	[  +0.198405] systemd-fstab-generator[2813]: Ignoring "noauto" option for root device
	[  +0.153504] systemd-fstab-generator[2825]: Ignoring "noauto" option for root device
	[  +0.303313] systemd-fstab-generator[2853]: Ignoring "noauto" option for root device
	[  +1.595630] systemd-fstab-generator[2955]: Ignoring "noauto" option for root device
	[  +2.117616] systemd-fstab-generator[3083]: Ignoring "noauto" option for root device
	[  +0.856237] kauditd_printk_skb: 144 callbacks suppressed
	[  +5.026442] kauditd_printk_skb: 45 callbacks suppressed
	[ +11.988650] kauditd_printk_skb: 17 callbacks suppressed
	[  +1.368296] systemd-fstab-generator[3895]: Ignoring "noauto" option for root device
	[ +20.302467] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [02dd78d1553972c1e494a9d605cec48c8b6c014d95916dc796c582d670024b66] <==
	{"level":"info","ts":"2024-04-01T18:56:13.856356Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-01T18:56:13.858817Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-01T18:56:13.859567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 switched to configuration voters=(6473055670413760608)"}
	{"level":"info","ts":"2024-04-01T18:56:13.862886Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"641f62d988bc06c1","local-member-id":"59d4e9d626571860","added-peer-id":"59d4e9d626571860","added-peer-peer-urls":["https://192.168.39.161:2380"]}
	{"level":"info","ts":"2024-04-01T18:56:13.863396Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"641f62d988bc06c1","local-member-id":"59d4e9d626571860","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T18:56:13.864466Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-01T18:56:13.866119Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"59d4e9d626571860","initial-advertise-peer-urls":["https://192.168.39.161:2380"],"listen-peer-urls":["https://192.168.39.161:2380"],"advertise-client-urls":["https://192.168.39.161:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.161:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-01T18:56:13.866178Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-01T18:56:13.864496Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.161:2380"}
	{"level":"info","ts":"2024-04-01T18:56:13.866253Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.161:2380"}
	{"level":"info","ts":"2024-04-01T18:56:13.86584Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T18:56:15.226784Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-01T18:56:15.22685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-01T18:56:15.226894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 received MsgPreVoteResp from 59d4e9d626571860 at term 2"}
	{"level":"info","ts":"2024-04-01T18:56:15.226908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 became candidate at term 3"}
	{"level":"info","ts":"2024-04-01T18:56:15.226922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 received MsgVoteResp from 59d4e9d626571860 at term 3"}
	{"level":"info","ts":"2024-04-01T18:56:15.22693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"59d4e9d626571860 became leader at term 3"}
	{"level":"info","ts":"2024-04-01T18:56:15.226941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 59d4e9d626571860 elected leader 59d4e9d626571860 at term 3"}
	{"level":"info","ts":"2024-04-01T18:56:15.232032Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"59d4e9d626571860","local-member-attributes":"{Name:multinode-853477 ClientURLs:[https://192.168.39.161:2379]}","request-path":"/0/members/59d4e9d626571860/attributes","cluster-id":"641f62d988bc06c1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-01T18:56:15.232051Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T18:56:15.232071Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T18:56:15.233153Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-01T18:56:15.2332Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-01T18:56:15.237047Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.161:2379"}
	{"level":"info","ts":"2024-04-01T18:56:15.237133Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [2c0ce953a27c267af6dc36c244ee162929b89360da2de35e1bcc350e83cd008c] <==
	{"level":"info","ts":"2024-04-01T18:50:05.515041Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T18:50:05.515654Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"641f62d988bc06c1","local-member-id":"59d4e9d626571860","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T18:50:05.515879Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T18:50:05.515903Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T18:50:05.516002Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T18:50:05.51607Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-01T18:50:05.516145Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-01T18:50:05.517652Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-01T18:50:05.518861Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.161:2379"}
	{"level":"info","ts":"2024-04-01T18:50:58.175867Z","caller":"traceutil/trace.go:171","msg":"trace[1963012792] linearizableReadLoop","detail":"{readStateIndex:493; appliedIndex:492; }","duration":"174.434957ms","start":"2024-04-01T18:50:58.001395Z","end":"2024-04-01T18:50:58.17583Z","steps":["trace[1963012792] 'read index received'  (duration: 174.163562ms)","trace[1963012792] 'applied index is now lower than readState.Index'  (duration: 270.849µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-01T18:50:58.176038Z","caller":"traceutil/trace.go:171","msg":"trace[1605147475] transaction","detail":"{read_only:false; response_revision:477; number_of_response:1; }","duration":"174.756687ms","start":"2024-04-01T18:50:58.001269Z","end":"2024-04-01T18:50:58.176026Z","steps":["trace[1605147475] 'process raft request'  (duration: 174.323025ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T18:50:58.176224Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.790134ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/multinode-853477-m02.17c23c9e0e6f7542\" ","response":"range_response_count:1 size:741"}
	{"level":"info","ts":"2024-04-01T18:50:58.176296Z","caller":"traceutil/trace.go:171","msg":"trace[645952635] range","detail":"{range_begin:/registry/events/default/multinode-853477-m02.17c23c9e0e6f7542; range_end:; response_count:1; response_revision:477; }","duration":"174.908865ms","start":"2024-04-01T18:50:58.001374Z","end":"2024-04-01T18:50:58.176283Z","steps":["trace[645952635] 'agreement among raft nodes before linearized reading'  (duration: 174.764264ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T18:51:41.780665Z","caller":"traceutil/trace.go:171","msg":"trace[1837425999] transaction","detail":"{read_only:false; response_revision:599; number_of_response:1; }","duration":"227.376851ms","start":"2024-04-01T18:51:41.55326Z","end":"2024-04-01T18:51:41.780636Z","steps":["trace[1837425999] 'process raft request'  (duration: 131.305062ms)","trace[1837425999] 'compare'  (duration: 95.92531ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-01T18:51:41.781048Z","caller":"traceutil/trace.go:171","msg":"trace[368992295] transaction","detail":"{read_only:false; response_revision:600; number_of_response:1; }","duration":"186.72827ms","start":"2024-04-01T18:51:41.594271Z","end":"2024-04-01T18:51:41.780999Z","steps":["trace[368992295] 'process raft request'  (duration: 186.319705ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-01T18:54:36.054373Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-01T18:54:36.055638Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-853477","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.161:2380"],"advertise-client-urls":["https://192.168.39.161:2379"]}
	{"level":"warn","ts":"2024-04-01T18:54:36.055957Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-01T18:54:36.056058Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-01T18:54:36.137303Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.161:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-01T18:54:36.137618Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.161:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-01T18:54:36.138809Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"59d4e9d626571860","current-leader-member-id":"59d4e9d626571860"}
	{"level":"info","ts":"2024-04-01T18:54:36.141367Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.161:2380"}
	{"level":"info","ts":"2024-04-01T18:54:36.141579Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.161:2380"}
	{"level":"info","ts":"2024-04-01T18:54:36.141631Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-853477","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.161:2380"],"advertise-client-urls":["https://192.168.39.161:2379"]}
	
	
	==> kernel <==
	 19:00:03 up 10 min,  0 users,  load average: 0.06, 0.19, 0.12
	Linux multinode-853477 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2ecfdc0dbda808c8c4ead8c7452d6524da5a13144aba9f938c64d8fa5c5ed1f0] <==
	I0401 18:58:59.365778       1 main.go:250] Node multinode-853477-m02 has CIDR [10.244.1.0/24] 
	I0401 18:59:09.370629       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:59:09.370681       1 main.go:227] handling current node
	I0401 18:59:09.370691       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0401 18:59:09.370697       1 main.go:250] Node multinode-853477-m02 has CIDR [10.244.1.0/24] 
	I0401 18:59:19.375550       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:59:19.375597       1 main.go:227] handling current node
	I0401 18:59:19.375607       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0401 18:59:19.375613       1 main.go:250] Node multinode-853477-m02 has CIDR [10.244.1.0/24] 
	I0401 18:59:29.385861       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:59:29.385993       1 main.go:227] handling current node
	I0401 18:59:29.386027       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0401 18:59:29.386046       1 main.go:250] Node multinode-853477-m02 has CIDR [10.244.1.0/24] 
	I0401 18:59:39.396577       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:59:39.396838       1 main.go:227] handling current node
	I0401 18:59:39.396886       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0401 18:59:39.396908       1 main.go:250] Node multinode-853477-m02 has CIDR [10.244.1.0/24] 
	I0401 18:59:49.405106       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:59:49.405152       1 main.go:227] handling current node
	I0401 18:59:49.405162       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0401 18:59:49.405167       1 main.go:250] Node multinode-853477-m02 has CIDR [10.244.1.0/24] 
	I0401 18:59:59.419476       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:59:59.419532       1 main.go:227] handling current node
	I0401 18:59:59.419542       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0401 18:59:59.419547       1 main.go:250] Node multinode-853477-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [ceef8d6cd3cb9dd8e9f3d0597ce26adfd43b02b4adba15febb6c5a429b172af6] <==
	I0401 18:53:54.995564       1 main.go:250] Node multinode-853477-m03 has CIDR [10.244.3.0/24] 
	I0401 18:54:05.007359       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:54:05.007451       1 main.go:227] handling current node
	I0401 18:54:05.007475       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0401 18:54:05.007493       1 main.go:250] Node multinode-853477-m02 has CIDR [10.244.1.0/24] 
	I0401 18:54:05.007657       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0401 18:54:05.007678       1 main.go:250] Node multinode-853477-m03 has CIDR [10.244.3.0/24] 
	I0401 18:54:15.021376       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:54:15.021426       1 main.go:227] handling current node
	I0401 18:54:15.021436       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0401 18:54:15.021441       1 main.go:250] Node multinode-853477-m02 has CIDR [10.244.1.0/24] 
	I0401 18:54:15.021538       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0401 18:54:15.021543       1 main.go:250] Node multinode-853477-m03 has CIDR [10.244.3.0/24] 
	I0401 18:54:25.035075       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:54:25.035121       1 main.go:227] handling current node
	I0401 18:54:25.035135       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0401 18:54:25.035141       1 main.go:250] Node multinode-853477-m02 has CIDR [10.244.1.0/24] 
	I0401 18:54:25.035373       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0401 18:54:25.035407       1 main.go:250] Node multinode-853477-m03 has CIDR [10.244.3.0/24] 
	I0401 18:54:35.050078       1 main.go:223] Handling node with IPs: map[192.168.39.161:{}]
	I0401 18:54:35.050152       1 main.go:227] handling current node
	I0401 18:54:35.050168       1 main.go:223] Handling node with IPs: map[192.168.39.239:{}]
	I0401 18:54:35.050177       1 main.go:250] Node multinode-853477-m02 has CIDR [10.244.1.0/24] 
	I0401 18:54:35.050320       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0401 18:54:35.050373       1 main.go:250] Node multinode-853477-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [9db106b7e04aca816f3897bd76a858c1184d517e4cb4e5a76c9b39ecfe288833] <==
	I0401 18:56:16.573596       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0401 18:56:16.646086       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0401 18:56:16.646209       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0401 18:56:16.672329       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0401 18:56:16.673067       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0401 18:56:16.673664       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0401 18:56:16.673783       1 aggregator.go:165] initial CRD sync complete...
	I0401 18:56:16.673818       1 autoregister_controller.go:141] Starting autoregister controller
	I0401 18:56:16.673839       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0401 18:56:16.673862       1 cache.go:39] Caches are synced for autoregister controller
	I0401 18:56:16.674111       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0401 18:56:16.674156       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0401 18:56:16.674236       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0401 18:56:16.674315       1 shared_informer.go:318] Caches are synced for configmaps
	E0401 18:56:16.684114       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0401 18:56:16.709600       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0401 18:56:16.716968       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 18:56:17.578516       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 18:56:19.053352       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0401 18:56:19.218327       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0401 18:56:19.227327       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0401 18:56:19.292546       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 18:56:19.301521       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 18:56:29.867380       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 18:56:29.923426       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [a358f83537522b0aa3022d82ed82c21a975b8c4647196a4c3761ee917e86e184] <==
	I0401 18:50:08.482626       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0401 18:50:08.496303       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.161]
	I0401 18:50:08.497350       1 controller.go:624] quota admission added evaluator for: endpoints
	I0401 18:50:08.504340       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 18:50:08.832409       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0401 18:50:09.938154       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0401 18:50:09.955552       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0401 18:50:09.970205       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0401 18:50:22.691440       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0401 18:50:22.896428       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0401 18:54:36.050442       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0401 18:54:36.085487       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.086182       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.086263       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.086307       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.086378       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.086444       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.086517       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.086550       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.086620       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.086692       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.086806       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.086913       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.086982       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 18:54:36.087049       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [2d3e943c35850d286cf09b65639b8d61930d5765722ef41eea300de98f1c435b] <==
	I0401 18:56:58.958952       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="64.751µs"
	I0401 18:57:00.191506       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="43.305µs"
	I0401 18:57:04.600613       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853477-m02"
	I0401 18:57:04.623684       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="87.633µs"
	I0401 18:57:04.641039       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="48.078µs"
	I0401 18:57:04.907241       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-zh9pz" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-zh9pz"
	I0401 18:57:06.483447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="11.226648ms"
	I0401 18:57:06.486242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="113.865µs"
	I0401 18:57:27.275867       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853477-m02"
	I0401 18:57:28.282487       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-853477-m03\" does not exist"
	I0401 18:57:28.282552       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853477-m02"
	I0401 18:57:28.295232       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-853477-m03" podCIDRs=["10.244.2.0/24"]
	I0401 18:57:35.989998       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853477-m03"
	I0401 18:57:41.798482       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853477-m02"
	I0401 18:57:44.929916       1 event.go:376] "Event occurred" object="multinode-853477-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-853477-m03 event: Removing Node multinode-853477-m03 from Controller"
	I0401 18:58:19.950547       1 event.go:376] "Event occurred" object="multinode-853477-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-853477-m02 status is now: NodeNotReady"
	I0401 18:58:19.968079       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-zh9pz" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:58:19.975903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="9.031426ms"
	I0401 18:58:19.977122       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="97.578µs"
	I0401 18:58:19.985466       1 event.go:376] "Event occurred" object="kube-system/kindnet-6wvv4" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:58:20.002624       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-mthcv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:58:29.768247       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-hc9f2"
	I0401 18:58:29.794052       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-hc9f2"
	I0401 18:58:29.794790       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kindnet-tjr6s"
	I0401 18:58:29.816245       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-tjr6s"
	
	
	==> kube-controller-manager [eb60348bd91879fc1995b558036ae53948482d80d31ba95e51d89b06b08a34ef] <==
	I0401 18:51:11.431127       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="9.41657ms"
	I0401 18:51:11.431214       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="35.516µs"
	I0401 18:51:41.783692       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-853477-m03\" does not exist"
	I0401 18:51:41.784992       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853477-m02"
	I0401 18:51:41.808327       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-853477-m03" podCIDRs=["10.244.2.0/24"]
	I0401 18:51:41.822108       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tjr6s"
	I0401 18:51:41.834266       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hc9f2"
	I0401 18:51:42.113598       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-853477-m03"
	I0401 18:51:42.113849       1 event.go:376] "Event occurred" object="multinode-853477-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-853477-m03 event: Registered Node multinode-853477-m03 in Controller"
	I0401 18:51:50.254954       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853477-m03"
	I0401 18:52:20.742054       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853477-m02"
	I0401 18:52:22.036080       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853477-m02"
	I0401 18:52:22.041183       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-853477-m03\" does not exist"
	I0401 18:52:22.063869       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-853477-m03" podCIDRs=["10.244.3.0/24"]
	I0401 18:52:29.516523       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853477-m02"
	I0401 18:53:12.165801       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853477-m03"
	I0401 18:53:12.166968       1 event.go:376] "Event occurred" object="multinode-853477-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-853477-m02 status is now: NodeNotReady"
	I0401 18:53:12.182424       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-mthcv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:53:12.199013       1 event.go:376] "Event occurred" object="kube-system/kindnet-6wvv4" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:53:12.210174       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-g2mfr" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:53:12.222249       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="11.662869ms"
	I0401 18:53:12.222413       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="90.582µs"
	I0401 18:53:17.222230       1 event.go:376] "Event occurred" object="multinode-853477-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-853477-m03 status is now: NodeNotReady"
	I0401 18:53:17.237365       1 event.go:376] "Event occurred" object="kube-system/kindnet-tjr6s" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0401 18:53:17.252605       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-hc9f2" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-proxy [598be87df6b76ed97503616e0de22130de187736a15f217c9492fd3bcbe3165f] <==
	I0401 18:56:18.436447       1 server_others.go:72] "Using iptables proxy"
	I0401 18:56:18.481411       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.161"]
	I0401 18:56:18.589320       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0401 18:56:18.589454       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 18:56:18.589481       1 server_others.go:168] "Using iptables Proxier"
	I0401 18:56:18.616630       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 18:56:18.616975       1 server.go:865] "Version info" version="v1.29.3"
	I0401 18:56:18.616990       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 18:56:18.618864       1 config.go:188] "Starting service config controller"
	I0401 18:56:18.618899       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0401 18:56:18.618983       1 config.go:97] "Starting endpoint slice config controller"
	I0401 18:56:18.618990       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0401 18:56:18.619495       1 config.go:315] "Starting node config controller"
	I0401 18:56:18.619538       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0401 18:56:18.719954       1 shared_informer.go:318] Caches are synced for node config
	I0401 18:56:18.721317       1 shared_informer.go:318] Caches are synced for service config
	I0401 18:56:18.721500       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [f6ce7b69665bbc01d73b598a2d86525641c7d4fbe714ef3997a3688d286471c4] <==
	I0401 18:50:24.037579       1 server_others.go:72] "Using iptables proxy"
	I0401 18:50:24.076468       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.161"]
	I0401 18:50:24.216571       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0401 18:50:24.217203       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 18:50:24.217388       1 server_others.go:168] "Using iptables Proxier"
	I0401 18:50:24.261898       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 18:50:24.262170       1 server.go:865] "Version info" version="v1.29.3"
	I0401 18:50:24.262221       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 18:50:24.264146       1 config.go:188] "Starting service config controller"
	I0401 18:50:24.264201       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0401 18:50:24.264234       1 config.go:97] "Starting endpoint slice config controller"
	I0401 18:50:24.264251       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0401 18:50:24.265972       1 config.go:315] "Starting node config controller"
	I0401 18:50:24.266014       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0401 18:50:24.364568       1 shared_informer.go:318] Caches are synced for service config
	I0401 18:50:24.364531       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0401 18:50:24.366109       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [10cd91b3bcb1d75ffab7fd0f2f49522fbc9f9df61971c55fb1e6debfe05b20ae] <==
	I0401 18:56:14.465439       1 serving.go:380] Generated self-signed cert in-memory
	W0401 18:56:16.657185       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0401 18:56:16.657584       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W0401 18:56:16.657645       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0401 18:56:16.657672       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0401 18:56:16.679672       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0401 18:56:16.679842       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 18:56:16.684433       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0401 18:56:16.685343       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0401 18:56:16.685420       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 18:56:16.685566       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0401 18:56:16.786057       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [4004daf6fb9e1f819bba0832635a01a785b9e1cbaa7ceefb622a2956cfe7dac8] <==
	W0401 18:50:06.939056       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 18:50:06.939340       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0401 18:50:07.868555       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 18:50:07.868711       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0401 18:50:07.921818       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 18:50:07.921875       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0401 18:50:07.946287       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 18:50:07.946349       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0401 18:50:07.972653       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 18:50:07.972679       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0401 18:50:07.997032       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 18:50:07.997085       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0401 18:50:08.000808       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0401 18:50:08.000855       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0401 18:50:08.009108       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 18:50:08.009179       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0401 18:50:08.027527       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 18:50:08.027574       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0401 18:50:08.326018       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 18:50:08.326416       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0401 18:50:10.221298       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 18:54:36.049495       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0401 18:54:36.049661       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0401 18:54:36.050041       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0401 18:54:36.050248       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 01 18:58:12 multinode-853477 kubelet[3090]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 18:58:12 multinode-853477 kubelet[3090]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 18:58:12 multinode-853477 kubelet[3090]: E0401 18:58:12.410936    3090 manager.go:1116] Failed to create existing container: /kubepods/burstable/podecfdaf127945ce28382d8f90ac75c026/crio-bf98c87f2f39aa7c3585e2ac2500f198edf1b142a9efb4ed874f5a29bfdcf084: Error finding container bf98c87f2f39aa7c3585e2ac2500f198edf1b142a9efb4ed874f5a29bfdcf084: Status 404 returned error can't find the container with id bf98c87f2f39aa7c3585e2ac2500f198edf1b142a9efb4ed874f5a29bfdcf084
	Apr 01 18:58:12 multinode-853477 kubelet[3090]: E0401 18:58:12.411309    3090 manager.go:1116] Failed to create existing container: /kubepods/burstable/podfe554da581d909271f7689d472dd2373/crio-9c287c311be2a1629eaaa8319f309fec795e952b03175efcd304daa90298f755: Error finding container 9c287c311be2a1629eaaa8319f309fec795e952b03175efcd304daa90298f755: Status 404 returned error can't find the container with id 9c287c311be2a1629eaaa8319f309fec795e952b03175efcd304daa90298f755
	Apr 01 18:58:12 multinode-853477 kubelet[3090]: E0401 18:58:12.411788    3090 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod28a933531bbfe72a832eefa37f2dd17c/crio-559da33691144c5132081fb906e2cdeb7f31734801c64dd7ed7d29b9dd0145c5: Error finding container 559da33691144c5132081fb906e2cdeb7f31734801c64dd7ed7d29b9dd0145c5: Status 404 returned error can't find the container with id 559da33691144c5132081fb906e2cdeb7f31734801c64dd7ed7d29b9dd0145c5
	Apr 01 18:58:12 multinode-853477 kubelet[3090]: E0401 18:58:12.412049    3090 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod48a5c765-3e3b-408f-8e4a-c53083b879f3/crio-75744372fad26de7b6e1b6c839fc23b2ff308507c86fdca8567273156dbda995: Error finding container 75744372fad26de7b6e1b6c839fc23b2ff308507c86fdca8567273156dbda995: Status 404 returned error can't find the container with id 75744372fad26de7b6e1b6c839fc23b2ff308507c86fdca8567273156dbda995
	Apr 01 18:58:12 multinode-853477 kubelet[3090]: E0401 18:58:12.414853    3090 manager.go:1116] Failed to create existing container: /kubepods/pod1dfd3904-101a-4734-abf3-8cb24d0a5e04/crio-77c08d4d25f512e9dfdf6ef3f9d48537f063a828df94fe22c0ae7d0dddb8cae0: Error finding container 77c08d4d25f512e9dfdf6ef3f9d48537f063a828df94fe22c0ae7d0dddb8cae0: Status 404 returned error can't find the container with id 77c08d4d25f512e9dfdf6ef3f9d48537f063a828df94fe22c0ae7d0dddb8cae0
	Apr 01 18:58:12 multinode-853477 kubelet[3090]: E0401 18:58:12.416233    3090 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podc3c447a9-e35f-4cf8-95db-abfbb425cab3/crio-128a8579b95a9768f81397b4d324b60498232e0267943a31e4c4c96dbefd2fd8: Error finding container 128a8579b95a9768f81397b4d324b60498232e0267943a31e4c4c96dbefd2fd8: Status 404 returned error can't find the container with id 128a8579b95a9768f81397b4d324b60498232e0267943a31e4c4c96dbefd2fd8
	Apr 01 18:58:12 multinode-853477 kubelet[3090]: E0401 18:58:12.416498    3090 manager.go:1116] Failed to create existing container: /kubepods/besteffort/poda5a8401b-2fe6-4724-99e0-63a1b6cb4367/crio-7bee69ef0da5ad552234d2defa9c6e8528220a57a16b3ff76f744f5f7614f96a: Error finding container 7bee69ef0da5ad552234d2defa9c6e8528220a57a16b3ff76f744f5f7614f96a: Status 404 returned error can't find the container with id 7bee69ef0da5ad552234d2defa9c6e8528220a57a16b3ff76f744f5f7614f96a
	Apr 01 18:58:12 multinode-853477 kubelet[3090]: E0401 18:58:12.416754    3090 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod2b9674072226eb1dc8bbe6dde036b55a/crio-90fafaa15e6fa316c3a0a0cee56c57f10126e6ee997df72f659139377156b3c8: Error finding container 90fafaa15e6fa316c3a0a0cee56c57f10126e6ee997df72f659139377156b3c8: Status 404 returned error can't find the container with id 90fafaa15e6fa316c3a0a0cee56c57f10126e6ee997df72f659139377156b3c8
	Apr 01 18:58:12 multinode-853477 kubelet[3090]: E0401 18:58:12.416993    3090 manager.go:1116] Failed to create existing container: /kubepods/besteffort/poddb1681a5-1807-454a-9b1f-90edc80f2243/crio-62f47c59bbfa39e1bc767709d6843ee547ff6fcd0bca672229b763606063b3df: Error finding container 62f47c59bbfa39e1bc767709d6843ee547ff6fcd0bca672229b763606063b3df: Status 404 returned error can't find the container with id 62f47c59bbfa39e1bc767709d6843ee547ff6fcd0bca672229b763606063b3df
	Apr 01 18:59:12 multinode-853477 kubelet[3090]: E0401 18:59:12.373023    3090 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 18:59:12 multinode-853477 kubelet[3090]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 18:59:12 multinode-853477 kubelet[3090]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 18:59:12 multinode-853477 kubelet[3090]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 18:59:12 multinode-853477 kubelet[3090]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 18:59:12 multinode-853477 kubelet[3090]: E0401 18:59:12.409932    3090 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod28a933531bbfe72a832eefa37f2dd17c/crio-559da33691144c5132081fb906e2cdeb7f31734801c64dd7ed7d29b9dd0145c5: Error finding container 559da33691144c5132081fb906e2cdeb7f31734801c64dd7ed7d29b9dd0145c5: Status 404 returned error can't find the container with id 559da33691144c5132081fb906e2cdeb7f31734801c64dd7ed7d29b9dd0145c5
	Apr 01 18:59:12 multinode-853477 kubelet[3090]: E0401 18:59:12.410462    3090 manager.go:1116] Failed to create existing container: /kubepods/besteffort/poddb1681a5-1807-454a-9b1f-90edc80f2243/crio-62f47c59bbfa39e1bc767709d6843ee547ff6fcd0bca672229b763606063b3df: Error finding container 62f47c59bbfa39e1bc767709d6843ee547ff6fcd0bca672229b763606063b3df: Status 404 returned error can't find the container with id 62f47c59bbfa39e1bc767709d6843ee547ff6fcd0bca672229b763606063b3df
	Apr 01 18:59:12 multinode-853477 kubelet[3090]: E0401 18:59:12.410859    3090 manager.go:1116] Failed to create existing container: /kubepods/pod1dfd3904-101a-4734-abf3-8cb24d0a5e04/crio-77c08d4d25f512e9dfdf6ef3f9d48537f063a828df94fe22c0ae7d0dddb8cae0: Error finding container 77c08d4d25f512e9dfdf6ef3f9d48537f063a828df94fe22c0ae7d0dddb8cae0: Status 404 returned error can't find the container with id 77c08d4d25f512e9dfdf6ef3f9d48537f063a828df94fe22c0ae7d0dddb8cae0
	Apr 01 18:59:12 multinode-853477 kubelet[3090]: E0401 18:59:12.411328    3090 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod48a5c765-3e3b-408f-8e4a-c53083b879f3/crio-75744372fad26de7b6e1b6c839fc23b2ff308507c86fdca8567273156dbda995: Error finding container 75744372fad26de7b6e1b6c839fc23b2ff308507c86fdca8567273156dbda995: Status 404 returned error can't find the container with id 75744372fad26de7b6e1b6c839fc23b2ff308507c86fdca8567273156dbda995
	Apr 01 18:59:12 multinode-853477 kubelet[3090]: E0401 18:59:12.411480    3090 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod2b9674072226eb1dc8bbe6dde036b55a/crio-90fafaa15e6fa316c3a0a0cee56c57f10126e6ee997df72f659139377156b3c8: Error finding container 90fafaa15e6fa316c3a0a0cee56c57f10126e6ee997df72f659139377156b3c8: Status 404 returned error can't find the container with id 90fafaa15e6fa316c3a0a0cee56c57f10126e6ee997df72f659139377156b3c8
	Apr 01 18:59:12 multinode-853477 kubelet[3090]: E0401 18:59:12.411583    3090 manager.go:1116] Failed to create existing container: /kubepods/besteffort/poda5a8401b-2fe6-4724-99e0-63a1b6cb4367/crio-7bee69ef0da5ad552234d2defa9c6e8528220a57a16b3ff76f744f5f7614f96a: Error finding container 7bee69ef0da5ad552234d2defa9c6e8528220a57a16b3ff76f744f5f7614f96a: Status 404 returned error can't find the container with id 7bee69ef0da5ad552234d2defa9c6e8528220a57a16b3ff76f744f5f7614f96a
	Apr 01 18:59:12 multinode-853477 kubelet[3090]: E0401 18:59:12.411910    3090 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podc3c447a9-e35f-4cf8-95db-abfbb425cab3/crio-128a8579b95a9768f81397b4d324b60498232e0267943a31e4c4c96dbefd2fd8: Error finding container 128a8579b95a9768f81397b4d324b60498232e0267943a31e4c4c96dbefd2fd8: Status 404 returned error can't find the container with id 128a8579b95a9768f81397b4d324b60498232e0267943a31e4c4c96dbefd2fd8
	Apr 01 18:59:12 multinode-853477 kubelet[3090]: E0401 18:59:12.412218    3090 manager.go:1116] Failed to create existing container: /kubepods/burstable/podfe554da581d909271f7689d472dd2373/crio-9c287c311be2a1629eaaa8319f309fec795e952b03175efcd304daa90298f755: Error finding container 9c287c311be2a1629eaaa8319f309fec795e952b03175efcd304daa90298f755: Status 404 returned error can't find the container with id 9c287c311be2a1629eaaa8319f309fec795e952b03175efcd304daa90298f755
	Apr 01 18:59:12 multinode-853477 kubelet[3090]: E0401 18:59:12.412541    3090 manager.go:1116] Failed to create existing container: /kubepods/burstable/podecfdaf127945ce28382d8f90ac75c026/crio-bf98c87f2f39aa7c3585e2ac2500f198edf1b142a9efb4ed874f5a29bfdcf084: Error finding container bf98c87f2f39aa7c3585e2ac2500f198edf1b142a9efb4ed874f5a29bfdcf084: Status 404 returned error can't find the container with id bf98c87f2f39aa7c3585e2ac2500f198edf1b142a9efb4ed874f5a29bfdcf084
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 19:00:02.598266   44633 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18233-10493/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-853477 -n multinode-853477
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-853477 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.47s)

                                                
                                    
x
+
TestPreload (274.43s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-578801 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0401 19:03:52.854603   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 19:04:16.856959   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-578801 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m13.198789779s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-578801 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-578801 image pull gcr.io/k8s-minikube/busybox: (1.370733316s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-578801
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-578801: exit status 82 (2m0.461515067s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-578801"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-578801 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-04-01 19:07:52.252030478 +0000 UTC m=+3701.707581665
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-578801 -n test-preload-578801
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-578801 -n test-preload-578801: exit status 3 (18.438247146s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 19:08:10.685942   47011 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.194:22: connect: no route to host
	E0401 19:08:10.685965   47011 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.194:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-578801" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-578801" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-578801
--- FAIL: TestPreload (274.43s)

                                                
                                    
x
+
TestKubernetesUpgrade (452.11s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-054413 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-054413 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m1.213619399s)

-- stdout --
	* [kubernetes-upgrade-054413] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-054413" primary control-plane node in "kubernetes-upgrade-054413" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0401 19:13:45.993663   53106 out.go:291] Setting OutFile to fd 1 ...
	I0401 19:13:45.994135   53106 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:13:45.994150   53106 out.go:304] Setting ErrFile to fd 2...
	I0401 19:13:45.994164   53106 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:13:45.994525   53106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 19:13:45.995321   53106 out.go:298] Setting JSON to false
	I0401 19:13:45.996893   53106 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6978,"bootTime":1711991848,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:13:45.996957   53106 start.go:139] virtualization: kvm guest
	I0401 19:13:45.999227   53106 out.go:177] * [kubernetes-upgrade-054413] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 19:13:46.001255   53106 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 19:13:46.001267   53106 notify.go:220] Checking for updates...
	I0401 19:13:46.004271   53106 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:13:46.005765   53106 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:13:46.007286   53106 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:13:46.008811   53106 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 19:13:46.010328   53106 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 19:13:46.012487   53106 config.go:182] Loaded profile config "NoKubernetes-249249": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0401 19:13:46.012575   53106 config.go:182] Loaded profile config "cert-expiration-385547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:13:46.012658   53106 config.go:182] Loaded profile config "running-upgrade-349166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0401 19:13:46.012735   53106 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 19:13:46.771815   53106 out.go:177] * Using the kvm2 driver based on user configuration
	I0401 19:13:46.773163   53106 start.go:297] selected driver: kvm2
	I0401 19:13:46.773175   53106 start.go:901] validating driver "kvm2" against <nil>
	I0401 19:13:46.773186   53106 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 19:13:46.773939   53106 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:13:46.774022   53106 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 19:13:46.789212   53106 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 19:13:46.789266   53106 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 19:13:46.789508   53106 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0401 19:13:46.789575   53106 cni.go:84] Creating CNI manager for ""
	I0401 19:13:46.789591   53106 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:13:46.789598   53106 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0401 19:13:46.789675   53106 start.go:340] cluster config:
	{Name:kubernetes-upgrade-054413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-054413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:13:46.789786   53106 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:13:46.791654   53106 out.go:177] * Starting "kubernetes-upgrade-054413" primary control-plane node in "kubernetes-upgrade-054413" cluster
	I0401 19:13:46.792881   53106 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 19:13:46.792919   53106 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0401 19:13:46.792937   53106 cache.go:56] Caching tarball of preloaded images
	I0401 19:13:46.793011   53106 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 19:13:46.793023   53106 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0401 19:13:46.793118   53106 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/config.json ...
	I0401 19:13:46.793141   53106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/config.json: {Name:mkd3f3edcb04d099d5b2539b8bb83fd5c29d37e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:13:46.793280   53106 start.go:360] acquireMachinesLock for kubernetes-upgrade-054413: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 19:14:11.870670   53106 start.go:364] duration metric: took 25.077364095s to acquireMachinesLock for "kubernetes-upgrade-054413"
	I0401 19:14:11.870745   53106 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-054413 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-054413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:14:11.870863   53106 start.go:125] createHost starting for "" (driver="kvm2")
	I0401 19:14:11.872965   53106 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0401 19:14:11.873240   53106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:14:11.873291   53106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:14:11.889133   53106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43643
	I0401 19:14:11.889562   53106 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:14:11.890100   53106 main.go:141] libmachine: Using API Version  1
	I0401 19:14:11.890129   53106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:14:11.890545   53106 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:14:11.890754   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetMachineName
	I0401 19:14:11.890914   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .DriverName
	I0401 19:14:11.891117   53106 start.go:159] libmachine.API.Create for "kubernetes-upgrade-054413" (driver="kvm2")
	I0401 19:14:11.891153   53106 client.go:168] LocalClient.Create starting
	I0401 19:14:11.891195   53106 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem
	I0401 19:14:11.891234   53106 main.go:141] libmachine: Decoding PEM data...
	I0401 19:14:11.891256   53106 main.go:141] libmachine: Parsing certificate...
	I0401 19:14:11.891331   53106 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem
	I0401 19:14:11.891353   53106 main.go:141] libmachine: Decoding PEM data...
	I0401 19:14:11.891377   53106 main.go:141] libmachine: Parsing certificate...
	I0401 19:14:11.891406   53106 main.go:141] libmachine: Running pre-create checks...
	I0401 19:14:11.891423   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .PreCreateCheck
	I0401 19:14:11.891746   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetConfigRaw
	I0401 19:14:11.892200   53106 main.go:141] libmachine: Creating machine...
	I0401 19:14:11.892219   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .Create
	I0401 19:14:11.892332   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Creating KVM machine...
	I0401 19:14:11.893548   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found existing default KVM network
	I0401 19:14:11.894577   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | I0401 19:14:11.894423   53451 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:49:d3:b9} reservation:<nil>}
	I0401 19:14:11.895518   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | I0401 19:14:11.895441   53451 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002645c0}
	I0401 19:14:11.895535   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | created network xml: 
	I0401 19:14:11.895559   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | <network>
	I0401 19:14:11.895593   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG |   <name>mk-kubernetes-upgrade-054413</name>
	I0401 19:14:11.895608   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG |   <dns enable='no'/>
	I0401 19:14:11.895623   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG |   
	I0401 19:14:11.895635   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0401 19:14:11.895656   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG |     <dhcp>
	I0401 19:14:11.895670   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0401 19:14:11.895682   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG |     </dhcp>
	I0401 19:14:11.895692   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG |   </ip>
	I0401 19:14:11.895697   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG |   
	I0401 19:14:11.895705   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | </network>
	I0401 19:14:11.895710   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | 
	I0401 19:14:11.901391   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | trying to create private KVM network mk-kubernetes-upgrade-054413 192.168.50.0/24...
	I0401 19:14:11.967674   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | private KVM network mk-kubernetes-upgrade-054413 192.168.50.0/24 created
	I0401 19:14:11.967711   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Setting up store path in /home/jenkins/minikube-integration/18233-10493/.minikube/machines/kubernetes-upgrade-054413 ...
	I0401 19:14:11.967727   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | I0401 19:14:11.967607   53451 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:14:11.967750   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Building disk image from file:///home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso
	I0401 19:14:11.967771   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Downloading /home/jenkins/minikube-integration/18233-10493/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0401 19:14:12.188439   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | I0401 19:14:12.188321   53451 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/kubernetes-upgrade-054413/id_rsa...
	I0401 19:14:12.228945   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | I0401 19:14:12.228828   53451 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/kubernetes-upgrade-054413/kubernetes-upgrade-054413.rawdisk...
	I0401 19:14:12.228974   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | Writing magic tar header
	I0401 19:14:12.229018   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | Writing SSH key tar header
	I0401 19:14:12.229037   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | I0401 19:14:12.228939   53451 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18233-10493/.minikube/machines/kubernetes-upgrade-054413 ...
	I0401 19:14:12.229063   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/kubernetes-upgrade-054413
	I0401 19:14:12.229079   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube/machines
	I0401 19:14:12.229087   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube/machines/kubernetes-upgrade-054413 (perms=drwx------)
	I0401 19:14:12.229098   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube/machines (perms=drwxr-xr-x)
	I0401 19:14:12.229111   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:14:12.229124   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube (perms=drwxr-xr-x)
	I0401 19:14:12.229141   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493
	I0401 19:14:12.229157   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0401 19:14:12.229166   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | Checking permissions on dir: /home/jenkins
	I0401 19:14:12.229178   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | Checking permissions on dir: /home
	I0401 19:14:12.229190   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | Skipping /home - not owner
	I0401 19:14:12.229203   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493 (perms=drwxrwxr-x)
	I0401 19:14:12.229217   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0401 19:14:12.229229   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0401 19:14:12.229240   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Creating domain...
	I0401 19:14:12.230227   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) define libvirt domain using xml: 
	I0401 19:14:12.230255   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) <domain type='kvm'>
	I0401 19:14:12.230266   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)   <name>kubernetes-upgrade-054413</name>
	I0401 19:14:12.230284   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)   <memory unit='MiB'>2200</memory>
	I0401 19:14:12.230297   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)   <vcpu>2</vcpu>
	I0401 19:14:12.230309   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)   <features>
	I0401 19:14:12.230321   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)     <acpi/>
	I0401 19:14:12.230329   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)     <apic/>
	I0401 19:14:12.230352   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)     <pae/>
	I0401 19:14:12.230363   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)     
	I0401 19:14:12.230374   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)   </features>
	I0401 19:14:12.230387   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)   <cpu mode='host-passthrough'>
	I0401 19:14:12.230398   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)   
	I0401 19:14:12.230417   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)   </cpu>
	I0401 19:14:12.230429   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)   <os>
	I0401 19:14:12.230434   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)     <type>hvm</type>
	I0401 19:14:12.230440   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)     <boot dev='cdrom'/>
	I0401 19:14:12.230447   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)     <boot dev='hd'/>
	I0401 19:14:12.230480   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)     <bootmenu enable='no'/>
	I0401 19:14:12.230502   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)   </os>
	I0401 19:14:12.230515   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)   <devices>
	I0401 19:14:12.230528   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)     <disk type='file' device='cdrom'>
	I0401 19:14:12.230547   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)       <source file='/home/jenkins/minikube-integration/18233-10493/.minikube/machines/kubernetes-upgrade-054413/boot2docker.iso'/>
	I0401 19:14:12.230561   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)       <target dev='hdc' bus='scsi'/>
	I0401 19:14:12.230576   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)       <readonly/>
	I0401 19:14:12.230594   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)     </disk>
	I0401 19:14:12.230607   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)     <disk type='file' device='disk'>
	I0401 19:14:12.230619   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0401 19:14:12.230637   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)       <source file='/home/jenkins/minikube-integration/18233-10493/.minikube/machines/kubernetes-upgrade-054413/kubernetes-upgrade-054413.rawdisk'/>
	I0401 19:14:12.230654   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)       <target dev='hda' bus='virtio'/>
	I0401 19:14:12.230666   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)     </disk>
	I0401 19:14:12.230679   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)     <interface type='network'>
	I0401 19:14:12.230694   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)       <source network='mk-kubernetes-upgrade-054413'/>
	I0401 19:14:12.230705   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)       <model type='virtio'/>
	I0401 19:14:12.230720   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)     </interface>
	I0401 19:14:12.230736   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)     <interface type='network'>
	I0401 19:14:12.230750   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)       <source network='default'/>
	I0401 19:14:12.230762   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)       <model type='virtio'/>
	I0401 19:14:12.230776   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)     </interface>
	I0401 19:14:12.230797   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)     <serial type='pty'>
	I0401 19:14:12.230809   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)       <target port='0'/>
	I0401 19:14:12.230820   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)     </serial>
	I0401 19:14:12.230831   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)     <console type='pty'>
	I0401 19:14:12.230848   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)       <target type='serial' port='0'/>
	I0401 19:14:12.230859   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)     </console>
	I0401 19:14:12.230870   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)     <rng model='virtio'>
	I0401 19:14:12.230882   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)       <backend model='random'>/dev/random</backend>
	I0401 19:14:12.230894   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)     </rng>
	I0401 19:14:12.230907   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)     
	I0401 19:14:12.230918   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)     
	I0401 19:14:12.230931   53106 main.go:141] libmachine: (kubernetes-upgrade-054413)   </devices>
	I0401 19:14:12.230942   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) </domain>
	I0401 19:14:12.230957   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) 
	I0401 19:14:12.234899   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:6e:b9:c9 in network default
	I0401 19:14:12.235446   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Ensuring networks are active...
	I0401 19:14:12.235469   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:12.236116   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Ensuring network default is active
	I0401 19:14:12.236409   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Ensuring network mk-kubernetes-upgrade-054413 is active
	I0401 19:14:12.236807   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Getting domain xml...
	I0401 19:14:12.237486   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Creating domain...
	I0401 19:14:13.449581   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Waiting to get IP...
	I0401 19:14:13.450399   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:13.450840   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | unable to find current IP address of domain kubernetes-upgrade-054413 in network mk-kubernetes-upgrade-054413
	I0401 19:14:13.450868   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | I0401 19:14:13.450809   53451 retry.go:31] will retry after 201.964665ms: waiting for machine to come up
	I0401 19:14:13.654314   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:13.654781   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | unable to find current IP address of domain kubernetes-upgrade-054413 in network mk-kubernetes-upgrade-054413
	I0401 19:14:13.654809   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | I0401 19:14:13.654730   53451 retry.go:31] will retry after 347.470501ms: waiting for machine to come up
	I0401 19:14:14.004342   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:14.004808   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | unable to find current IP address of domain kubernetes-upgrade-054413 in network mk-kubernetes-upgrade-054413
	I0401 19:14:14.004836   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | I0401 19:14:14.004756   53451 retry.go:31] will retry after 428.071482ms: waiting for machine to come up
	I0401 19:14:14.434393   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:14.434870   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | unable to find current IP address of domain kubernetes-upgrade-054413 in network mk-kubernetes-upgrade-054413
	I0401 19:14:14.434901   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | I0401 19:14:14.434852   53451 retry.go:31] will retry after 383.896121ms: waiting for machine to come up
	I0401 19:14:14.820378   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:14.820910   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | unable to find current IP address of domain kubernetes-upgrade-054413 in network mk-kubernetes-upgrade-054413
	I0401 19:14:14.820940   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | I0401 19:14:14.820853   53451 retry.go:31] will retry after 625.790812ms: waiting for machine to come up
	I0401 19:14:15.448647   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:15.449176   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | unable to find current IP address of domain kubernetes-upgrade-054413 in network mk-kubernetes-upgrade-054413
	I0401 19:14:15.449201   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | I0401 19:14:15.449133   53451 retry.go:31] will retry after 677.902472ms: waiting for machine to come up
	I0401 19:14:16.129286   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:16.129933   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | unable to find current IP address of domain kubernetes-upgrade-054413 in network mk-kubernetes-upgrade-054413
	I0401 19:14:16.129965   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | I0401 19:14:16.129862   53451 retry.go:31] will retry after 1.183062023s: waiting for machine to come up
	I0401 19:14:17.314238   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:17.314718   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | unable to find current IP address of domain kubernetes-upgrade-054413 in network mk-kubernetes-upgrade-054413
	I0401 19:14:17.314745   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | I0401 19:14:17.314694   53451 retry.go:31] will retry after 1.063500984s: waiting for machine to come up
	I0401 19:14:18.379991   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:18.380537   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | unable to find current IP address of domain kubernetes-upgrade-054413 in network mk-kubernetes-upgrade-054413
	I0401 19:14:18.380574   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | I0401 19:14:18.380482   53451 retry.go:31] will retry after 1.192579436s: waiting for machine to come up
	I0401 19:14:19.574690   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:19.575246   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | unable to find current IP address of domain kubernetes-upgrade-054413 in network mk-kubernetes-upgrade-054413
	I0401 19:14:19.575281   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | I0401 19:14:19.575193   53451 retry.go:31] will retry after 1.466971383s: waiting for machine to come up
	I0401 19:14:21.043332   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:21.043793   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | unable to find current IP address of domain kubernetes-upgrade-054413 in network mk-kubernetes-upgrade-054413
	I0401 19:14:21.043838   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | I0401 19:14:21.043744   53451 retry.go:31] will retry after 2.665905204s: waiting for machine to come up
	I0401 19:14:23.710811   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:23.711260   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | unable to find current IP address of domain kubernetes-upgrade-054413 in network mk-kubernetes-upgrade-054413
	I0401 19:14:23.711287   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | I0401 19:14:23.711237   53451 retry.go:31] will retry after 2.973466608s: waiting for machine to come up
	I0401 19:14:26.688274   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:26.688704   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | unable to find current IP address of domain kubernetes-upgrade-054413 in network mk-kubernetes-upgrade-054413
	I0401 19:14:26.688729   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | I0401 19:14:26.688659   53451 retry.go:31] will retry after 4.431394005s: waiting for machine to come up
	I0401 19:14:31.122876   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:31.123389   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | unable to find current IP address of domain kubernetes-upgrade-054413 in network mk-kubernetes-upgrade-054413
	I0401 19:14:31.123436   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | I0401 19:14:31.123347   53451 retry.go:31] will retry after 4.760600658s: waiting for machine to come up
	I0401 19:14:35.887136   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:35.887512   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Found IP for machine: 192.168.50.39
	I0401 19:14:35.887550   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has current primary IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:35.887560   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Reserving static IP address...
	I0401 19:14:35.887903   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-054413", mac: "52:54:00:e7:2c:90", ip: "192.168.50.39"} in network mk-kubernetes-upgrade-054413
	I0401 19:14:35.960464   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | Getting to WaitForSSH function...
	I0401 19:14:35.960494   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Reserved static IP address: 192.168.50.39
	I0401 19:14:35.960508   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Waiting for SSH to be available...
	I0401 19:14:35.963485   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:35.963989   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e7:2c:90}
	I0401 19:14:35.964024   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:35.964138   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | Using SSH client type: external
	I0401 19:14:35.964162   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/kubernetes-upgrade-054413/id_rsa (-rw-------)
	I0401 19:14:35.964197   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/kubernetes-upgrade-054413/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:14:35.964211   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | About to run SSH command:
	I0401 19:14:35.964223   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | exit 0
	I0401 19:14:36.090648   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | SSH cmd err, output: <nil>: 
	I0401 19:14:36.090916   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) KVM machine creation complete!
	I0401 19:14:36.091253   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetConfigRaw
	I0401 19:14:36.091884   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .DriverName
	I0401 19:14:36.092088   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .DriverName
	I0401 19:14:36.092233   53106 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0401 19:14:36.092244   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetState
	I0401 19:14:36.093532   53106 main.go:141] libmachine: Detecting operating system of created instance...
	I0401 19:14:36.093545   53106 main.go:141] libmachine: Waiting for SSH to be available...
	I0401 19:14:36.093560   53106 main.go:141] libmachine: Getting to WaitForSSH function...
	I0401 19:14:36.093567   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHHostname
	I0401 19:14:36.096168   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:36.096519   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:14:36.096563   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:36.096725   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHPort
	I0401 19:14:36.096896   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:14:36.097060   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:14:36.097187   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHUsername
	I0401 19:14:36.097356   53106 main.go:141] libmachine: Using SSH client type: native
	I0401 19:14:36.097597   53106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I0401 19:14:36.097616   53106 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0401 19:14:36.201349   53106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:14:36.201382   53106 main.go:141] libmachine: Detecting the provisioner...
	I0401 19:14:36.201400   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHHostname
	I0401 19:14:36.204035   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:36.204372   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:14:36.204414   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:36.204521   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHPort
	I0401 19:14:36.204698   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:14:36.204830   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:14:36.204985   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHUsername
	I0401 19:14:36.205179   53106 main.go:141] libmachine: Using SSH client type: native
	I0401 19:14:36.205357   53106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I0401 19:14:36.205371   53106 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0401 19:14:36.314819   53106 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0401 19:14:36.314902   53106 main.go:141] libmachine: found compatible host: buildroot
	I0401 19:14:36.314917   53106 main.go:141] libmachine: Provisioning with buildroot...
	I0401 19:14:36.314932   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetMachineName
	I0401 19:14:36.315208   53106 buildroot.go:166] provisioning hostname "kubernetes-upgrade-054413"
	I0401 19:14:36.315243   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetMachineName
	I0401 19:14:36.315411   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHHostname
	I0401 19:14:36.318239   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:36.318623   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:14:36.318663   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:36.318782   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHPort
	I0401 19:14:36.318970   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:14:36.319131   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:14:36.319293   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHUsername
	I0401 19:14:36.319462   53106 main.go:141] libmachine: Using SSH client type: native
	I0401 19:14:36.319628   53106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I0401 19:14:36.319640   53106 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-054413 && echo "kubernetes-upgrade-054413" | sudo tee /etc/hostname
	I0401 19:14:36.446085   53106 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-054413
	
	I0401 19:14:36.446115   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHHostname
	I0401 19:14:36.448961   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:36.449413   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:14:36.449444   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:36.449634   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHPort
	I0401 19:14:36.449865   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:14:36.450032   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:14:36.450186   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHUsername
	I0401 19:14:36.450363   53106 main.go:141] libmachine: Using SSH client type: native
	I0401 19:14:36.450570   53106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I0401 19:14:36.450588   53106 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-054413' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-054413/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-054413' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:14:36.568312   53106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:14:36.568337   53106 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:14:36.568391   53106 buildroot.go:174] setting up certificates
	I0401 19:14:36.568406   53106 provision.go:84] configureAuth start
	I0401 19:14:36.568418   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetMachineName
	I0401 19:14:36.568754   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetIP
	I0401 19:14:36.571497   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:36.571867   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:14:36.571904   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:36.572044   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHHostname
	I0401 19:14:36.574378   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:36.574701   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:14:36.574726   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:36.574871   53106 provision.go:143] copyHostCerts
	I0401 19:14:36.574938   53106 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:14:36.574953   53106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:14:36.575026   53106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:14:36.575189   53106 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:14:36.575202   53106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:14:36.575238   53106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:14:36.575336   53106 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:14:36.575347   53106 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:14:36.575379   53106 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:14:36.575467   53106 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-054413 san=[127.0.0.1 192.168.50.39 kubernetes-upgrade-054413 localhost minikube]
	I0401 19:14:36.836585   53106 provision.go:177] copyRemoteCerts
	I0401 19:14:36.836638   53106 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:14:36.836667   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHHostname
	I0401 19:14:36.839949   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:36.840630   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:14:36.840661   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:36.840830   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHPort
	I0401 19:14:36.841030   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:14:36.841170   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHUsername
	I0401 19:14:36.841277   53106 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/kubernetes-upgrade-054413/id_rsa Username:docker}
	I0401 19:14:36.925469   53106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:14:36.952467   53106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0401 19:14:36.983094   53106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 19:14:37.015669   53106 provision.go:87] duration metric: took 447.251695ms to configureAuth
	I0401 19:14:37.015693   53106 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:14:37.015893   53106 config.go:182] Loaded profile config "kubernetes-upgrade-054413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 19:14:37.015970   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHHostname
	I0401 19:14:37.018903   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:37.019366   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:14:37.019396   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:37.019580   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHPort
	I0401 19:14:37.019796   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:14:37.020010   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:14:37.020196   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHUsername
	I0401 19:14:37.020380   53106 main.go:141] libmachine: Using SSH client type: native
	I0401 19:14:37.020572   53106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I0401 19:14:37.020587   53106 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:14:37.349762   53106 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:14:37.349797   53106 main.go:141] libmachine: Checking connection to Docker...
	I0401 19:14:37.349809   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetURL
	I0401 19:14:37.351205   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | Using libvirt version 6000000
	I0401 19:14:37.353845   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:37.354203   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:14:37.354222   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:37.354406   53106 main.go:141] libmachine: Docker is up and running!
	I0401 19:14:37.354420   53106 main.go:141] libmachine: Reticulating splines...
	I0401 19:14:37.354428   53106 client.go:171] duration metric: took 25.463264942s to LocalClient.Create
	I0401 19:14:37.354466   53106 start.go:167] duration metric: took 25.463350224s to libmachine.API.Create "kubernetes-upgrade-054413"
	I0401 19:14:37.354479   53106 start.go:293] postStartSetup for "kubernetes-upgrade-054413" (driver="kvm2")
	I0401 19:14:37.354493   53106 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:14:37.354533   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .DriverName
	I0401 19:14:37.354781   53106 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:14:37.354801   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHHostname
	I0401 19:14:37.357338   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:37.357705   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:14:37.357732   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:37.357881   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHPort
	I0401 19:14:37.358068   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:14:37.358229   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHUsername
	I0401 19:14:37.358421   53106 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/kubernetes-upgrade-054413/id_rsa Username:docker}
	I0401 19:14:37.445302   53106 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:14:37.450198   53106 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:14:37.450226   53106 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:14:37.450293   53106 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:14:37.450421   53106 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:14:37.450551   53106 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:14:37.462481   53106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:14:37.491656   53106 start.go:296] duration metric: took 137.149474ms for postStartSetup
	I0401 19:14:37.491706   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetConfigRaw
	I0401 19:14:37.492319   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetIP
	I0401 19:14:37.495100   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:37.495446   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:14:37.495475   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:37.495795   53106 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/config.json ...
	I0401 19:14:37.496000   53106 start.go:128] duration metric: took 25.625125683s to createHost
	I0401 19:14:37.496032   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHHostname
	I0401 19:14:37.498444   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:37.498865   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:14:37.498905   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:37.499068   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHPort
	I0401 19:14:37.499273   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:14:37.499488   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:14:37.499684   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHUsername
	I0401 19:14:37.499874   53106 main.go:141] libmachine: Using SSH client type: native
	I0401 19:14:37.500075   53106 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I0401 19:14:37.500095   53106 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 19:14:37.620770   53106 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711998877.569801884
	
	I0401 19:14:37.620793   53106 fix.go:216] guest clock: 1711998877.569801884
	I0401 19:14:37.620800   53106 fix.go:229] Guest: 2024-04-01 19:14:37.569801884 +0000 UTC Remote: 2024-04-01 19:14:37.496017242 +0000 UTC m=+51.552345656 (delta=73.784642ms)
	I0401 19:14:37.620841   53106 fix.go:200] guest clock delta is within tolerance: 73.784642ms
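The fix.go lines above run `date +%s.%N` on the guest, parse the epoch output and compare it with the local clock (a 73.78ms delta here, within tolerance). Below is a small sketch of that comparison; clockDelta is an invented helper, and the 1-second tolerance is an assumption made only for illustration.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// Compare a guest's "date +%s.%N" output against the local clock and check
// the skew against a tolerance, as the fix.go lines above do. Float parsing
// loses sub-microsecond precision, which is fine for a skew check.
func clockDelta(guestSeconds string, local time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestSeconds, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(local), nil
}

func main() {
	delta, err := clockDelta("1711998877.569801884", time.Now())
	if err != nil {
		panic(err)
	}
	if math.Abs(delta.Seconds()) > 1 { // hypothetical 1s tolerance
		fmt.Printf("guest clock skew too large: %v\n", delta)
		return
	}
	fmt.Printf("guest clock delta within tolerance: %v\n", delta)
}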
	I0401 19:14:37.620848   53106 start.go:83] releasing machines lock for "kubernetes-upgrade-054413", held for 25.750130444s
	I0401 19:14:37.620881   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .DriverName
	I0401 19:14:37.621151   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetIP
	I0401 19:14:37.624342   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:37.624751   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:14:37.624775   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:37.624971   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .DriverName
	I0401 19:14:37.625517   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .DriverName
	I0401 19:14:37.625729   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .DriverName
	I0401 19:14:37.625834   53106 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:14:37.625872   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHHostname
	I0401 19:14:37.625927   53106 ssh_runner.go:195] Run: cat /version.json
	I0401 19:14:37.625960   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHHostname
	I0401 19:14:37.629074   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:37.629217   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:37.629570   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:14:37.629600   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:37.629760   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHPort
	I0401 19:14:37.629919   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:14:37.629965   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:14:37.629993   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:37.630168   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHUsername
	I0401 19:14:37.630249   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHPort
	I0401 19:14:37.630363   53106 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/kubernetes-upgrade-054413/id_rsa Username:docker}
	I0401 19:14:37.630459   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:14:37.630570   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHUsername
	I0401 19:14:37.630742   53106 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/kubernetes-upgrade-054413/id_rsa Username:docker}
	I0401 19:14:37.758446   53106 ssh_runner.go:195] Run: systemctl --version
	I0401 19:14:37.769182   53106 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:14:37.947233   53106 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:14:37.954746   53106 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:14:37.954834   53106 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:14:37.982236   53106 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:14:37.982269   53106 start.go:494] detecting cgroup driver to use...
	I0401 19:14:37.982341   53106 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:14:38.007606   53106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:14:38.023114   53106 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:14:38.023189   53106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:14:38.039861   53106 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:14:38.056087   53106 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:14:38.195857   53106 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:14:38.387830   53106 docker.go:233] disabling docker service ...
	I0401 19:14:38.387891   53106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:14:38.404275   53106 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:14:38.419790   53106 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:14:38.558987   53106 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:14:38.706254   53106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:14:38.725050   53106 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:14:38.753528   53106 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0401 19:14:38.753594   53106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:14:38.769638   53106 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:14:38.769746   53106 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:14:38.784913   53106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:14:38.797089   53106 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:14:38.809233   53106 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
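The commands above first point crictl at the CRI-O socket via /etc/crictl.yaml, then edit the /etc/crio/crio.conf.d/02-crio.conf drop-in: the pause image is pinned to registry.k8s.io/pause:3.2, cgroup_manager is set to cgroupfs and conmon_cgroup to "pod". The following is only a rough Go equivalent of those tee/sed edits, not minikube's code, which shells the commands out over SSH; paths and values are taken from the log.

package main

import (
	"os"
	"regexp"
)

// Mirror the configuration steps in the log above: point crictl at the
// CRI-O socket, then pin the pause image and cgroup settings in the drop-in.
func main() {
	crictl := []byte("runtime-endpoint: unix:///var/run/crio/crio.sock\n")
	if err := os.WriteFile("/etc/crictl.yaml", crictl, 0644); err != nil {
		panic(err)
	}

	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(conf, data, 0644); err != nil {
		panic(err)
	}
}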
	I0401 19:14:38.822304   53106 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:14:38.832797   53106 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:14:38.832876   53106 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:14:38.848712   53106 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:14:38.863951   53106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:14:38.989632   53106 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:14:39.175888   53106 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:14:39.175962   53106 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:14:39.181934   53106 start.go:562] Will wait 60s for crictl version
	I0401 19:14:39.182001   53106 ssh_runner.go:195] Run: which crictl
	I0401 19:14:39.188747   53106 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:14:39.246539   53106 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
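After restarting CRI-O, the runner waits up to 60s for the /var/run/crio/crio.sock socket to appear and then up to 60s for crictl version to answer, as logged above. Below is a minimal polling sketch of the socket wait; waitForSocket is a hypothetical helper and the 500ms poll interval is an assumption.

package main

import (
	"fmt"
	"os"
	"time"
)

// Poll for the CRI-O socket with a deadline, mirroring the
// "Will wait 60s for socket path /var/run/crio/crio.sock" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}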
	I0401 19:14:39.246618   53106 ssh_runner.go:195] Run: crio --version
	I0401 19:14:39.283974   53106 ssh_runner.go:195] Run: crio --version
	I0401 19:14:39.321868   53106 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0401 19:14:39.323060   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetIP
	I0401 19:14:39.326490   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:39.326934   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:14:39.326964   53106 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:14:39.327214   53106 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0401 19:14:39.333153   53106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
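The bash one-liner above keeps the host.minikube.internal mapping idempotent: it filters out any existing line ending in that name, appends a fresh entry and copies the result back over /etc/hosts. The snippet below is a hypothetical Go version of the same rewrite; the path and entry text are taken from the log, and the real flow runs the bash pipeline on the guest instead.

package main

import (
	"os"
	"strings"
)

// Replace any existing "host.minikube.internal" line in /etc/hosts with a
// fresh mapping, the same idempotent rewrite the bash one-liner performs.
func main() {
	const hosts = "/etc/hosts"
	data, err := os.ReadFile(hosts)
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.50.1\thost.minikube.internal")
	if err := os.WriteFile(hosts, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}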
	I0401 19:14:39.348210   53106 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-054413 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-054413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.39 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:14:39.348343   53106 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 19:14:39.348413   53106 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:14:39.393241   53106 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 19:14:39.393319   53106 ssh_runner.go:195] Run: which lz4
	I0401 19:14:39.399188   53106 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 19:14:39.405244   53106 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:14:39.405272   53106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0401 19:14:41.574771   53106 crio.go:462] duration metric: took 2.175615694s to copy over tarball
	I0401 19:14:41.574849   53106 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 19:14:44.719369   53106 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.144484031s)
	I0401 19:14:44.719405   53106 crio.go:469] duration metric: took 3.144602943s to extract the tarball
	I0401 19:14:44.719414   53106 ssh_runner.go:146] rm: /preloaded.tar.lz4
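Because no preloaded images were found in the runtime, the ~473 MB preload tarball is copied to /preloaded.tar.lz4, unpacked into /var with tar using an lz4 decompressor while preserving security xattrs, and then removed, as the lines above show. The sketch below just shells out the same tar invocation locally; it assumes sudo, tar and lz4 are available and is not minikube's ssh_runner path.

package main

import (
	"os"
	"os/exec"
)

// Unpack a preload tarball the way the log above does: tar with an lz4
// decompressor, extracting into /var and keeping security xattrs.
func main() {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	_ = os.Remove("/preloaded.tar.lz4") // the runner removes the tarball afterwards
}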
	I0401 19:14:44.764342   53106 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:14:44.828677   53106 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 19:14:44.828708   53106 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 19:14:44.828827   53106 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:14:44.828854   53106 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:14:44.828863   53106 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:14:44.828855   53106 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:14:44.828828   53106 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:14:44.828831   53106 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0401 19:14:44.828859   53106 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:14:44.828832   53106 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0401 19:14:44.830631   53106 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:14:44.830636   53106 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:14:44.830671   53106 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:14:44.830681   53106 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0401 19:14:44.830685   53106 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:14:44.830732   53106 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:14:44.830733   53106 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:14:44.830807   53106 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0401 19:14:45.000326   53106 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:14:45.002355   53106 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0401 19:14:45.012712   53106 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0401 19:14:45.013853   53106 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:14:45.023733   53106 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:14:45.026487   53106 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:14:45.037310   53106 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0401 19:14:45.077714   53106 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0401 19:14:45.077758   53106 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:14:45.077803   53106 ssh_runner.go:195] Run: which crictl
	I0401 19:14:45.122746   53106 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:14:45.215222   53106 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0401 19:14:45.215267   53106 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:14:45.215316   53106 ssh_runner.go:195] Run: which crictl
	I0401 19:14:45.265697   53106 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0401 19:14:45.265801   53106 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0401 19:14:45.265830   53106 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0401 19:14:45.265832   53106 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:14:45.265882   53106 ssh_runner.go:195] Run: which crictl
	I0401 19:14:45.265922   53106 ssh_runner.go:195] Run: which crictl
	I0401 19:14:45.273841   53106 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0401 19:14:45.273889   53106 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:14:45.273926   53106 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0401 19:14:45.273970   53106 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:14:45.273990   53106 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0401 19:14:45.274012   53106 ssh_runner.go:195] Run: which crictl
	I0401 19:14:45.273936   53106 ssh_runner.go:195] Run: which crictl
	I0401 19:14:45.274016   53106 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0401 19:14:45.274068   53106 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:14:45.274085   53106 ssh_runner.go:195] Run: which crictl
	I0401 19:14:45.383128   53106 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 19:14:45.383148   53106 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 19:14:45.383215   53106 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:14:45.383236   53106 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:14:45.383240   53106 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:14:45.383328   53106 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 19:14:45.383401   53106 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0401 19:14:45.526456   53106 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0401 19:14:45.526515   53106 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0401 19:14:45.539250   53106 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0401 19:14:45.539343   53106 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0401 19:14:45.539366   53106 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0401 19:14:45.539348   53106 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0401 19:14:45.539414   53106 cache_images.go:92] duration metric: took 710.686895ms to LoadCachedImages
	W0401 19:14:45.539487   53106 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0401 19:14:45.539508   53106 kubeadm.go:928] updating node { 192.168.50.39 8443 v1.20.0 crio true true} ...
	I0401 19:14:45.539641   53106 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-054413 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-054413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:14:45.539725   53106 ssh_runner.go:195] Run: crio config
	I0401 19:14:45.592717   53106 cni.go:84] Creating CNI manager for ""
	I0401 19:14:45.592745   53106 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:14:45.592762   53106 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:14:45.592787   53106 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.39 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-054413 NodeName:kubernetes-upgrade-054413 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0401 19:14:45.592973   53106 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.39
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-054413"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
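The kubeadm config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is rendered from the parameters shown in the kubeadm.go:181 line. The snippet below is only an illustrative text/template sketch of how such a fragment could be produced from the node IP, API server port and Kubernetes version; it is not minikube's actual template, and only a few fields are reproduced.

package main

import (
	"os"
	"text/template"
)

// Render a fragment of a kubeadm ClusterConfiguration from cluster parameters,
// illustrating the templating step only; field values match the log above.
const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.NodeIP}}"]
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	params := struct {
		NodeIP, KubernetesVersion, PodSubnet, ServiceCIDR string
		APIServerPort                                     int
	}{"192.168.50.39", "v1.20.0", "10.244.0.0/16", "10.96.0.0/12", 8443}

	tmpl := template.Must(template.New("kubeadm").Parse(clusterCfg))
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}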
	
	I0401 19:14:45.593047   53106 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0401 19:14:45.607272   53106 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:14:45.607348   53106 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:14:45.620408   53106 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0401 19:14:45.643947   53106 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:14:45.666060   53106 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0401 19:14:45.687914   53106 ssh_runner.go:195] Run: grep 192.168.50.39	control-plane.minikube.internal$ /etc/hosts
	I0401 19:14:45.692826   53106 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.39	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:14:45.707832   53106 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:14:45.857911   53106 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:14:45.880714   53106 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413 for IP: 192.168.50.39
	I0401 19:14:45.880741   53106 certs.go:194] generating shared ca certs ...
	I0401 19:14:45.880761   53106 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:14:45.880940   53106 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:14:45.881004   53106 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:14:45.881016   53106 certs.go:256] generating profile certs ...
	I0401 19:14:45.881063   53106 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/client.key
	I0401 19:14:45.881079   53106 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/client.crt with IP's: []
	I0401 19:14:46.185246   53106 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/client.crt ...
	I0401 19:14:46.185276   53106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/client.crt: {Name:mk31b02bf279474f14dd268fe50cacc8818ba46f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:14:46.185452   53106 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/client.key ...
	I0401 19:14:46.185471   53106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/client.key: {Name:mk33aa33bb084c1bad8bee3d9c48c415f562b64d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:14:46.185573   53106 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/apiserver.key.d63685f6
	I0401 19:14:46.185594   53106 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/apiserver.crt.d63685f6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.39]
	I0401 19:14:46.571883   53106 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/apiserver.crt.d63685f6 ...
	I0401 19:14:46.571917   53106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/apiserver.crt.d63685f6: {Name:mk549039a81750ca4b433284735dc099fa6321fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:14:46.572099   53106 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/apiserver.key.d63685f6 ...
	I0401 19:14:46.572123   53106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/apiserver.key.d63685f6: {Name:mke8b47ca31b69bad303f871be6efc2c8d5649e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:14:46.572213   53106 certs.go:381] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/apiserver.crt.d63685f6 -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/apiserver.crt
	I0401 19:14:46.572303   53106 certs.go:385] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/apiserver.key.d63685f6 -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/apiserver.key
	I0401 19:14:46.572379   53106 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/proxy-client.key
	I0401 19:14:46.572401   53106 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/proxy-client.crt with IP's: []
	I0401 19:14:46.814882   53106 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/proxy-client.crt ...
	I0401 19:14:46.814911   53106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/proxy-client.crt: {Name:mk4e803b479aefb652688ff89adfa7439083edf8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:14:46.815069   53106 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/proxy-client.key ...
	I0401 19:14:46.815084   53106 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/proxy-client.key: {Name:mk0a1ba8be2b7de81dfa3d7d30991f30f963f568 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:14:46.815236   53106 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:14:46.815273   53106 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:14:46.815283   53106 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:14:46.815314   53106 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:14:46.815336   53106 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:14:46.815359   53106 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:14:46.815413   53106 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:14:46.815960   53106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:14:46.864772   53106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:14:46.924898   53106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:14:46.974649   53106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:14:47.012568   53106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0401 19:14:47.047814   53106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 19:14:47.082550   53106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:14:47.109851   53106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:14:47.140610   53106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:14:47.174588   53106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:14:47.210580   53106 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:14:47.241545   53106 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:14:47.266966   53106 ssh_runner.go:195] Run: openssl version
	I0401 19:14:47.273874   53106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:14:47.287241   53106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:14:47.292570   53106 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:14:47.292666   53106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:14:47.299636   53106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:14:47.317487   53106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:14:47.332550   53106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:14:47.339003   53106 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:14:47.339062   53106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:14:47.346619   53106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:14:47.364185   53106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:14:47.381845   53106 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:14:47.388752   53106 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:14:47.388815   53106 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:14:47.397697   53106 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
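The openssl x509 -hash calls above compute each certificate's subject hash so that a <hash>.0 symlink (3ec20f2e.0, b5213941.0 and 51391683.0 here) can be placed in /etc/ssl/certs, which is how OpenSSL locates CAs in a hashed directory. Below is a hypothetical sketch of that hash-then-symlink step; hashLink is an invented helper and the example cert path is taken from the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// Link a CA certificate into /etc/ssl/certs under its OpenSSL subject hash,
// mirroring the "openssl x509 -hash" plus "ln -fs" pair in the log above.
func hashLink(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs does
	return os.Symlink(certPath, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}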
	I0401 19:14:47.415463   53106 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:14:47.420563   53106 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 19:14:47.420629   53106 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-054413 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-054413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.39 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:14:47.420719   53106 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:14:47.420774   53106 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:14:47.469324   53106 cri.go:89] found id: ""
	I0401 19:14:47.469406   53106 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 19:14:47.483960   53106 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:14:47.500201   53106 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:14:47.515371   53106 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:14:47.515392   53106 kubeadm.go:156] found existing configuration files:
	
	I0401 19:14:47.515439   53106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:14:47.527117   53106 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:14:47.527184   53106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:14:47.542032   53106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:14:47.552995   53106 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:14:47.553049   53106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:14:47.565986   53106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:14:47.579392   53106 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:14:47.579453   53106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:14:47.596061   53106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:14:47.616674   53106 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:14:47.616743   53106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:14:47.631912   53106 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:14:47.796180   53106 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0401 19:14:47.796385   53106 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:14:47.964017   53106 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:14:47.964166   53106 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:14:47.964311   53106 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:14:48.188532   53106 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:14:48.190649   53106 out.go:204]   - Generating certificates and keys ...
	I0401 19:14:48.190754   53106 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:14:48.190867   53106 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:14:48.330633   53106 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 19:14:48.460545   53106 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0401 19:14:48.612632   53106 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0401 19:14:48.884826   53106 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0401 19:14:49.112897   53106 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0401 19:14:49.113554   53106 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-054413 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	I0401 19:14:49.264895   53106 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0401 19:14:49.265097   53106 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-054413 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	I0401 19:14:49.460033   53106 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 19:14:49.896314   53106 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 19:14:50.414951   53106 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0401 19:14:50.415429   53106 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:14:50.937232   53106 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:14:51.028668   53106 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:14:51.414859   53106 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:14:51.591740   53106 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:14:51.621628   53106 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:14:51.624855   53106 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:14:51.624925   53106 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:14:51.811692   53106 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:14:51.813423   53106 out.go:204]   - Booting up control plane ...
	I0401 19:14:51.813593   53106 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:14:51.827132   53106 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:14:51.833303   53106 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:14:51.836109   53106 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:14:51.841191   53106 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:15:31.794295   53106 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0401 19:15:31.794435   53106 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:15:31.794698   53106 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:15:36.802586   53106 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:15:36.803087   53106 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:15:46.803661   53106 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:15:46.803838   53106 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:16:06.804529   53106 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:16:06.804812   53106 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:16:46.811182   53106 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:16:46.811465   53106 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:16:46.811473   53106 kubeadm.go:309] 
	I0401 19:16:46.811507   53106 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0401 19:16:46.811548   53106 kubeadm.go:309] 		timed out waiting for the condition
	I0401 19:16:46.811555   53106 kubeadm.go:309] 
	I0401 19:16:46.811598   53106 kubeadm.go:309] 	This error is likely caused by:
	I0401 19:16:46.811638   53106 kubeadm.go:309] 		- The kubelet is not running
	I0401 19:16:46.811775   53106 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 19:16:46.811782   53106 kubeadm.go:309] 
	I0401 19:16:46.811912   53106 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 19:16:46.811957   53106 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0401 19:16:46.812000   53106 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0401 19:16:46.812006   53106 kubeadm.go:309] 
	I0401 19:16:46.812225   53106 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 19:16:46.812470   53106 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 19:16:46.812502   53106 kubeadm.go:309] 
	I0401 19:16:46.812723   53106 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 19:16:46.812846   53106 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 19:16:46.812950   53106 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0401 19:16:46.813100   53106 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 19:16:46.813126   53106 kubeadm.go:309] 
	I0401 19:16:46.813262   53106 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:16:46.813363   53106 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	W0401 19:16:46.813549   53106 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-054413 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-054413 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-054413 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-054413 localhost] and IPs [192.168.50.39 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
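The repeated [kubelet-check] failures above are plain HTTP probes against the kubelet's local health endpoint. A minimal sketch of running the same probe, and the follow-up checks kubeadm suggests, by hand on the node, assuming the default healthz port 10248 shown in the log:

	# Reproduce kubeadm's [kubelet-check] probe (default kubelet healthz port 10248).
	# "connection refused", as in the log above, means the kubelet is not listening.
	curl -sSL http://localhost:10248/healthz || echo "kubelet healthz not reachable"
	# Inspect the kubelet unit and its journal for the reason it is not running.
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 100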
	
	I0401 19:16:46.813600   53106 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:16:46.814030   53106 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0401 19:16:49.646258   53106 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.832633406s)
	I0401 19:16:49.646353   53106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:16:49.667526   53106 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:16:49.680612   53106 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:16:49.680641   53106 kubeadm.go:156] found existing configuration files:
	
	I0401 19:16:49.680714   53106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:16:49.697615   53106 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:16:49.697714   53106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:16:49.714608   53106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:16:49.729440   53106 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:16:49.729515   53106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:16:49.743075   53106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:16:49.758524   53106 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:16:49.758600   53106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:16:49.770813   53106 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:16:49.782165   53106 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:16:49.782251   53106 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:16:49.793931   53106 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:16:50.062872   53106 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:18:46.213856   53106 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 19:18:46.213954   53106 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0401 19:18:46.215899   53106 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0401 19:18:46.215961   53106 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:18:46.216048   53106 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:18:46.216161   53106 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:18:46.216282   53106 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:18:46.216357   53106 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:18:46.217871   53106 out.go:204]   - Generating certificates and keys ...
	I0401 19:18:46.217967   53106 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:18:46.218043   53106 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:18:46.218137   53106 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:18:46.218216   53106 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:18:46.218300   53106 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:18:46.218363   53106 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:18:46.218438   53106 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:18:46.218510   53106 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:18:46.218599   53106 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:18:46.218694   53106 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:18:46.218740   53106 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:18:46.218806   53106 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:18:46.218868   53106 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:18:46.218930   53106 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:18:46.219006   53106 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:18:46.219071   53106 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:18:46.219193   53106 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:18:46.219297   53106 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:18:46.219344   53106 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:18:46.219422   53106 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:18:46.221082   53106 out.go:204]   - Booting up control plane ...
	I0401 19:18:46.221192   53106 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:18:46.221292   53106 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:18:46.221371   53106 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:18:46.221468   53106 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:18:46.221674   53106 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:18:46.221733   53106 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0401 19:18:46.221813   53106 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:18:46.222039   53106 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:18:46.222119   53106 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:18:46.222344   53106 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:18:46.222424   53106 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:18:46.222642   53106 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:18:46.222723   53106 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:18:46.222941   53106 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:18:46.223021   53106 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:18:46.223245   53106 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:18:46.223251   53106 kubeadm.go:309] 
	I0401 19:18:46.223300   53106 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0401 19:18:46.223346   53106 kubeadm.go:309] 		timed out waiting for the condition
	I0401 19:18:46.223352   53106 kubeadm.go:309] 
	I0401 19:18:46.223394   53106 kubeadm.go:309] 	This error is likely caused by:
	I0401 19:18:46.223433   53106 kubeadm.go:309] 		- The kubelet is not running
	I0401 19:18:46.223557   53106 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 19:18:46.223562   53106 kubeadm.go:309] 
	I0401 19:18:46.223686   53106 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 19:18:46.223726   53106 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0401 19:18:46.223765   53106 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0401 19:18:46.223770   53106 kubeadm.go:309] 
	I0401 19:18:46.223893   53106 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 19:18:46.224003   53106 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 19:18:46.224009   53106 kubeadm.go:309] 
	I0401 19:18:46.224143   53106 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 19:18:46.224268   53106 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 19:18:46.224360   53106 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0401 19:18:46.224446   53106 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 19:18:46.224507   53106 kubeadm.go:393] duration metric: took 3m58.803881803s to StartCluster
	I0401 19:18:46.224549   53106 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:18:46.224600   53106 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:18:46.224658   53106 kubeadm.go:309] 
	I0401 19:18:46.314837   53106 cri.go:89] found id: ""
	I0401 19:18:46.314862   53106 logs.go:276] 0 containers: []
	W0401 19:18:46.314872   53106 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:18:46.314880   53106 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:18:46.314942   53106 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:18:46.377209   53106 cri.go:89] found id: ""
	I0401 19:18:46.377236   53106 logs.go:276] 0 containers: []
	W0401 19:18:46.377245   53106 logs.go:278] No container was found matching "etcd"
	I0401 19:18:46.377252   53106 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:18:46.377305   53106 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:18:46.435245   53106 cri.go:89] found id: ""
	I0401 19:18:46.435263   53106 logs.go:276] 0 containers: []
	W0401 19:18:46.435270   53106 logs.go:278] No container was found matching "coredns"
	I0401 19:18:46.435275   53106 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:18:46.435315   53106 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:18:46.488483   53106 cri.go:89] found id: ""
	I0401 19:18:46.488510   53106 logs.go:276] 0 containers: []
	W0401 19:18:46.488520   53106 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:18:46.488527   53106 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:18:46.488585   53106 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:18:46.537898   53106 cri.go:89] found id: ""
	I0401 19:18:46.537920   53106 logs.go:276] 0 containers: []
	W0401 19:18:46.537931   53106 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:18:46.537939   53106 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:18:46.537996   53106 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:18:46.597702   53106 cri.go:89] found id: ""
	I0401 19:18:46.597729   53106 logs.go:276] 0 containers: []
	W0401 19:18:46.597739   53106 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:18:46.597746   53106 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:18:46.597805   53106 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:18:46.641068   53106 cri.go:89] found id: ""
	I0401 19:18:46.641097   53106 logs.go:276] 0 containers: []
	W0401 19:18:46.641107   53106 logs.go:278] No container was found matching "kindnet"
	I0401 19:18:46.641118   53106 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:18:46.641134   53106 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:18:46.817049   53106 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:18:46.817135   53106 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:18:46.817165   53106 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:18:46.959631   53106 logs.go:123] Gathering logs for container status ...
	I0401 19:18:46.959741   53106 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:18:47.013716   53106 logs.go:123] Gathering logs for kubelet ...
	I0401 19:18:47.013754   53106 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:18:47.105671   53106 logs.go:123] Gathering logs for dmesg ...
	I0401 19:18:47.105725   53106 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
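At this point minikube falls back to gathering diagnostics. A sketch of the equivalent manual inspection, using the cri-o socket path and the crictl commands that the kubeadm error text above recommends:

	# List all Kubernetes containers known to cri-o (as suggested in the error text above).
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# If a failing container is found, read its logs; CONTAINERID is a placeholder.
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# The stderr WARNING above also notes the kubelet service is not enabled at boot:
	sudo systemctl enable kubelet.service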
	W0401 19:18:47.135193   53106 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0401 19:18:47.135244   53106 out.go:239] * 
	* 
	W0401 19:18:47.135304   53106 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 19:18:47.135338   53106 out.go:239] * 
	* 
	W0401 19:18:47.136486   53106 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
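The advice box points at the standard minikube log bundle; a sketch of collecting it for this run, with the profile name taken from the cluster name that appears in the log above:

	minikube logs --file=logs.txt -p kubernetes-upgrade-054413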
	I0401 19:18:47.140291   53106 out.go:177] 
	W0401 19:18:47.141702   53106 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 19:18:47.141765   53106 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0401 19:18:47.141791   53106 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0401 19:18:47.143297   53106 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-054413 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
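The kubeadm output above and minikube's suggestion point at the same triage path. A minimal shell sketch of those steps (the commands are the ones quoted in the log; the --extra-config retry is minikube's own suggestion and was not verified in this run):

    # inside the VM, e.g. via: out/minikube-linux-amd64 -p kubernetes-upgrade-054413 ssh
    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 50
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
    # back on the host, retry with the suggested kubelet cgroup driver
    out/minikube-linux-amd64 start -p kubernetes-upgrade-054413 --extra-config=kubelet.cgroup-driver=systemd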
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-054413
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-054413: (2.990038768s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-054413 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-054413 status --format={{.Host}}: exit status 7 (77.320835ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
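For reference, the same host check in shell (a sketch: the {{.Host}} Go template and the non-zero exit for a stopped profile are as shown above; assumes the script is not running under `set -e`):

    host_state=$(out/minikube-linux-amd64 -p kubernetes-upgrade-054413 status --format='{{.Host}}')
    rc=$?
    echo "host=${host_state} exit=${rc}"   # in this run: host=Stopped exit=7, which the test treats as acceptable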
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-054413 --memory=2200 --kubernetes-version=v1.30.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0401 19:18:52.854883   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 19:19:16.856838   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-054413 --memory=2200 --kubernetes-version=v1.30.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m34.182761543s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-054413 version --output=json
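A quick way to see what that version check returns on the upgraded cluster (a sketch; assumes `jq` is installed, which the test itself does not require):

    kubectl --context kubernetes-upgrade-054413 version --output=json \
      | jq -r '.serverVersion.gitVersion'   # expected after the upgrade: v1.30.0-rc.0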
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-054413 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-054413 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (100.612116ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-054413] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-rc.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-054413
	    minikube start -p kubernetes-upgrade-054413 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0544132 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-054413 --kubernetes-version=v1.30.0-rc.0
	    

                                                
                                                
** /stderr **
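The blocked downgrade exits with status 106 and the K8S_DOWNGRADE_UNSUPPORTED reason shown above. A sketch of how a wrapper could follow minikube's first suggestion (delete and recreate) when it hits that code; the profile name, versions and flags are the ones from this run, and the script is assumed not to run under `set -e`:

    out/minikube-linux-amd64 start -p kubernetes-upgrade-054413 --memory=2200 \
      --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio
    if [ $? -eq 106 ]; then
      # downgrade refused; recreate the cluster at the older version instead
      out/minikube-linux-amd64 delete -p kubernetes-upgrade-054413
      out/minikube-linux-amd64 start -p kubernetes-upgrade-054413 --kubernetes-version=v1.20.0 \
        --driver=kvm2 --container-runtime=crio
    fi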
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-054413 --memory=2200 --kubernetes-version=v1.30.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-054413 --memory=2200 --kubernetes-version=v1.30.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (49.658625158s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-04-01 19:21:14.29588414 +0000 UTC m=+4503.751435333
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-054413 -n kubernetes-upgrade-054413
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-054413 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-054413 logs -n 25: (1.861689728s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p custom-flannel-408543                             | custom-flannel-408543     | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC | 01 Apr 24 19:20 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-408543 sudo                        | custom-flannel-408543     | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC | 01 Apr 24 19:20 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |                |                     |                     |
	|         | --full --no-pager                                    |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-408543                             | custom-flannel-408543     | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC | 01 Apr 24 19:20 UTC |
	|         | sudo cat                                             |                           |         |                |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-408543                             | custom-flannel-408543     | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC | 01 Apr 24 19:20 UTC |
	|         | sudo cat                                             |                           |         |                |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-408543 sudo                        | custom-flannel-408543     | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |                |                     |                     |
	|         | --full --no-pager                                    |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-408543                             | custom-flannel-408543     | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC | 01 Apr 24 19:20 UTC |
	|         | sudo systemctl cat docker                            |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-408543 sudo                        | custom-flannel-408543     | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC | 01 Apr 24 19:20 UTC |
	|         | cat /etc/docker/daemon.json                          |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-408543 sudo                        | custom-flannel-408543     | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC |                     |
	|         | docker system info                                   |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-408543 sudo                        | custom-flannel-408543     | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |                |                     |                     |
	|         | --all --full --no-pager                              |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-408543                             | custom-flannel-408543     | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC | 01 Apr 24 19:20 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-408543 sudo cat                    | custom-flannel-408543     | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-408543 sudo cat                    | custom-flannel-408543     | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC | 01 Apr 24 19:20 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-408543 sudo                        | custom-flannel-408543     | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC | 01 Apr 24 19:20 UTC |
	|         | cri-dockerd --version                                |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-408543 sudo                        | custom-flannel-408543     | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC |                     |
	|         | systemctl status containerd                          |                           |         |                |                     |                     |
	|         | --all --full --no-pager                              |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-408543                             | custom-flannel-408543     | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC | 01 Apr 24 19:20 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-408543 sudo cat                    | custom-flannel-408543     | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC | 01 Apr 24 19:20 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-408543                             | custom-flannel-408543     | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC | 01 Apr 24 19:20 UTC |
	|         | sudo cat                                             |                           |         |                |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-408543 sudo                        | custom-flannel-408543     | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC | 01 Apr 24 19:20 UTC |
	|         | containerd config dump                               |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-408543 sudo                        | custom-flannel-408543     | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC | 01 Apr 24 19:20 UTC |
	|         | systemctl status crio --all                          |                           |         |                |                     |                     |
	|         | --full --no-pager                                    |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-408543 sudo                        | custom-flannel-408543     | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC | 01 Apr 24 19:20 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-408543 sudo                        | custom-flannel-408543     | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC | 01 Apr 24 19:20 UTC |
	|         | find /etc/crio -type f -exec                         |                           |         |                |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-408543 sudo                        | custom-flannel-408543     | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC | 01 Apr 24 19:20 UTC |
	|         | crio config                                          |                           |         |                |                     |                     |
	| delete  | -p custom-flannel-408543                             | custom-flannel-408543     | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC | 01 Apr 24 19:20 UTC |
	| start   | -p bridge-408543 --memory=3072                       | bridge-408543             | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |                |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |                |                     |                     |
	|         | --cni=bridge --driver=kvm2                           |                           |         |                |                     |                     |
	|         | --container-runtime=crio                             |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-408543                         | enable-default-cni-408543 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:20 UTC | 01 Apr 24 19:20 UTC |
	|         | pgrep -a kubelet                                     |                           |         |                |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 19:20:37
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 19:20:37.008754   63469 out.go:291] Setting OutFile to fd 1 ...
	I0401 19:20:37.009291   63469 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:20:37.009342   63469 out.go:304] Setting ErrFile to fd 2...
	I0401 19:20:37.009359   63469 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:20:37.009839   63469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 19:20:37.010713   63469 out.go:298] Setting JSON to false
	I0401 19:20:37.012010   63469 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7389,"bootTime":1711991848,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:20:37.012093   63469 start.go:139] virtualization: kvm guest
	I0401 19:20:37.014932   63469 out.go:177] * [bridge-408543] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 19:20:37.016486   63469 notify.go:220] Checking for updates...
	I0401 19:20:37.016496   63469 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 19:20:37.017882   63469 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:20:37.019192   63469 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:20:37.020412   63469 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:20:37.021588   63469 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 19:20:37.022796   63469 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 19:20:37.024604   63469 config.go:182] Loaded profile config "enable-default-cni-408543": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:20:37.024786   63469 config.go:182] Loaded profile config "flannel-408543": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:20:37.024911   63469 config.go:182] Loaded profile config "kubernetes-upgrade-054413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0401 19:20:37.025032   63469 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 19:20:37.071107   63469 out.go:177] * Using the kvm2 driver based on user configuration
	I0401 19:20:37.072449   63469 start.go:297] selected driver: kvm2
	I0401 19:20:37.072471   63469 start.go:901] validating driver "kvm2" against <nil>
	I0401 19:20:37.072486   63469 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 19:20:37.073505   63469 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:20:37.073630   63469 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 19:20:37.089816   63469 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 19:20:37.089891   63469 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 19:20:37.090171   63469 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:20:37.090252   63469 cni.go:84] Creating CNI manager for "bridge"
	I0401 19:20:37.090263   63469 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0401 19:20:37.090340   63469 start.go:340] cluster config:
	{Name:bridge-408543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-408543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:20:37.090488   63469 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:20:37.092904   63469 out.go:177] * Starting "bridge-408543" primary control-plane node in "bridge-408543" cluster
	I0401 19:20:34.714732   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:34.715178   61938 main.go:141] libmachine: (flannel-408543) DBG | unable to find current IP address of domain flannel-408543 in network mk-flannel-408543
	I0401 19:20:34.715203   61938 main.go:141] libmachine: (flannel-408543) DBG | I0401 19:20:34.715095   61960 retry.go:31] will retry after 2.778667292s: waiting for machine to come up
	I0401 19:20:37.496221   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:37.496821   61938 main.go:141] libmachine: (flannel-408543) DBG | unable to find current IP address of domain flannel-408543 in network mk-flannel-408543
	I0401 19:20:37.496851   61938 main.go:141] libmachine: (flannel-408543) DBG | I0401 19:20:37.496776   61960 retry.go:31] will retry after 3.264590019s: waiting for machine to come up
	I0401 19:20:34.188888   59634 pod_ready.go:102] pod "coredns-76f75df574-2t4d7" in "kube-system" namespace has status "Ready":"False"
	I0401 19:20:36.685541   59634 pod_ready.go:102] pod "coredns-76f75df574-2t4d7" in "kube-system" namespace has status "Ready":"False"
	I0401 19:20:38.686562   59634 pod_ready.go:102] pod "coredns-76f75df574-2t4d7" in "kube-system" namespace has status "Ready":"False"
	I0401 19:20:37.094386   63469 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 19:20:37.094429   63469 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0401 19:20:37.094442   63469 cache.go:56] Caching tarball of preloaded images
	I0401 19:20:37.094529   63469 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 19:20:37.094544   63469 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0401 19:20:37.094647   63469 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/config.json ...
	I0401 19:20:37.094670   63469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/config.json: {Name:mk3ac174876e644b90d422f85174db6236170831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:20:37.094824   63469 start.go:360] acquireMachinesLock for bridge-408543: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 19:20:40.764062   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:40.764591   61938 main.go:141] libmachine: (flannel-408543) DBG | unable to find current IP address of domain flannel-408543 in network mk-flannel-408543
	I0401 19:20:40.764641   61938 main.go:141] libmachine: (flannel-408543) DBG | I0401 19:20:40.764559   61960 retry.go:31] will retry after 3.407326598s: waiting for machine to come up
	I0401 19:20:41.184565   59634 pod_ready.go:102] pod "coredns-76f75df574-2t4d7" in "kube-system" namespace has status "Ready":"False"
	I0401 19:20:43.684239   59634 pod_ready.go:102] pod "coredns-76f75df574-2t4d7" in "kube-system" namespace has status "Ready":"False"
	I0401 19:20:44.174981   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:44.175478   61938 main.go:141] libmachine: (flannel-408543) DBG | unable to find current IP address of domain flannel-408543 in network mk-flannel-408543
	I0401 19:20:44.175500   61938 main.go:141] libmachine: (flannel-408543) DBG | I0401 19:20:44.175437   61960 retry.go:31] will retry after 3.765601847s: waiting for machine to come up
	I0401 19:20:47.942424   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:47.942959   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has current primary IP address 192.168.39.24 and MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:47.942998   61938 main.go:141] libmachine: (flannel-408543) Found IP for machine: 192.168.39.24
	I0401 19:20:47.943010   61938 main.go:141] libmachine: (flannel-408543) Reserving static IP address...
	I0401 19:20:47.943352   61938 main.go:141] libmachine: (flannel-408543) DBG | unable to find host DHCP lease matching {name: "flannel-408543", mac: "52:54:00:ee:c8:5a", ip: "192.168.39.24"} in network mk-flannel-408543
	I0401 19:20:48.016296   61938 main.go:141] libmachine: (flannel-408543) DBG | Getting to WaitForSSH function...
	I0401 19:20:48.016331   61938 main.go:141] libmachine: (flannel-408543) Reserved static IP address: 192.168.39.24
	I0401 19:20:48.016343   61938 main.go:141] libmachine: (flannel-408543) Waiting for SSH to be available...
	I0401 19:20:48.019264   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:48.019728   61938 main.go:141] libmachine: (flannel-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c8:5a", ip: ""} in network mk-flannel-408543: {Iface:virbr1 ExpiryTime:2024-04-01 20:20:41 +0000 UTC Type:0 Mac:52:54:00:ee:c8:5a Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ee:c8:5a}
	I0401 19:20:48.019762   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined IP address 192.168.39.24 and MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:48.019912   61938 main.go:141] libmachine: (flannel-408543) DBG | Using SSH client type: external
	I0401 19:20:48.019960   61938 main.go:141] libmachine: (flannel-408543) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/flannel-408543/id_rsa (-rw-------)
	I0401 19:20:48.019991   61938 main.go:141] libmachine: (flannel-408543) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.24 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/flannel-408543/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:20:48.020009   61938 main.go:141] libmachine: (flannel-408543) DBG | About to run SSH command:
	I0401 19:20:48.020026   61938 main.go:141] libmachine: (flannel-408543) DBG | exit 0
	I0401 19:20:48.153660   61938 main.go:141] libmachine: (flannel-408543) DBG | SSH cmd err, output: <nil>: 
	I0401 19:20:48.154008   61938 main.go:141] libmachine: (flannel-408543) KVM machine creation complete!
	I0401 19:20:48.154276   61938 main.go:141] libmachine: (flannel-408543) Calling .GetConfigRaw
	I0401 19:20:48.154866   61938 main.go:141] libmachine: (flannel-408543) Calling .DriverName
	I0401 19:20:48.155084   61938 main.go:141] libmachine: (flannel-408543) Calling .DriverName
	I0401 19:20:48.155271   61938 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0401 19:20:48.155283   61938 main.go:141] libmachine: (flannel-408543) Calling .GetState
	I0401 19:20:48.156848   61938 main.go:141] libmachine: Detecting operating system of created instance...
	I0401 19:20:48.156866   61938 main.go:141] libmachine: Waiting for SSH to be available...
	I0401 19:20:48.156874   61938 main.go:141] libmachine: Getting to WaitForSSH function...
	I0401 19:20:48.156882   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHHostname
	I0401 19:20:48.159478   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:48.159923   61938 main.go:141] libmachine: (flannel-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c8:5a", ip: ""} in network mk-flannel-408543: {Iface:virbr1 ExpiryTime:2024-04-01 20:20:41 +0000 UTC Type:0 Mac:52:54:00:ee:c8:5a Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:flannel-408543 Clientid:01:52:54:00:ee:c8:5a}
	I0401 19:20:48.159947   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined IP address 192.168.39.24 and MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:48.160074   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHPort
	I0401 19:20:48.160242   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHKeyPath
	I0401 19:20:48.160383   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHKeyPath
	I0401 19:20:48.160504   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHUsername
	I0401 19:20:48.160670   61938 main.go:141] libmachine: Using SSH client type: native
	I0401 19:20:48.160854   61938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0401 19:20:48.160866   61938 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0401 19:20:48.273235   61938 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:20:48.273260   61938 main.go:141] libmachine: Detecting the provisioner...
	I0401 19:20:48.273269   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHHostname
	I0401 19:20:48.276258   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:48.276841   61938 main.go:141] libmachine: (flannel-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c8:5a", ip: ""} in network mk-flannel-408543: {Iface:virbr1 ExpiryTime:2024-04-01 20:20:41 +0000 UTC Type:0 Mac:52:54:00:ee:c8:5a Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:flannel-408543 Clientid:01:52:54:00:ee:c8:5a}
	I0401 19:20:48.276875   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined IP address 192.168.39.24 and MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:48.276994   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHPort
	I0401 19:20:48.277224   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHKeyPath
	I0401 19:20:48.277389   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHKeyPath
	I0401 19:20:48.277557   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHUsername
	I0401 19:20:48.277761   61938 main.go:141] libmachine: Using SSH client type: native
	I0401 19:20:48.277933   61938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0401 19:20:48.277945   61938 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0401 19:20:48.390875   61938 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0401 19:20:48.390968   61938 main.go:141] libmachine: found compatible host: buildroot
	I0401 19:20:48.390983   61938 main.go:141] libmachine: Provisioning with buildroot...
	I0401 19:20:48.391000   61938 main.go:141] libmachine: (flannel-408543) Calling .GetMachineName
	I0401 19:20:48.391265   61938 buildroot.go:166] provisioning hostname "flannel-408543"
	I0401 19:20:48.391297   61938 main.go:141] libmachine: (flannel-408543) Calling .GetMachineName
	I0401 19:20:48.391482   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHHostname
	I0401 19:20:48.394024   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:48.394392   61938 main.go:141] libmachine: (flannel-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c8:5a", ip: ""} in network mk-flannel-408543: {Iface:virbr1 ExpiryTime:2024-04-01 20:20:41 +0000 UTC Type:0 Mac:52:54:00:ee:c8:5a Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:flannel-408543 Clientid:01:52:54:00:ee:c8:5a}
	I0401 19:20:48.394440   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined IP address 192.168.39.24 and MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:48.394590   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHPort
	I0401 19:20:48.394771   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHKeyPath
	I0401 19:20:48.394956   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHKeyPath
	I0401 19:20:48.395117   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHUsername
	I0401 19:20:48.395328   61938 main.go:141] libmachine: Using SSH client type: native
	I0401 19:20:48.395506   61938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0401 19:20:48.395519   61938 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-408543 && echo "flannel-408543" | sudo tee /etc/hostname
	I0401 19:20:48.525224   61938 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-408543
	
	I0401 19:20:48.525251   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHHostname
	I0401 19:20:48.528024   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:48.528384   61938 main.go:141] libmachine: (flannel-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c8:5a", ip: ""} in network mk-flannel-408543: {Iface:virbr1 ExpiryTime:2024-04-01 20:20:41 +0000 UTC Type:0 Mac:52:54:00:ee:c8:5a Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:flannel-408543 Clientid:01:52:54:00:ee:c8:5a}
	I0401 19:20:48.528413   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined IP address 192.168.39.24 and MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:48.528593   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHPort
	I0401 19:20:48.528769   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHKeyPath
	I0401 19:20:48.528894   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHKeyPath
	I0401 19:20:48.529105   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHUsername
	I0401 19:20:48.529267   61938 main.go:141] libmachine: Using SSH client type: native
	I0401 19:20:48.529427   61938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0401 19:20:48.529445   61938 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-408543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-408543/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-408543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:20:48.653169   61938 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:20:48.653198   61938 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:20:48.653220   61938 buildroot.go:174] setting up certificates
	I0401 19:20:48.653232   61938 provision.go:84] configureAuth start
	I0401 19:20:48.653244   61938 main.go:141] libmachine: (flannel-408543) Calling .GetMachineName
	I0401 19:20:48.653524   61938 main.go:141] libmachine: (flannel-408543) Calling .GetIP
	I0401 19:20:48.656176   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:48.656583   61938 main.go:141] libmachine: (flannel-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c8:5a", ip: ""} in network mk-flannel-408543: {Iface:virbr1 ExpiryTime:2024-04-01 20:20:41 +0000 UTC Type:0 Mac:52:54:00:ee:c8:5a Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:flannel-408543 Clientid:01:52:54:00:ee:c8:5a}
	I0401 19:20:48.656611   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined IP address 192.168.39.24 and MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:48.656707   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHHostname
	I0401 19:20:48.658894   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:48.659204   61938 main.go:141] libmachine: (flannel-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c8:5a", ip: ""} in network mk-flannel-408543: {Iface:virbr1 ExpiryTime:2024-04-01 20:20:41 +0000 UTC Type:0 Mac:52:54:00:ee:c8:5a Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:flannel-408543 Clientid:01:52:54:00:ee:c8:5a}
	I0401 19:20:48.659247   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined IP address 192.168.39.24 and MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:48.659396   61938 provision.go:143] copyHostCerts
	I0401 19:20:48.659443   61938 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:20:48.659452   61938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:20:48.659513   61938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:20:48.659613   61938 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:20:48.659621   61938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:20:48.659644   61938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:20:48.659713   61938 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:20:48.659721   61938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:20:48.659741   61938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:20:48.659799   61938 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.flannel-408543 san=[127.0.0.1 192.168.39.24 flannel-408543 localhost minikube]
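The server certificate above is generated by minikube's own Go code; purely as an illustration, an openssl equivalent that issues a cert signed by the same CA with the same org and SAN list (the CN and the roughly three-year validity are arbitrary choices here, not taken from the log):

    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.flannel-408543/CN=minikube"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 1095 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.24,DNS:flannel-408543,DNS:localhost,DNS:minikube')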
	I0401 19:20:48.746701   61938 provision.go:177] copyRemoteCerts
	I0401 19:20:48.746756   61938 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:20:48.746775   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHHostname
	I0401 19:20:48.749557   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:48.749941   61938 main.go:141] libmachine: (flannel-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c8:5a", ip: ""} in network mk-flannel-408543: {Iface:virbr1 ExpiryTime:2024-04-01 20:20:41 +0000 UTC Type:0 Mac:52:54:00:ee:c8:5a Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:flannel-408543 Clientid:01:52:54:00:ee:c8:5a}
	I0401 19:20:48.749962   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined IP address 192.168.39.24 and MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:48.750159   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHPort
	I0401 19:20:48.750332   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHKeyPath
	I0401 19:20:48.750459   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHUsername
	I0401 19:20:48.750602   61938 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/flannel-408543/id_rsa Username:docker}
	I0401 19:20:48.841595   61938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:20:46.183300   59634 pod_ready.go:102] pod "coredns-76f75df574-2t4d7" in "kube-system" namespace has status "Ready":"False"
	I0401 19:20:48.184504   59634 pod_ready.go:102] pod "coredns-76f75df574-2t4d7" in "kube-system" namespace has status "Ready":"False"
	I0401 19:20:49.483314   62058 start.go:364] duration metric: took 24.052182778s to acquireMachinesLock for "kubernetes-upgrade-054413"
	I0401 19:20:49.483366   62058 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:20:49.483378   62058 fix.go:54] fixHost starting: 
	I0401 19:20:49.483796   62058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:20:49.483843   62058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:20:49.500948   62058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43591
	I0401 19:20:49.501320   62058 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:20:49.501834   62058 main.go:141] libmachine: Using API Version  1
	I0401 19:20:49.501853   62058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:20:49.502235   62058 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:20:49.502472   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .DriverName
	I0401 19:20:49.502640   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetState
	I0401 19:20:49.504098   62058 fix.go:112] recreateIfNeeded on kubernetes-upgrade-054413: state=Running err=<nil>
	W0401 19:20:49.504117   62058 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:20:49.507723   62058 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-054413" VM ...
	I0401 19:20:49.509057   62058 machine.go:94] provisionDockerMachine start ...
	I0401 19:20:49.509076   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .DriverName
	I0401 19:20:49.509267   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHHostname
	I0401 19:20:49.511672   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:20:49.512171   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:20:49.512199   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:20:49.512327   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHPort
	I0401 19:20:49.512506   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:20:49.512640   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:20:49.512785   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHUsername
	I0401 19:20:49.512928   62058 main.go:141] libmachine: Using SSH client type: native
	I0401 19:20:49.513155   62058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I0401 19:20:49.513170   62058 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:20:49.631243   62058 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-054413
	
	I0401 19:20:49.631279   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetMachineName
	I0401 19:20:49.631533   62058 buildroot.go:166] provisioning hostname "kubernetes-upgrade-054413"
	I0401 19:20:49.631564   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetMachineName
	I0401 19:20:49.631762   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHHostname
	I0401 19:20:49.634658   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:20:49.635139   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:20:49.635171   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:20:49.635349   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHPort
	I0401 19:20:49.635511   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:20:49.635635   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:20:49.635754   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHUsername
	I0401 19:20:49.635936   62058 main.go:141] libmachine: Using SSH client type: native
	I0401 19:20:49.636146   62058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I0401 19:20:49.636162   62058 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-054413 && echo "kubernetes-upgrade-054413" | sudo tee /etc/hostname
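
	The provisioning steps above open a native SSH session to the guest (user docker, key under .minikube/machines/.../id_rsa) and first run hostname before rewriting /etc/hostname. A minimal sketch of that round trip with golang.org/x/crypto/ssh; the address and key path are taken from the log, and host-key checking is skipped as it would be only for a disposable test VM:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path and address as reported in the log above.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/18233-10493/.minikube/machines/kubernetes-upgrade-054413/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
		}
		client, err := ssh.Dial("tcp", "192.168.50.39:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		out, err := sess.Output("hostname") // same first command the provisioner runs
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("guest hostname: %s", out)
	}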
	I0401 19:20:48.871056   61938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0401 19:20:48.900145   61938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 19:20:48.928592   61938 provision.go:87] duration metric: took 275.338208ms to configureAuth
	I0401 19:20:48.928616   61938 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:20:48.928774   61938 config.go:182] Loaded profile config "flannel-408543": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:20:48.928863   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHHostname
	I0401 19:20:48.931584   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:48.931922   61938 main.go:141] libmachine: (flannel-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c8:5a", ip: ""} in network mk-flannel-408543: {Iface:virbr1 ExpiryTime:2024-04-01 20:20:41 +0000 UTC Type:0 Mac:52:54:00:ee:c8:5a Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:flannel-408543 Clientid:01:52:54:00:ee:c8:5a}
	I0401 19:20:48.931954   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined IP address 192.168.39.24 and MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:48.932130   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHPort
	I0401 19:20:48.932350   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHKeyPath
	I0401 19:20:48.932510   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHKeyPath
	I0401 19:20:48.932630   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHUsername
	I0401 19:20:48.932795   61938 main.go:141] libmachine: Using SSH client type: native
	I0401 19:20:48.932966   61938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0401 19:20:48.932980   61938 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:20:49.221765   61938 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:20:49.221800   61938 main.go:141] libmachine: Checking connection to Docker...
	I0401 19:20:49.221818   61938 main.go:141] libmachine: (flannel-408543) Calling .GetURL
	I0401 19:20:49.222930   61938 main.go:141] libmachine: (flannel-408543) DBG | Using libvirt version 6000000
	I0401 19:20:49.225041   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:49.225325   61938 main.go:141] libmachine: (flannel-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c8:5a", ip: ""} in network mk-flannel-408543: {Iface:virbr1 ExpiryTime:2024-04-01 20:20:41 +0000 UTC Type:0 Mac:52:54:00:ee:c8:5a Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:flannel-408543 Clientid:01:52:54:00:ee:c8:5a}
	I0401 19:20:49.225347   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined IP address 192.168.39.24 and MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:49.225534   61938 main.go:141] libmachine: Docker is up and running!
	I0401 19:20:49.225550   61938 main.go:141] libmachine: Reticulating splines...
	I0401 19:20:49.225558   61938 client.go:171] duration metric: took 25.272800534s to LocalClient.Create
	I0401 19:20:49.225581   61938 start.go:167] duration metric: took 25.272857332s to libmachine.API.Create "flannel-408543"
	I0401 19:20:49.225605   61938 start.go:293] postStartSetup for "flannel-408543" (driver="kvm2")
	I0401 19:20:49.225616   61938 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:20:49.225631   61938 main.go:141] libmachine: (flannel-408543) Calling .DriverName
	I0401 19:20:49.225899   61938 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:20:49.225927   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHHostname
	I0401 19:20:49.228084   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:49.228442   61938 main.go:141] libmachine: (flannel-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c8:5a", ip: ""} in network mk-flannel-408543: {Iface:virbr1 ExpiryTime:2024-04-01 20:20:41 +0000 UTC Type:0 Mac:52:54:00:ee:c8:5a Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:flannel-408543 Clientid:01:52:54:00:ee:c8:5a}
	I0401 19:20:49.228470   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined IP address 192.168.39.24 and MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:49.228585   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHPort
	I0401 19:20:49.228765   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHKeyPath
	I0401 19:20:49.228914   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHUsername
	I0401 19:20:49.229025   61938 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/flannel-408543/id_rsa Username:docker}
	I0401 19:20:49.317507   61938 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:20:49.322911   61938 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:20:49.322936   61938 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:20:49.322999   61938 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:20:49.323088   61938 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:20:49.323179   61938 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:20:49.333184   61938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:20:49.362528   61938 start.go:296] duration metric: took 136.907069ms for postStartSetup
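
	postStartSetup mirrors everything under .minikube/files into the guest at the same relative path, which is how files/etc/ssl/certs/177512.pem ends up in /etc/ssl/certs above. A small sketch of that scan, assuming the same local root as the log; the scp step itself is omitted and only the local-to-guest mapping is printed:

	package main

	import (
		"fmt"
		"io/fs"
		"log"
		"path/filepath"
		"strings"
	)

	func main() {
		// Local asset root from the log; every regular file below it maps to
		// the identical path inside the guest.
		root := "/home/jenkins/minikube-integration/18233-10493/.minikube/files"
		err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			guestPath := strings.TrimPrefix(path, root) // e.g. /etc/ssl/certs/177512.pem
			fmt.Printf("would scp %s --> %s\n", path, guestPath)
			return nil
		})
		if err != nil {
			log.Fatal(err)
		}
	}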
	I0401 19:20:49.362583   61938 main.go:141] libmachine: (flannel-408543) Calling .GetConfigRaw
	I0401 19:20:49.363239   61938 main.go:141] libmachine: (flannel-408543) Calling .GetIP
	I0401 19:20:49.366472   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:49.366884   61938 main.go:141] libmachine: (flannel-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c8:5a", ip: ""} in network mk-flannel-408543: {Iface:virbr1 ExpiryTime:2024-04-01 20:20:41 +0000 UTC Type:0 Mac:52:54:00:ee:c8:5a Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:flannel-408543 Clientid:01:52:54:00:ee:c8:5a}
	I0401 19:20:49.366904   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined IP address 192.168.39.24 and MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:49.367149   61938 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/config.json ...
	I0401 19:20:49.367318   61938 start.go:128] duration metric: took 25.435256756s to createHost
	I0401 19:20:49.367340   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHHostname
	I0401 19:20:49.369429   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:49.369764   61938 main.go:141] libmachine: (flannel-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c8:5a", ip: ""} in network mk-flannel-408543: {Iface:virbr1 ExpiryTime:2024-04-01 20:20:41 +0000 UTC Type:0 Mac:52:54:00:ee:c8:5a Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:flannel-408543 Clientid:01:52:54:00:ee:c8:5a}
	I0401 19:20:49.369790   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined IP address 192.168.39.24 and MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:49.369934   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHPort
	I0401 19:20:49.370128   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHKeyPath
	I0401 19:20:49.370302   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHKeyPath
	I0401 19:20:49.370452   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHUsername
	I0401 19:20:49.370613   61938 main.go:141] libmachine: Using SSH client type: native
	I0401 19:20:49.370816   61938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I0401 19:20:49.370835   61938 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 19:20:49.483173   61938 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999249.461065093
	
	I0401 19:20:49.483194   61938 fix.go:216] guest clock: 1711999249.461065093
	I0401 19:20:49.483203   61938 fix.go:229] Guest: 2024-04-01 19:20:49.461065093 +0000 UTC Remote: 2024-04-01 19:20:49.367329375 +0000 UTC m=+25.566345521 (delta=93.735718ms)
	I0401 19:20:49.483229   61938 fix.go:200] guest clock delta is within tolerance: 93.735718ms
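
	The fix.go lines above parse the guest's "date +%s.%N" output and only resync when its offset from the host clock exceeds a tolerance. A minimal Go sketch of that check, using the timestamp captured in the log and an assumed one-second threshold (minikube's actual tolerance may differ):

	package main

	import (
		"fmt"
		"log"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns the output of `date +%s.%N` into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1711999249.461065093") // value captured in the log above
		if err != nil {
			log.Fatal(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // illustrative threshold; the run above measured ~94ms
		if delta > tolerance {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync the guest clock\n", delta)
			return
		}
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}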
	I0401 19:20:49.483235   61938 start.go:83] releasing machines lock for "flannel-408543", held for 25.551296662s
	I0401 19:20:49.483267   61938 main.go:141] libmachine: (flannel-408543) Calling .DriverName
	I0401 19:20:49.483519   61938 main.go:141] libmachine: (flannel-408543) Calling .GetIP
	I0401 19:20:49.486240   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:49.486671   61938 main.go:141] libmachine: (flannel-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c8:5a", ip: ""} in network mk-flannel-408543: {Iface:virbr1 ExpiryTime:2024-04-01 20:20:41 +0000 UTC Type:0 Mac:52:54:00:ee:c8:5a Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:flannel-408543 Clientid:01:52:54:00:ee:c8:5a}
	I0401 19:20:49.486699   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined IP address 192.168.39.24 and MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:49.486923   61938 main.go:141] libmachine: (flannel-408543) Calling .DriverName
	I0401 19:20:49.487435   61938 main.go:141] libmachine: (flannel-408543) Calling .DriverName
	I0401 19:20:49.487623   61938 main.go:141] libmachine: (flannel-408543) Calling .DriverName
	I0401 19:20:49.487698   61938 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:20:49.487743   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHHostname
	I0401 19:20:49.487871   61938 ssh_runner.go:195] Run: cat /version.json
	I0401 19:20:49.487897   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHHostname
	I0401 19:20:49.490485   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:49.490751   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:49.490801   61938 main.go:141] libmachine: (flannel-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c8:5a", ip: ""} in network mk-flannel-408543: {Iface:virbr1 ExpiryTime:2024-04-01 20:20:41 +0000 UTC Type:0 Mac:52:54:00:ee:c8:5a Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:flannel-408543 Clientid:01:52:54:00:ee:c8:5a}
	I0401 19:20:49.490822   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined IP address 192.168.39.24 and MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:49.490945   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHPort
	I0401 19:20:49.491113   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHKeyPath
	I0401 19:20:49.491225   61938 main.go:141] libmachine: (flannel-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c8:5a", ip: ""} in network mk-flannel-408543: {Iface:virbr1 ExpiryTime:2024-04-01 20:20:41 +0000 UTC Type:0 Mac:52:54:00:ee:c8:5a Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:flannel-408543 Clientid:01:52:54:00:ee:c8:5a}
	I0401 19:20:49.491251   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHUsername
	I0401 19:20:49.491244   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined IP address 192.168.39.24 and MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:49.491388   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHPort
	I0401 19:20:49.491442   61938 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/flannel-408543/id_rsa Username:docker}
	I0401 19:20:49.491518   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHKeyPath
	I0401 19:20:49.491649   61938 main.go:141] libmachine: (flannel-408543) Calling .GetSSHUsername
	I0401 19:20:49.491792   61938 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/flannel-408543/id_rsa Username:docker}
	I0401 19:20:49.603713   61938 ssh_runner.go:195] Run: systemctl --version
	I0401 19:20:49.611119   61938 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:20:49.772714   61938 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:20:49.779940   61938 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:20:49.779995   61938 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:20:49.798386   61938 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:20:49.798405   61938 start.go:494] detecting cgroup driver to use...
	I0401 19:20:49.798455   61938 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:20:49.819992   61938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:20:49.835851   61938 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:20:49.835910   61938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:20:49.851240   61938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:20:49.866828   61938 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:20:49.994895   61938 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:20:50.174214   61938 docker.go:233] disabling docker service ...
	I0401 19:20:50.174302   61938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:20:50.196269   61938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:20:50.211992   61938 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:20:50.354178   61938 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:20:50.484325   61938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:20:50.501108   61938 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:20:50.522390   61938 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:20:50.522455   61938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:20:50.534843   61938 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:20:50.534934   61938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:20:50.548284   61938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:20:50.560163   61938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:20:50.572315   61938 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:20:50.585058   61938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:20:50.597428   61938 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:20:50.616455   61938 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:20:50.628020   61938 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:20:50.638704   61938 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:20:50.638764   61938 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:20:50.652466   61938 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:20:50.662564   61938 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:20:50.781919   61938 ssh_runner.go:195] Run: sudo systemctl restart crio
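
	The sed calls above pin the pause image and cgroup driver by rewriting whole lines of /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. The same edit expressed in Go, operating on a local copy of the file for illustration (the real run issues the sed commands over SSH as root):

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const path = "02-crio.conf" // local copy; minikube edits /etc/crio/crio.conf.d/02-crio.conf inside the guest
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		// Replace whichever pause_image / cgroup_manager lines already exist,
		// matching the effect of the sed invocations in the log.
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, data, 0o644); err != nil {
			log.Fatal(err)
		}
	}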
	I0401 19:20:50.932114   61938 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:20:50.932174   61938 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:20:50.937663   61938 start.go:562] Will wait 60s for crictl version
	I0401 19:20:50.937713   61938 ssh_runner.go:195] Run: which crictl
	I0401 19:20:50.942038   61938 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:20:50.982882   61938 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:20:50.982974   61938 ssh_runner.go:195] Run: crio --version
	I0401 19:20:51.014868   61938 ssh_runner.go:195] Run: crio --version
	I0401 19:20:51.045630   61938 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 19:20:51.047023   61938 main.go:141] libmachine: (flannel-408543) Calling .GetIP
	I0401 19:20:51.049873   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:51.050257   61938 main.go:141] libmachine: (flannel-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:c8:5a", ip: ""} in network mk-flannel-408543: {Iface:virbr1 ExpiryTime:2024-04-01 20:20:41 +0000 UTC Type:0 Mac:52:54:00:ee:c8:5a Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:flannel-408543 Clientid:01:52:54:00:ee:c8:5a}
	I0401 19:20:51.050285   61938 main.go:141] libmachine: (flannel-408543) DBG | domain flannel-408543 has defined IP address 192.168.39.24 and MAC address 52:54:00:ee:c8:5a in network mk-flannel-408543
	I0401 19:20:51.050487   61938 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 19:20:51.055147   61938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:20:51.069038   61938 kubeadm.go:877] updating cluster {Name:flannel-408543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:flannel-408543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:20:51.069132   61938 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 19:20:51.069177   61938 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:20:51.103713   61938 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0401 19:20:51.103793   61938 ssh_runner.go:195] Run: which lz4
	I0401 19:20:51.108340   61938 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 19:20:51.113195   61938 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:20:51.113233   61938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0401 19:20:52.793552   61938 crio.go:462] duration metric: took 1.685250301s to copy over tarball
	I0401 19:20:52.793652   61938 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
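
	Before copying the ~400 MB preload tarball, the code asks crictl for the image list and only falls back to the tarball when a pivotal image such as registry.k8s.io/kube-apiserver:v1.29.3 is missing, as seen above. A sketch of that check; the JSON field names are an assumption about the crictl output shape, not a documented contract:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// imageList models just the fields we need from `crictl images --output json`.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			log.Fatal(err)
		}
		want := "registry.k8s.io/kube-apiserver:v1.29.3"
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if tag == want {
					fmt.Println("preloaded image found, skipping tarball")
					return
				}
			}
		}
		fmt.Println("image missing, falling back to the preload tarball")
	}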
	I0401 19:20:50.184916   59634 pod_ready.go:102] pod "coredns-76f75df574-2t4d7" in "kube-system" namespace has status "Ready":"False"
	I0401 19:20:52.185553   59634 pod_ready.go:102] pod "coredns-76f75df574-2t4d7" in "kube-system" namespace has status "Ready":"False"
	I0401 19:20:49.767734   62058 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-054413
	
	I0401 19:20:49.767763   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHHostname
	I0401 19:20:49.770738   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:20:49.771145   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:20:49.771180   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:20:49.771439   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHPort
	I0401 19:20:49.771666   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:20:49.771847   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:20:49.771999   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHUsername
	I0401 19:20:49.772181   62058 main.go:141] libmachine: Using SSH client type: native
	I0401 19:20:49.772424   62058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I0401 19:20:49.772451   62058 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-054413' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-054413/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-054413' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:20:49.891373   62058 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:20:49.891401   62058 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:20:49.891442   62058 buildroot.go:174] setting up certificates
	I0401 19:20:49.891455   62058 provision.go:84] configureAuth start
	I0401 19:20:49.891473   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetMachineName
	I0401 19:20:49.891722   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetIP
	I0401 19:20:49.894631   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:20:49.895074   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:20:49.895102   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:20:49.895399   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHHostname
	I0401 19:20:49.897811   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:20:49.898206   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:20:49.898233   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:20:49.898385   62058 provision.go:143] copyHostCerts
	I0401 19:20:49.898442   62058 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:20:49.898453   62058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:20:49.898504   62058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:20:49.898592   62058 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:20:49.898602   62058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:20:49.898620   62058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:20:49.898667   62058 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:20:49.898674   62058 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:20:49.898690   62058 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:20:49.898745   62058 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-054413 san=[127.0.0.1 192.168.50.39 kubernetes-upgrade-054413 localhost minikube]
	I0401 19:20:50.094033   62058 provision.go:177] copyRemoteCerts
	I0401 19:20:50.094086   62058 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:20:50.094106   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHHostname
	I0401 19:20:50.096696   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:20:50.097038   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:20:50.097070   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:20:50.097283   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHPort
	I0401 19:20:50.097487   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:20:50.097669   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHUsername
	I0401 19:20:50.097860   62058 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/kubernetes-upgrade-054413/id_rsa Username:docker}
	I0401 19:20:50.193761   62058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:20:50.225179   62058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0401 19:20:50.256980   62058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 19:20:50.286812   62058 provision.go:87] duration metric: took 395.333333ms to configureAuth
	I0401 19:20:50.286849   62058 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:20:50.287114   62058 config.go:182] Loaded profile config "kubernetes-upgrade-054413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0401 19:20:50.287235   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHHostname
	I0401 19:20:50.290060   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:20:50.290430   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:20:50.290451   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:20:50.290644   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHPort
	I0401 19:20:50.290805   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:20:50.290957   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:20:50.291135   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHUsername
	I0401 19:20:50.291289   62058 main.go:141] libmachine: Using SSH client type: native
	I0401 19:20:50.291496   62058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I0401 19:20:50.291520   62058 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:20:57.611615   63469 start.go:364] duration metric: took 20.51676011s to acquireMachinesLock for "bridge-408543"
	I0401 19:20:57.611694   63469 start.go:93] Provisioning new machine with config: &{Name:bridge-408543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:bridge-408543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:20:57.611853   63469 start.go:125] createHost starting for "" (driver="kvm2")
	I0401 19:20:55.644637   61938 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.850960143s)
	I0401 19:20:55.644662   61938 crio.go:469] duration metric: took 2.851087138s to extract the tarball
	I0401 19:20:55.644669   61938 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 19:20:55.686359   61938 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:20:55.732047   61938 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:20:55.732068   61938 cache_images.go:84] Images are preloaded, skipping loading
	I0401 19:20:55.732075   61938 kubeadm.go:928] updating node { 192.168.39.24 8443 v1.29.3 crio true true} ...
	I0401 19:20:55.732182   61938 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-408543 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:flannel-408543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0401 19:20:55.732242   61938 ssh_runner.go:195] Run: crio config
	I0401 19:20:55.785846   61938 cni.go:84] Creating CNI manager for "flannel"
	I0401 19:20:55.785873   61938 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:20:55.785892   61938 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.24 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-408543 NodeName:flannel-408543 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:20:55.786017   61938 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-408543"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.24
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.24"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:20:55.786075   61938 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 19:20:55.797258   61938 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:20:55.797313   61938 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:20:55.808172   61938 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0401 19:20:55.831355   61938 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:20:55.855171   61938 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0401 19:20:55.875107   61938 ssh_runner.go:195] Run: grep 192.168.39.24	control-plane.minikube.internal$ /etc/hosts
	I0401 19:20:55.881343   61938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.24	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
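
	The grep -v / echo pipeline above drops any stale control-plane.minikube.internal entry from /etc/hosts and appends a fresh one for 192.168.39.24. The same edit in Go, writing the result to a scratch file because replacing /etc/hosts itself needs root (the log does that with sudo cp):

	package main

	import (
		"log"
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.39.24\tcontrol-plane.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				continue // drop the stale entry, like grep -v in the log
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			log.Fatal(err)
		}
		// A root step equivalent to `sudo cp /tmp/hosts.new /etc/hosts` would follow.
	}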
	I0401 19:20:55.900757   61938 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:20:56.047413   61938 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:20:56.068670   61938 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543 for IP: 192.168.39.24
	I0401 19:20:56.068695   61938 certs.go:194] generating shared ca certs ...
	I0401 19:20:56.068711   61938 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:20:56.068892   61938 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:20:56.068951   61938 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:20:56.068961   61938 certs.go:256] generating profile certs ...
	I0401 19:20:56.069037   61938 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.key
	I0401 19:20:56.069056   61938 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt with IP's: []
	I0401 19:20:56.260468   61938 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt ...
	I0401 19:20:56.260495   61938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt: {Name:mk71c46ba630389604778e3d3dc2d06b86bfe9bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:20:56.260650   61938 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.key ...
	I0401 19:20:56.260662   61938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.key: {Name:mk2dc96a4db91924e586e800712a526d3140ec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:20:56.260730   61938 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/apiserver.key.1d513e04
	I0401 19:20:56.260745   61938 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/apiserver.crt.1d513e04 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.24]
	I0401 19:20:56.372666   61938 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/apiserver.crt.1d513e04 ...
	I0401 19:20:56.372695   61938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/apiserver.crt.1d513e04: {Name:mk0fa79a0af8a65ed2f842b2ae0323cd9bc46116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:20:56.372841   61938 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/apiserver.key.1d513e04 ...
	I0401 19:20:56.372855   61938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/apiserver.key.1d513e04: {Name:mk0b7da146989f5309e2bd6614b0a37b7d594ec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:20:56.372961   61938 certs.go:381] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/apiserver.crt.1d513e04 -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/apiserver.crt
	I0401 19:20:56.373075   61938 certs.go:385] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/apiserver.key.1d513e04 -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/apiserver.key
	I0401 19:20:56.373137   61938 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/proxy-client.key
	I0401 19:20:56.373151   61938 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/proxy-client.crt with IP's: []
	I0401 19:20:56.500111   61938 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/proxy-client.crt ...
	I0401 19:20:56.500138   61938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/proxy-client.crt: {Name:mk15ed583a1cbe1d57b5c6adab85dd6114497926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:20:56.500294   61938 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/proxy-client.key ...
	I0401 19:20:56.500307   61938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/proxy-client.key: {Name:mk19f33d59fa0ffdb80f25d52af77dd5f9e324aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
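
	certs.go is issuing the profile certificates here: a fresh key pair signed by the existing minikubeCA, with the service IP, loopbacks and node IP from the log as SANs. A condensed sketch of that issuance with crypto/x509; the CA file names, subject and 26280h validity mirror the log, but the exact attributes minikube sets may differ:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Assumes a PKCS#1 RSA CA key pair on disk; ca.crt / ca.key are illustrative names.
		caPEM, err := os.ReadFile("ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		caKeyPEM, err := os.ReadFile("ca.key")
		if err != nil {
			log.Fatal(err)
		}
		caBlock, _ := pem.Decode(caPEM)
		caKeyBlock, _ := pem.Decode(caKeyPEM)
		if caBlock == nil || caKeyBlock == nil {
			log.Fatal("failed to decode CA PEM data")
		}
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		caKey, err := x509.ParsePKCS1PrivateKey(caKeyBlock.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{CommonName: "minikube"}, // illustrative subject
			// SANs reported for the apiserver profile cert in the log above.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.24"),
			},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	}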
	I0401 19:20:56.500474   61938 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:20:56.500508   61938 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:20:56.500518   61938 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:20:56.500543   61938 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:20:56.500564   61938 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:20:56.500587   61938 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:20:56.500627   61938 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:20:56.501223   61938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:20:56.534556   61938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:20:56.564765   61938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:20:56.596107   61938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:20:56.625608   61938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0401 19:20:56.655856   61938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:20:56.684125   61938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:20:56.713916   61938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:20:56.746283   61938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:20:56.795942   61938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:20:56.845676   61938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:20:56.878134   61938 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:20:56.897909   61938 ssh_runner.go:195] Run: openssl version
	I0401 19:20:56.904190   61938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:20:56.916998   61938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:20:56.921965   61938 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:20:56.922018   61938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:20:56.929049   61938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:20:56.943432   61938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:20:56.956050   61938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:20:56.961388   61938 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:20:56.961450   61938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:20:56.968076   61938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:20:56.982552   61938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:20:56.995038   61938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:20:57.000546   61938 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:20:57.000611   61938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:20:57.007118   61938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
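The block above installs each CA into /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject hash (17751.pem -> 51391683.0, 177512.pem -> 3ec20f2e.0, minikubeCA.pem -> b5213941.0). A minimal Go sketch of that step, shelling out to the same openssl invocation the log shows; running it locally instead of through minikube's SSH runner is a simplification for illustration only.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCACert computes the OpenSSL subject hash of a PEM certificate and
    // links it into /etc/ssl/certs/<hash>.0, mirroring the "openssl x509 -hash
    // -noout" plus "ln -fs" steps in the log above.
    func linkCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // emulate "ln -fs": replace any existing link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }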
	I0401 19:20:57.023562   61938 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:20:57.029390   61938 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 19:20:57.029448   61938 kubeadm.go:391] StartCluster: {Name:flannel-408543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-408543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:20:57.029547   61938 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:20:57.029585   61938 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:20:57.084254   61938 cri.go:89] found id: ""
	I0401 19:20:57.084329   61938 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 19:20:57.097012   61938 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:20:57.111814   61938 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:20:57.124337   61938 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:20:57.124357   61938 kubeadm.go:156] found existing configuration files:
	
	I0401 19:20:57.124410   61938 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:20:57.138181   61938 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:20:57.138258   61938 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:20:57.149916   61938 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:20:57.161463   61938 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:20:57.161532   61938 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:20:57.172362   61938 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:20:57.185150   61938 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:20:57.185211   61938 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:20:57.198021   61938 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:20:57.208927   61938 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:20:57.208989   61938 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:20:57.220022   61938 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:20:57.283392   61938 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 19:20:57.283468   61938 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:20:57.432072   61938 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:20:57.432260   61938 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:20:57.432398   61938 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:20:57.741359   61938 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
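At this point the flannel-408543 run hands off to kubeadm: the Start call at ssh_runner.go:286 executes `kubeadm init` with the generated /var/tmp/minikube/kubeadm.yaml and a long --ignore-preflight-errors list. A rough, local-only sketch of that invocation follows; the ignore list is abridged, and in reality minikube runs the version-pinned binary under /var/lib/minikube/binaries on the guest over SSH.

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Mirrors the "kubeadm init" invocation in the log above, run locally
        // purely for illustration.
        ignore := "DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem"
        cmd := exec.Command("kubeadm", "init",
            "--config", "/var/tmp/minikube/kubeadm.yaml",
            "--ignore-preflight-errors="+ignore)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            os.Exit(1)
        }
    }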
	I0401 19:20:54.188408   59634 pod_ready.go:102] pod "coredns-76f75df574-2t4d7" in "kube-system" namespace has status "Ready":"False"
	I0401 19:20:56.763204   59634 pod_ready.go:92] pod "coredns-76f75df574-2t4d7" in "kube-system" namespace has status "Ready":"True"
	I0401 19:20:56.763232   59634 pod_ready.go:81] duration metric: took 40.586111016s for pod "coredns-76f75df574-2t4d7" in "kube-system" namespace to be "Ready" ...
	I0401 19:20:56.763244   59634 pod_ready.go:78] waiting up to 15m0s for pod "coredns-76f75df574-tl592" in "kube-system" namespace to be "Ready" ...
	I0401 19:20:56.766236   59634 pod_ready.go:97] error getting pod "coredns-76f75df574-tl592" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-tl592" not found
	I0401 19:20:56.766264   59634 pod_ready.go:81] duration metric: took 3.012214ms for pod "coredns-76f75df574-tl592" in "kube-system" namespace to be "Ready" ...
	E0401 19:20:56.766276   59634 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-tl592" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-tl592" not found
	I0401 19:20:56.766285   59634 pod_ready.go:78] waiting up to 15m0s for pod "etcd-enable-default-cni-408543" in "kube-system" namespace to be "Ready" ...
	I0401 19:20:56.777780   59634 pod_ready.go:92] pod "etcd-enable-default-cni-408543" in "kube-system" namespace has status "Ready":"True"
	I0401 19:20:56.777809   59634 pod_ready.go:81] duration metric: took 11.51717ms for pod "etcd-enable-default-cni-408543" in "kube-system" namespace to be "Ready" ...
	I0401 19:20:56.777822   59634 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-408543" in "kube-system" namespace to be "Ready" ...
	I0401 19:20:56.785598   59634 pod_ready.go:92] pod "kube-apiserver-enable-default-cni-408543" in "kube-system" namespace has status "Ready":"True"
	I0401 19:20:56.785627   59634 pod_ready.go:81] duration metric: took 7.796179ms for pod "kube-apiserver-enable-default-cni-408543" in "kube-system" namespace to be "Ready" ...
	I0401 19:20:56.785653   59634 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-408543" in "kube-system" namespace to be "Ready" ...
	I0401 19:20:56.792651   59634 pod_ready.go:92] pod "kube-controller-manager-enable-default-cni-408543" in "kube-system" namespace has status "Ready":"True"
	I0401 19:20:56.792680   59634 pod_ready.go:81] duration metric: took 7.01781ms for pod "kube-controller-manager-enable-default-cni-408543" in "kube-system" namespace to be "Ready" ...
	I0401 19:20:56.792695   59634 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-wdzdc" in "kube-system" namespace to be "Ready" ...
	I0401 19:20:56.882417   59634 pod_ready.go:92] pod "kube-proxy-wdzdc" in "kube-system" namespace has status "Ready":"True"
	I0401 19:20:56.882444   59634 pod_ready.go:81] duration metric: took 89.741484ms for pod "kube-proxy-wdzdc" in "kube-system" namespace to be "Ready" ...
	I0401 19:20:56.882457   59634 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-408543" in "kube-system" namespace to be "Ready" ...
	I0401 19:20:57.281494   59634 pod_ready.go:92] pod "kube-scheduler-enable-default-cni-408543" in "kube-system" namespace has status "Ready":"True"
	I0401 19:20:57.281518   59634 pod_ready.go:81] duration metric: took 399.054126ms for pod "kube-scheduler-enable-default-cni-408543" in "kube-system" namespace to be "Ready" ...
	I0401 19:20:57.281526   59634 pod_ready.go:38] duration metric: took 41.116693636s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:20:57.281538   59634 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:20:57.281597   59634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:20:57.300586   59634 api_server.go:72] duration metric: took 41.910398191s to wait for apiserver process to appear ...
	I0401 19:20:57.300678   59634 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:20:57.300709   59634 api_server.go:253] Checking apiserver healthz at https://192.168.72.151:8443/healthz ...
	I0401 19:20:57.306312   59634 api_server.go:279] https://192.168.72.151:8443/healthz returned 200:
	ok
	I0401 19:20:57.308089   59634 api_server.go:141] control plane version: v1.29.3
	I0401 19:20:57.308128   59634 api_server.go:131] duration metric: took 7.436954ms to wait for apiserver health ...
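The healthz wait above is a plain poll of the apiserver endpoint until it answers 200 with body "ok". A sketch of the same loop, assuming the endpoint URL and a fixed timeout; certificate verification is skipped here only to keep the example short (minikube trusts the cluster CA instead).

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls /healthz until it returns 200/"ok" or the timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.72.151:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }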
	I0401 19:20:57.308139   59634 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:20:57.484223   59634 system_pods.go:59] 7 kube-system pods found
	I0401 19:20:57.484253   59634 system_pods.go:61] "coredns-76f75df574-2t4d7" [4bf92303-869a-4525-9a07-bf5ba8162545] Running
	I0401 19:20:57.484260   59634 system_pods.go:61] "etcd-enable-default-cni-408543" [e1d85d4d-7a7a-4b9f-92bc-31529945f97c] Running
	I0401 19:20:57.484266   59634 system_pods.go:61] "kube-apiserver-enable-default-cni-408543" [e0767706-1919-4bae-953e-25523e5d0f7c] Running
	I0401 19:20:57.484271   59634 system_pods.go:61] "kube-controller-manager-enable-default-cni-408543" [77c150b2-0c4b-46ae-8dfa-882857f3a4be] Running
	I0401 19:20:57.484274   59634 system_pods.go:61] "kube-proxy-wdzdc" [dc8f6d20-37ec-4020-b0c8-fd8fbaa5403a] Running
	I0401 19:20:57.484284   59634 system_pods.go:61] "kube-scheduler-enable-default-cni-408543" [e73e00a9-78bb-4708-ae0f-c2c2416aec85] Running
	I0401 19:20:57.484289   59634 system_pods.go:61] "storage-provisioner" [f86fd3e2-d755-4070-af07-afca97cb75ea] Running
	I0401 19:20:57.484298   59634 system_pods.go:74] duration metric: took 176.151872ms to wait for pod list to return data ...
	I0401 19:20:57.484306   59634 default_sa.go:34] waiting for default service account to be created ...
	I0401 19:20:57.682137   59634 default_sa.go:45] found service account: "default"
	I0401 19:20:57.682166   59634 default_sa.go:55] duration metric: took 197.851979ms for default service account to be created ...
	I0401 19:20:57.682177   59634 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 19:20:57.883869   59634 system_pods.go:86] 7 kube-system pods found
	I0401 19:20:57.883896   59634 system_pods.go:89] "coredns-76f75df574-2t4d7" [4bf92303-869a-4525-9a07-bf5ba8162545] Running
	I0401 19:20:57.883902   59634 system_pods.go:89] "etcd-enable-default-cni-408543" [e1d85d4d-7a7a-4b9f-92bc-31529945f97c] Running
	I0401 19:20:57.883918   59634 system_pods.go:89] "kube-apiserver-enable-default-cni-408543" [e0767706-1919-4bae-953e-25523e5d0f7c] Running
	I0401 19:20:57.883923   59634 system_pods.go:89] "kube-controller-manager-enable-default-cni-408543" [77c150b2-0c4b-46ae-8dfa-882857f3a4be] Running
	I0401 19:20:57.883931   59634 system_pods.go:89] "kube-proxy-wdzdc" [dc8f6d20-37ec-4020-b0c8-fd8fbaa5403a] Running
	I0401 19:20:57.883935   59634 system_pods.go:89] "kube-scheduler-enable-default-cni-408543" [e73e00a9-78bb-4708-ae0f-c2c2416aec85] Running
	I0401 19:20:57.883940   59634 system_pods.go:89] "storage-provisioner" [f86fd3e2-d755-4070-af07-afca97cb75ea] Running
	I0401 19:20:57.883948   59634 system_pods.go:126] duration metric: took 201.764086ms to wait for k8s-apps to be running ...
	I0401 19:20:57.883957   59634 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 19:20:57.884006   59634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:20:57.902295   59634 system_svc.go:56] duration metric: took 18.314609ms WaitForService to wait for kubelet
	I0401 19:20:57.902328   59634 kubeadm.go:576] duration metric: took 42.512144778s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:20:57.902351   59634 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:20:58.082666   59634 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:20:58.082699   59634 node_conditions.go:123] node cpu capacity is 2
	I0401 19:20:58.082712   59634 node_conditions.go:105] duration metric: took 180.355583ms to run NodePressure ...
	I0401 19:20:58.082727   59634 start.go:240] waiting for startup goroutines ...
	I0401 19:20:58.082737   59634 start.go:245] waiting for cluster config update ...
	I0401 19:20:58.082750   59634 start.go:254] writing updated cluster config ...
	I0401 19:20:58.083075   59634 ssh_runner.go:195] Run: rm -f paused
	I0401 19:20:58.153025   59634 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0401 19:20:58.155029   59634 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-408543" cluster and "default" namespace by default
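The final line reports the client/cluster comparison as "kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)". A small sketch of how such a skew number can be derived from two version strings; the parsing shown here is an assumption for illustration, not minikube's code.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor versions of a
    // kubectl client and a cluster, assuming plain "major.minor.patch" strings.
    func minorSkew(client, cluster string) (int, error) {
        minor := func(v string) (int, error) {
            parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
            if len(parts) < 2 {
                return 0, fmt.Errorf("unexpected version %q", v)
            }
            return strconv.Atoi(parts[1])
        }
        c, err := minor(client)
        if err != nil {
            return 0, err
        }
        s, err := minor(cluster)
        if err != nil {
            return 0, err
        }
        if c > s {
            return c - s, nil
        }
        return s - c, nil
    }

    func main() {
        skew, _ := minorSkew("1.29.3", "1.29.3")
        fmt.Println("minor skew:", skew) // 0
    }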
	I0401 19:20:57.744762   61938 out.go:204]   - Generating certificates and keys ...
	I0401 19:20:57.744862   61938 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:20:57.744930   61938 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:20:57.791994   61938 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 19:20:58.079443   61938 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0401 19:20:58.577839   61938 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
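The [certs] phase above reuses the existing CA and signs the per-component certificates with it. A compressed crypto/x509 sketch of that idea, generating a CA and an apiserver-kubelet-client style client certificate; field values are illustrative, real kubeadm sets more of them, and error handling is omitted for brevity.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        // CA key pair and self-signed CA certificate.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Client certificate in the style of apiserver-kubelet-client, signed by the CA.
        clientKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        clientTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "kube-apiserver-kubelet-client", Organization: []string{"system:masters"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
        }
        clientDER, _ := x509.CreateCertificate(rand.Reader, clientTmpl, caCert, &clientKey.PublicKey, caKey)

        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: clientDER})
    }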
	I0401 19:20:57.334796   62058 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:20:57.334825   62058 machine.go:97] duration metric: took 7.825753816s to provisionDockerMachine
	I0401 19:20:57.334849   62058 start.go:293] postStartSetup for "kubernetes-upgrade-054413" (driver="kvm2")
	I0401 19:20:57.334866   62058 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:20:57.334897   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .DriverName
	I0401 19:20:57.335295   62058 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:20:57.335341   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHHostname
	I0401 19:20:57.338342   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:20:57.338763   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:20:57.338790   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:20:57.339003   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHPort
	I0401 19:20:57.339185   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:20:57.339352   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHUsername
	I0401 19:20:57.339488   62058 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/kubernetes-upgrade-054413/id_rsa Username:docker}
	I0401 19:20:57.435580   62058 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:20:57.440457   62058 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:20:57.440484   62058 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:20:57.440569   62058 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:20:57.440707   62058 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:20:57.440837   62058 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:20:57.453578   62058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:20:57.484737   62058 start.go:296] duration metric: took 149.874747ms for postStartSetup
	I0401 19:20:57.484780   62058 fix.go:56] duration metric: took 8.00140269s for fixHost
	I0401 19:20:57.484805   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHHostname
	I0401 19:20:57.487197   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:20:57.487529   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:20:57.487558   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:20:57.487680   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHPort
	I0401 19:20:57.487885   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:20:57.488059   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:20:57.488235   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHUsername
	I0401 19:20:57.488419   62058 main.go:141] libmachine: Using SSH client type: native
	I0401 19:20:57.488605   62058 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.39 22 <nil> <nil>}
	I0401 19:20:57.488619   62058 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 19:20:57.611465   62058 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999257.607323180
	
	I0401 19:20:57.611488   62058 fix.go:216] guest clock: 1711999257.607323180
	I0401 19:20:57.611499   62058 fix.go:229] Guest: 2024-04-01 19:20:57.60732318 +0000 UTC Remote: 2024-04-01 19:20:57.484785801 +0000 UTC m=+32.844529444 (delta=122.537379ms)
	I0401 19:20:57.611548   62058 fix.go:200] guest clock delta is within tolerance: 122.537379ms
	I0401 19:20:57.611555   62058 start.go:83] releasing machines lock for "kubernetes-upgrade-054413", held for 8.128216205s
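The guest-clock check above sends `date +%s.%N` to the machine (the "%!s(MISSING).%!N(MISSING)" text is the logger mangling those format verbs), parses the reply, and accepts the host/guest delta if it is within tolerance. A local sketch of the parsing and tolerance check, with the SSH hop omitted:

    package main

    import (
        "fmt"
        "math"
        "os/exec"
        "strconv"
        "strings"
        "time"
    )

    // guestClockDelta runs `date +%s.%N`, parses the seconds.nanoseconds reply,
    // and compares it against the local clock.
    func guestClockDelta(tolerance time.Duration) (time.Duration, bool, error) {
        out, err := exec.Command("date", "+%s.%N").Output()
        if err != nil {
            return 0, false, err
        }
        secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
        if err != nil {
            return 0, false, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        within := math.Abs(float64(delta)) <= float64(tolerance)
        return delta, within, nil
    }

    func main() {
        delta, ok, err := guestClockDelta(2 * time.Second)
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }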
	I0401 19:20:57.611584   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .DriverName
	I0401 19:20:57.611914   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetIP
	I0401 19:20:57.615413   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:20:57.615841   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:20:57.615880   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:20:57.616063   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .DriverName
	I0401 19:20:57.616642   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .DriverName
	I0401 19:20:57.616861   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .DriverName
	I0401 19:20:57.616944   62058 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:20:57.616993   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHHostname
	I0401 19:20:57.617091   62058 ssh_runner.go:195] Run: cat /version.json
	I0401 19:20:57.617115   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHHostname
	I0401 19:20:57.620065   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:20:57.620377   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:20:57.620616   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:20:57.620642   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:20:57.620816   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:20:57.620855   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:20:57.621111   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHPort
	I0401 19:20:57.621176   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHPort
	I0401 19:20:57.621338   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:20:57.621340   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHKeyPath
	I0401 19:20:57.621531   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHUsername
	I0401 19:20:57.621533   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetSSHUsername
	I0401 19:20:57.621751   62058 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/kubernetes-upgrade-054413/id_rsa Username:docker}
	I0401 19:20:57.621858   62058 sshutil.go:53] new ssh client: &{IP:192.168.50.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/kubernetes-upgrade-054413/id_rsa Username:docker}
	I0401 19:20:57.711706   62058 ssh_runner.go:195] Run: systemctl --version
	I0401 19:20:57.735345   62058 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:20:57.907954   62058 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:20:57.920592   62058 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:20:57.920666   62058 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:20:57.936068   62058 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 19:20:57.936092   62058 start.go:494] detecting cgroup driver to use...
	I0401 19:20:57.936172   62058 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:20:57.955494   62058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:20:57.973176   62058 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:20:57.973242   62058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:20:57.990949   62058 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:20:58.007206   62058 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:20:58.297570   62058 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:20:58.726728   62058 docker.go:233] disabling docker service ...
	I0401 19:20:58.726800   62058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:20:59.050925   62058 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:20:59.175978   62058 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:20:59.652721   62058 ssh_runner.go:195] Run: sudo systemctl mask docker.service
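Before switching kubernetes-upgrade-054413 over to CRI-O, the run stops, disables, and masks the cri-docker and docker units. A sketch of that stop/disable/mask sequence as plain systemctl calls; minikube issues them over SSH and tolerates failures for units that are not installed.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // disableService mirrors the sequence applied to the cri-docker and docker
    // units above, leaving CRI-O as the only active container runtime.
    func disableService(unit string) {
        for _, args := range [][]string{
            {"systemctl", "stop", "-f", unit},
            {"systemctl", "disable", unit},
            {"systemctl", "mask", unit},
        } {
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                fmt.Printf("%v: %v (%s)\n", args, err, out)
            }
        }
    }

    func main() {
        for _, unit := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
            disableService(unit)
        }
    }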
	I0401 19:20:58.935690   61938 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0401 19:20:59.373757   61938 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0401 19:20:59.374247   61938 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [flannel-408543 localhost] and IPs [192.168.39.24 127.0.0.1 ::1]
	I0401 19:20:59.534510   61938 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0401 19:20:59.534931   61938 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [flannel-408543 localhost] and IPs [192.168.39.24 127.0.0.1 ::1]
	I0401 19:20:59.955926   61938 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 19:21:00.169863   61938 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 19:21:00.351319   61938 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0401 19:21:00.351812   61938 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:21:00.460716   61938 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:21:00.790894   61938 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 19:21:01.104917   61938 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:21:01.176172   61938 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:21:01.420187   61938 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:21:01.421670   61938 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:21:01.427062   61938 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:20:57.613898   63469 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0401 19:20:57.614119   63469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:20:57.614184   63469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:20:57.631751   63469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45627
	I0401 19:20:57.632323   63469 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:20:57.632890   63469 main.go:141] libmachine: Using API Version  1
	I0401 19:20:57.632914   63469 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:20:57.633314   63469 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:20:57.633509   63469 main.go:141] libmachine: (bridge-408543) Calling .GetMachineName
	I0401 19:20:57.633731   63469 main.go:141] libmachine: (bridge-408543) Calling .DriverName
	I0401 19:20:57.633911   63469 start.go:159] libmachine.API.Create for "bridge-408543" (driver="kvm2")
	I0401 19:20:57.633942   63469 client.go:168] LocalClient.Create starting
	I0401 19:20:57.633974   63469 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem
	I0401 19:20:57.634029   63469 main.go:141] libmachine: Decoding PEM data...
	I0401 19:20:57.634050   63469 main.go:141] libmachine: Parsing certificate...
	I0401 19:20:57.634112   63469 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem
	I0401 19:20:57.634140   63469 main.go:141] libmachine: Decoding PEM data...
	I0401 19:20:57.634161   63469 main.go:141] libmachine: Parsing certificate...
	I0401 19:20:57.634185   63469 main.go:141] libmachine: Running pre-create checks...
	I0401 19:20:57.634208   63469 main.go:141] libmachine: (bridge-408543) Calling .PreCreateCheck
	I0401 19:20:57.634558   63469 main.go:141] libmachine: (bridge-408543) Calling .GetConfigRaw
	I0401 19:20:57.634958   63469 main.go:141] libmachine: Creating machine...
	I0401 19:20:57.634974   63469 main.go:141] libmachine: (bridge-408543) Calling .Create
	I0401 19:20:57.635147   63469 main.go:141] libmachine: (bridge-408543) Creating KVM machine...
	I0401 19:20:57.636467   63469 main.go:141] libmachine: (bridge-408543) DBG | found existing default KVM network
	I0401 19:20:57.638251   63469 main.go:141] libmachine: (bridge-408543) DBG | I0401 19:20:57.638075   63637 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:66:e9:7e} reservation:<nil>}
	I0401 19:20:57.639278   63469 main.go:141] libmachine: (bridge-408543) DBG | I0401 19:20:57.639159   63637 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:9a:9e:e2} reservation:<nil>}
	I0401 19:20:57.640695   63469 main.go:141] libmachine: (bridge-408543) DBG | I0401 19:20:57.640608   63637 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00030a930}
	I0401 19:20:57.640750   63469 main.go:141] libmachine: (bridge-408543) DBG | created network xml: 
	I0401 19:20:57.640774   63469 main.go:141] libmachine: (bridge-408543) DBG | <network>
	I0401 19:20:57.640787   63469 main.go:141] libmachine: (bridge-408543) DBG |   <name>mk-bridge-408543</name>
	I0401 19:20:57.640799   63469 main.go:141] libmachine: (bridge-408543) DBG |   <dns enable='no'/>
	I0401 19:20:57.640821   63469 main.go:141] libmachine: (bridge-408543) DBG |   
	I0401 19:20:57.640836   63469 main.go:141] libmachine: (bridge-408543) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0401 19:20:57.640850   63469 main.go:141] libmachine: (bridge-408543) DBG |     <dhcp>
	I0401 19:20:57.640873   63469 main.go:141] libmachine: (bridge-408543) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0401 19:20:57.640886   63469 main.go:141] libmachine: (bridge-408543) DBG |     </dhcp>
	I0401 19:20:57.640897   63469 main.go:141] libmachine: (bridge-408543) DBG |   </ip>
	I0401 19:20:57.640910   63469 main.go:141] libmachine: (bridge-408543) DBG |   
	I0401 19:20:57.640923   63469 main.go:141] libmachine: (bridge-408543) DBG | </network>
	I0401 19:20:57.640936   63469 main.go:141] libmachine: (bridge-408543) DBG | 
	I0401 19:20:57.646594   63469 main.go:141] libmachine: (bridge-408543) DBG | trying to create private KVM network mk-bridge-408543 192.168.61.0/24...
	I0401 19:20:57.728039   63469 main.go:141] libmachine: (bridge-408543) DBG | private KVM network mk-bridge-408543 192.168.61.0/24 created
	I0401 19:20:57.728113   63469 main.go:141] libmachine: (bridge-408543) Setting up store path in /home/jenkins/minikube-integration/18233-10493/.minikube/machines/bridge-408543 ...
	I0401 19:20:57.728220   63469 main.go:141] libmachine: (bridge-408543) Building disk image from file:///home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso
	I0401 19:20:57.728246   63469 main.go:141] libmachine: (bridge-408543) DBG | I0401 19:20:57.728187   63637 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:20:57.728452   63469 main.go:141] libmachine: (bridge-408543) Downloading /home/jenkins/minikube-integration/18233-10493/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0401 19:20:57.975876   63469 main.go:141] libmachine: (bridge-408543) DBG | I0401 19:20:57.975768   63637 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/bridge-408543/id_rsa...
	I0401 19:20:58.020133   63469 main.go:141] libmachine: (bridge-408543) DBG | I0401 19:20:58.020009   63637 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/bridge-408543/bridge-408543.rawdisk...
	I0401 19:20:58.020175   63469 main.go:141] libmachine: (bridge-408543) DBG | Writing magic tar header
	I0401 19:20:58.020208   63469 main.go:141] libmachine: (bridge-408543) DBG | Writing SSH key tar header
	I0401 19:20:58.020258   63469 main.go:141] libmachine: (bridge-408543) DBG | I0401 19:20:58.020158   63637 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18233-10493/.minikube/machines/bridge-408543 ...
	I0401 19:20:58.020278   63469 main.go:141] libmachine: (bridge-408543) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/bridge-408543
	I0401 19:20:58.020319   63469 main.go:141] libmachine: (bridge-408543) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube/machines/bridge-408543 (perms=drwx------)
	I0401 19:20:58.020347   63469 main.go:141] libmachine: (bridge-408543) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube/machines
	I0401 19:20:58.020362   63469 main.go:141] libmachine: (bridge-408543) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube/machines (perms=drwxr-xr-x)
	I0401 19:20:58.020378   63469 main.go:141] libmachine: (bridge-408543) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:20:58.020395   63469 main.go:141] libmachine: (bridge-408543) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493
	I0401 19:20:58.020410   63469 main.go:141] libmachine: (bridge-408543) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0401 19:20:58.020435   63469 main.go:141] libmachine: (bridge-408543) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube (perms=drwxr-xr-x)
	I0401 19:20:58.020448   63469 main.go:141] libmachine: (bridge-408543) DBG | Checking permissions on dir: /home/jenkins
	I0401 19:20:58.020463   63469 main.go:141] libmachine: (bridge-408543) DBG | Checking permissions on dir: /home
	I0401 19:20:58.020475   63469 main.go:141] libmachine: (bridge-408543) DBG | Skipping /home - not owner
	I0401 19:20:58.020487   63469 main.go:141] libmachine: (bridge-408543) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493 (perms=drwxrwxr-x)
	I0401 19:20:58.020501   63469 main.go:141] libmachine: (bridge-408543) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0401 19:20:58.020516   63469 main.go:141] libmachine: (bridge-408543) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0401 19:20:58.020537   63469 main.go:141] libmachine: (bridge-408543) Creating domain...
	I0401 19:20:58.021684   63469 main.go:141] libmachine: (bridge-408543) define libvirt domain using xml: 
	I0401 19:20:58.021707   63469 main.go:141] libmachine: (bridge-408543) <domain type='kvm'>
	I0401 19:20:58.021719   63469 main.go:141] libmachine: (bridge-408543)   <name>bridge-408543</name>
	I0401 19:20:58.021727   63469 main.go:141] libmachine: (bridge-408543)   <memory unit='MiB'>3072</memory>
	I0401 19:20:58.021738   63469 main.go:141] libmachine: (bridge-408543)   <vcpu>2</vcpu>
	I0401 19:20:58.021749   63469 main.go:141] libmachine: (bridge-408543)   <features>
	I0401 19:20:58.021758   63469 main.go:141] libmachine: (bridge-408543)     <acpi/>
	I0401 19:20:58.021769   63469 main.go:141] libmachine: (bridge-408543)     <apic/>
	I0401 19:20:58.021778   63469 main.go:141] libmachine: (bridge-408543)     <pae/>
	I0401 19:20:58.021791   63469 main.go:141] libmachine: (bridge-408543)     
	I0401 19:20:58.021802   63469 main.go:141] libmachine: (bridge-408543)   </features>
	I0401 19:20:58.021819   63469 main.go:141] libmachine: (bridge-408543)   <cpu mode='host-passthrough'>
	I0401 19:20:58.021836   63469 main.go:141] libmachine: (bridge-408543)   
	I0401 19:20:58.021843   63469 main.go:141] libmachine: (bridge-408543)   </cpu>
	I0401 19:20:58.021850   63469 main.go:141] libmachine: (bridge-408543)   <os>
	I0401 19:20:58.021860   63469 main.go:141] libmachine: (bridge-408543)     <type>hvm</type>
	I0401 19:20:58.021890   63469 main.go:141] libmachine: (bridge-408543)     <boot dev='cdrom'/>
	I0401 19:20:58.021914   63469 main.go:141] libmachine: (bridge-408543)     <boot dev='hd'/>
	I0401 19:20:58.021924   63469 main.go:141] libmachine: (bridge-408543)     <bootmenu enable='no'/>
	I0401 19:20:58.021933   63469 main.go:141] libmachine: (bridge-408543)   </os>
	I0401 19:20:58.021941   63469 main.go:141] libmachine: (bridge-408543)   <devices>
	I0401 19:20:58.021952   63469 main.go:141] libmachine: (bridge-408543)     <disk type='file' device='cdrom'>
	I0401 19:20:58.021978   63469 main.go:141] libmachine: (bridge-408543)       <source file='/home/jenkins/minikube-integration/18233-10493/.minikube/machines/bridge-408543/boot2docker.iso'/>
	I0401 19:20:58.022011   63469 main.go:141] libmachine: (bridge-408543)       <target dev='hdc' bus='scsi'/>
	I0401 19:20:58.022025   63469 main.go:141] libmachine: (bridge-408543)       <readonly/>
	I0401 19:20:58.022037   63469 main.go:141] libmachine: (bridge-408543)     </disk>
	I0401 19:20:58.022048   63469 main.go:141] libmachine: (bridge-408543)     <disk type='file' device='disk'>
	I0401 19:20:58.022062   63469 main.go:141] libmachine: (bridge-408543)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0401 19:20:58.022079   63469 main.go:141] libmachine: (bridge-408543)       <source file='/home/jenkins/minikube-integration/18233-10493/.minikube/machines/bridge-408543/bridge-408543.rawdisk'/>
	I0401 19:20:58.022096   63469 main.go:141] libmachine: (bridge-408543)       <target dev='hda' bus='virtio'/>
	I0401 19:20:58.022124   63469 main.go:141] libmachine: (bridge-408543)     </disk>
	I0401 19:20:58.022135   63469 main.go:141] libmachine: (bridge-408543)     <interface type='network'>
	I0401 19:20:58.022146   63469 main.go:141] libmachine: (bridge-408543)       <source network='mk-bridge-408543'/>
	I0401 19:20:58.022165   63469 main.go:141] libmachine: (bridge-408543)       <model type='virtio'/>
	I0401 19:20:58.022181   63469 main.go:141] libmachine: (bridge-408543)     </interface>
	I0401 19:20:58.022196   63469 main.go:141] libmachine: (bridge-408543)     <interface type='network'>
	I0401 19:20:58.022207   63469 main.go:141] libmachine: (bridge-408543)       <source network='default'/>
	I0401 19:20:58.022218   63469 main.go:141] libmachine: (bridge-408543)       <model type='virtio'/>
	I0401 19:20:58.022228   63469 main.go:141] libmachine: (bridge-408543)     </interface>
	I0401 19:20:58.022238   63469 main.go:141] libmachine: (bridge-408543)     <serial type='pty'>
	I0401 19:20:58.022245   63469 main.go:141] libmachine: (bridge-408543)       <target port='0'/>
	I0401 19:20:58.022262   63469 main.go:141] libmachine: (bridge-408543)     </serial>
	I0401 19:20:58.022283   63469 main.go:141] libmachine: (bridge-408543)     <console type='pty'>
	I0401 19:20:58.022302   63469 main.go:141] libmachine: (bridge-408543)       <target type='serial' port='0'/>
	I0401 19:20:58.022322   63469 main.go:141] libmachine: (bridge-408543)     </console>
	I0401 19:20:58.022333   63469 main.go:141] libmachine: (bridge-408543)     <rng model='virtio'>
	I0401 19:20:58.022358   63469 main.go:141] libmachine: (bridge-408543)       <backend model='random'>/dev/random</backend>
	I0401 19:20:58.022374   63469 main.go:141] libmachine: (bridge-408543)     </rng>
	I0401 19:20:58.022384   63469 main.go:141] libmachine: (bridge-408543)     
	I0401 19:20:58.022400   63469 main.go:141] libmachine: (bridge-408543)     
	I0401 19:20:58.022413   63469 main.go:141] libmachine: (bridge-408543)   </devices>
	I0401 19:20:58.022425   63469 main.go:141] libmachine: (bridge-408543) </domain>
	I0401 19:20:58.022431   63469 main.go:141] libmachine: (bridge-408543) 
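The kvm2 driver defines the bridge-408543 domain from the XML printed above via the libvirt Go bindings. A rough equivalent using the virsh CLI instead, shown only to illustrate the define-then-start flow; the CLI path is an assumption for this sketch, not what the driver actually calls.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // defineAndStart writes a libvirt domain XML (such as the one printed in the
    // log above) to disk, then defines and starts it with virsh.
    func defineAndStart(name, domainXML string) error {
        path := "/tmp/" + name + ".xml"
        if err := os.WriteFile(path, []byte(domainXML), 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"virsh", "--connect", "qemu:///system", "define", path},
            {"virsh", "--connect", "qemu:///system", "start", name},
        } {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        // domainXML would hold the full <domain type='kvm'>...</domain> document
        // from the log; the placeholder here is deliberately incomplete.
        if err := defineAndStart("bridge-408543", "<domain type='kvm'>...</domain>"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }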
	I0401 19:20:58.026939   63469 main.go:141] libmachine: (bridge-408543) DBG | domain bridge-408543 has defined MAC address 52:54:00:90:24:74 in network default
	I0401 19:20:58.027595   63469 main.go:141] libmachine: (bridge-408543) Ensuring networks are active...
	I0401 19:20:58.027618   63469 main.go:141] libmachine: (bridge-408543) DBG | domain bridge-408543 has defined MAC address 52:54:00:68:63:60 in network mk-bridge-408543
	I0401 19:20:58.028412   63469 main.go:141] libmachine: (bridge-408543) Ensuring network default is active
	I0401 19:20:58.028766   63469 main.go:141] libmachine: (bridge-408543) Ensuring network mk-bridge-408543 is active
	I0401 19:20:58.029266   63469 main.go:141] libmachine: (bridge-408543) Getting domain xml...
	I0401 19:20:58.030036   63469 main.go:141] libmachine: (bridge-408543) Creating domain...
	I0401 19:20:59.602698   63469 main.go:141] libmachine: (bridge-408543) Waiting to get IP...
	I0401 19:20:59.603684   63469 main.go:141] libmachine: (bridge-408543) DBG | domain bridge-408543 has defined MAC address 52:54:00:68:63:60 in network mk-bridge-408543
	I0401 19:20:59.604269   63469 main.go:141] libmachine: (bridge-408543) DBG | unable to find current IP address of domain bridge-408543 in network mk-bridge-408543
	I0401 19:20:59.604299   63469 main.go:141] libmachine: (bridge-408543) DBG | I0401 19:20:59.604253   63637 retry.go:31] will retry after 258.880891ms: waiting for machine to come up
	I0401 19:20:59.864950   63469 main.go:141] libmachine: (bridge-408543) DBG | domain bridge-408543 has defined MAC address 52:54:00:68:63:60 in network mk-bridge-408543
	I0401 19:20:59.865691   63469 main.go:141] libmachine: (bridge-408543) DBG | unable to find current IP address of domain bridge-408543 in network mk-bridge-408543
	I0401 19:20:59.865728   63469 main.go:141] libmachine: (bridge-408543) DBG | I0401 19:20:59.865625   63637 retry.go:31] will retry after 384.277186ms: waiting for machine to come up
	I0401 19:21:00.252070   63469 main.go:141] libmachine: (bridge-408543) DBG | domain bridge-408543 has defined MAC address 52:54:00:68:63:60 in network mk-bridge-408543
	I0401 19:21:00.252760   63469 main.go:141] libmachine: (bridge-408543) DBG | unable to find current IP address of domain bridge-408543 in network mk-bridge-408543
	I0401 19:21:00.252786   63469 main.go:141] libmachine: (bridge-408543) DBG | I0401 19:21:00.252731   63637 retry.go:31] will retry after 463.883679ms: waiting for machine to come up
	I0401 19:21:00.718552   63469 main.go:141] libmachine: (bridge-408543) DBG | domain bridge-408543 has defined MAC address 52:54:00:68:63:60 in network mk-bridge-408543
	I0401 19:21:00.719103   63469 main.go:141] libmachine: (bridge-408543) DBG | unable to find current IP address of domain bridge-408543 in network mk-bridge-408543
	I0401 19:21:00.719124   63469 main.go:141] libmachine: (bridge-408543) DBG | I0401 19:21:00.719038   63637 retry.go:31] will retry after 438.02871ms: waiting for machine to come up
	I0401 19:21:01.159997   63469 main.go:141] libmachine: (bridge-408543) DBG | domain bridge-408543 has defined MAC address 52:54:00:68:63:60 in network mk-bridge-408543
	I0401 19:21:01.160837   63469 main.go:141] libmachine: (bridge-408543) DBG | unable to find current IP address of domain bridge-408543 in network mk-bridge-408543
	I0401 19:21:01.160866   63469 main.go:141] libmachine: (bridge-408543) DBG | I0401 19:21:01.160831   63637 retry.go:31] will retry after 523.869802ms: waiting for machine to come up
	I0401 19:21:01.686736   63469 main.go:141] libmachine: (bridge-408543) DBG | domain bridge-408543 has defined MAC address 52:54:00:68:63:60 in network mk-bridge-408543
	I0401 19:21:01.687428   63469 main.go:141] libmachine: (bridge-408543) DBG | unable to find current IP address of domain bridge-408543 in network mk-bridge-408543
	I0401 19:21:01.687475   63469 main.go:141] libmachine: (bridge-408543) DBG | I0401 19:21:01.687384   63637 retry.go:31] will retry after 869.095646ms: waiting for machine to come up
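The retry.go lines above poll libvirt for the new machine's DHCP lease, waiting a little longer after each failed attempt. A generic sketch of that retry-with-growing-backoff pattern; the lookup function here is a stand-in for the real lease query.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryWithBackoff polls fn until it succeeds or attempts are exhausted,
    // growing the wait between polls, as in the "will retry after ..." lines above.
    func retryWithBackoff(attempts int, initial time.Duration, fn func() (string, error)) (string, error) {
        wait := initial
        for i := 0; i < attempts; i++ {
            if ip, err := fn(); err == nil {
                return ip, nil
            }
            fmt.Printf("attempt %d failed, will retry after %v\n", i+1, wait)
            time.Sleep(wait)
            wait = wait * 3 / 2 // grow the interval between polls
        }
        return "", errors.New("machine did not get an IP in time")
    }

    func main() {
        ip, err := retryWithBackoff(10, 250*time.Millisecond, func() (string, error) {
            // In minikube this inspects the DHCP leases of the mk-bridge-408543
            // network; always failing keeps the sketch self-contained.
            return "", errors.New("unable to find current IP address")
        })
        fmt.Println(ip, err)
    }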
	I0401 19:21:00.081957   62058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:21:00.110746   62058 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:21:00.138739   62058 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:21:00.138804   62058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:21:00.156969   62058 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:21:00.157048   62058 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:21:00.185885   62058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:21:00.210702   62058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:21:00.228774   62058 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:21:00.246817   62058 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:21:00.266960   62058 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:21:00.294801   62058 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:21:00.316222   62058 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:21:00.335945   62058 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:21:00.353758   62058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:21:00.734817   62058 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:21:01.977226   62058 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.242372257s)
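The sed one-liners above pin the pause image and force the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. The same two substitutions expressed as a small Go program; the file path and the values are taken from the log, everything else is illustrative.

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // rewriteCrioConf applies the pause_image and cgroup_manager substitutions
    // shown in the log above.
    func rewriteCrioConf(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
        // A `sudo systemctl restart crio`, as in the log, would then pick up the change.
    }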
	I0401 19:21:01.977260   62058 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:21:01.977313   62058 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:21:01.986767   62058 start.go:562] Will wait 60s for crictl version
	I0401 19:21:01.986824   62058 ssh_runner.go:195] Run: which crictl
	I0401 19:21:01.992727   62058 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:21:02.056653   62058 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:21:02.056736   62058 ssh_runner.go:195] Run: crio --version
	I0401 19:21:02.107104   62058 ssh_runner.go:195] Run: crio --version
	I0401 19:21:02.155014   62058 out.go:177] * Preparing Kubernetes v1.30.0-rc.0 on CRI-O 1.29.1 ...
	I0401 19:21:01.428750   61938 out.go:204]   - Booting up control plane ...
	I0401 19:21:01.428879   61938 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:21:01.428983   61938 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:21:01.429070   61938 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:21:01.450141   61938 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:21:01.454127   61938 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:21:01.454197   61938 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:21:01.645944   61938 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
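The wait-control-plane phase above blocks until the kubelet has started the static Pods written to /etc/kubernetes/manifests. A trivial sketch that simply checks for the four manifest files kubeadm creates there:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // The four control-plane manifests kubeadm writes, per the log above.
        dir := "/etc/kubernetes/manifests"
        for _, name := range []string{"etcd.yaml", "kube-apiserver.yaml", "kube-controller-manager.yaml", "kube-scheduler.yaml"} {
            path := filepath.Join(dir, name)
            if _, err := os.Stat(path); err != nil {
                fmt.Printf("missing: %s (%v)\n", path, err)
                continue
            }
            fmt.Printf("present: %s\n", path)
        }
    }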
	I0401 19:21:02.156503   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) Calling .GetIP
	I0401 19:21:02.159794   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:21:02.160263   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:2c:90", ip: ""} in network mk-kubernetes-upgrade-054413: {Iface:virbr2 ExpiryTime:2024-04-01 20:14:28 +0000 UTC Type:0 Mac:52:54:00:e7:2c:90 Iaid: IPaddr:192.168.50.39 Prefix:24 Hostname:kubernetes-upgrade-054413 Clientid:01:52:54:00:e7:2c:90}
	I0401 19:21:02.160288   62058 main.go:141] libmachine: (kubernetes-upgrade-054413) DBG | domain kubernetes-upgrade-054413 has defined IP address 192.168.50.39 and MAC address 52:54:00:e7:2c:90 in network mk-kubernetes-upgrade-054413
	I0401 19:21:02.160544   62058 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0401 19:21:02.167146   62058 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-054413 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:kubernetes-upgrade-054413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.39 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:21:02.167271   62058 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0401 19:21:02.167332   62058 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:21:02.221503   62058 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:21:02.221531   62058 crio.go:433] Images already preloaded, skipping extraction
	I0401 19:21:02.221580   62058 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:21:02.272040   62058 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:21:02.272065   62058 cache_images.go:84] Images are preloaded, skipping loading
	I0401 19:21:02.272075   62058 kubeadm.go:928] updating node { 192.168.50.39 8443 v1.30.0-rc.0 crio true true} ...
	I0401 19:21:02.272208   62058 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-054413 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.0 ClusterName:kubernetes-upgrade-054413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
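(Editor's note) The kubelet block above is the systemd drop-in that is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. As an illustration only, a minimal Go sketch of rendering such a drop-in with text/template; the struct, template text, and values are assumptions taken from this log, not minikube's actual implementation.

package main

import (
	"os"
	"text/template"
)

// kubeletDropIn holds the values substituted into the drop-in; the field
// names here are illustrative, not minikube's own types.
type kubeletDropIn struct {
	BinDir   string // e.g. /var/lib/minikube/binaries/v1.30.0-rc.0
	NodeName string // e.g. kubernetes-upgrade-054413
	NodeIP   string // e.g. 192.168.50.39
}

const dropInTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("dropin").Parse(dropInTmpl))
	// Render to stdout; minikube instead copies the rendered bytes over SSH.
	_ = t.Execute(os.Stdout, kubeletDropIn{
		BinDir:   "/var/lib/minikube/binaries/v1.30.0-rc.0",
		NodeName: "kubernetes-upgrade-054413",
		NodeIP:   "192.168.50.39",
	})
}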
	I0401 19:21:02.272302   62058 ssh_runner.go:195] Run: crio config
	I0401 19:21:02.338358   62058 cni.go:84] Creating CNI manager for ""
	I0401 19:21:02.338388   62058 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:21:02.338411   62058 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:21:02.338443   62058 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.39 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-054413 NodeName:kubernetes-upgrade-054413 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:21:02.338631   62058 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.39
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-054413"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
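(Editor's note) The multi-document kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new (2171 bytes, per the scp line below). As a small sanity-check illustration, assuming gopkg.in/yaml.v3 is available, the sketch below parses a trimmed copy of the KubeletConfiguration document and confirms that values such as the "0%" eviction thresholds round-trip as plain strings; it is not part of minikube.

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// A trimmed copy of the KubeletConfiguration document from the log above;
// only the fields inspected here are included.
const kubeletCfg = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
`

func main() {
	var cfg struct {
		Kind         string            `yaml:"kind"`
		EvictionHard map[string]string `yaml:"evictionHard"`
		FailSwapOn   bool              `yaml:"failSwapOn"`
	}
	if err := yaml.Unmarshal([]byte(kubeletCfg), &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg.Kind, cfg.EvictionHard["nodefs.available"], cfg.FailSwapOn)
	// Prints: KubeletConfiguration 0% false
}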
	I0401 19:21:02.338712   62058 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.0
	I0401 19:21:02.353412   62058 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:21:02.353500   62058 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:21:02.364434   62058 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (329 bytes)
	I0401 19:21:02.387224   62058 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0401 19:21:02.411312   62058 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
	I0401 19:21:02.435254   62058 ssh_runner.go:195] Run: grep 192.168.50.39	control-plane.minikube.internal$ /etc/hosts
	I0401 19:21:02.441630   62058 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:21:02.664621   62058 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:21:02.682854   62058 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413 for IP: 192.168.50.39
	I0401 19:21:02.682882   62058 certs.go:194] generating shared ca certs ...
	I0401 19:21:02.682948   62058 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:21:02.683211   62058 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:21:02.683290   62058 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:21:02.683306   62058 certs.go:256] generating profile certs ...
	I0401 19:21:02.683431   62058 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/client.key
	I0401 19:21:02.683488   62058 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/apiserver.key.d63685f6
	I0401 19:21:02.683534   62058 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/proxy-client.key
	I0401 19:21:02.683639   62058 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:21:02.683667   62058 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:21:02.683680   62058 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:21:02.683704   62058 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:21:02.683732   62058 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:21:02.683756   62058 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:21:02.683806   62058 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:21:02.684449   62058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:21:02.711444   62058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:21:02.739537   62058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:21:02.767188   62058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:21:02.797376   62058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0401 19:21:02.886002   62058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 19:21:03.026978   62058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:21:03.292091   62058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kubernetes-upgrade-054413/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:21:03.366524   62058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:21:03.404832   62058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:21:03.736038   62058 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:21:03.827175   62058 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:21:03.938225   62058 ssh_runner.go:195] Run: openssl version
	I0401 19:21:03.945663   62058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:21:03.969135   62058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:21:03.974962   62058 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:21:03.975034   62058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:21:03.981983   62058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:21:03.995453   62058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:21:04.010842   62058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:21:04.016391   62058 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:21:04.016449   62058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:21:04.025418   62058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:21:04.042056   62058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:21:04.101463   62058 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:21:04.117907   62058 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:21:04.117976   62058 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:21:04.133995   62058 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
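(Editor's note) The three cert blocks above repeat the same pattern: copy a PEM into /usr/share/ca-certificates, take its OpenSSL subject hash (openssl x509 -hash -noout), and symlink /etc/ssl/certs/<hash>.0 at it so TLS libraries that scan the hashed directory can find the CA. A minimal Go sketch of that pattern, shelling out to openssl just as the logged commands do; paths and behaviour are illustrative, not minikube's code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a PEM certificate and
// points certsDir/<hash>.0 at it (the effect of the ln -fs in the log).
func linkCACert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem above
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // like ln -fs: replace an existing link if present
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}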
	I0401 19:21:04.149425   62058 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:21:04.155237   62058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:21:04.162960   62058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:21:04.170397   62058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:21:04.177192   62058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:21:04.184225   62058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:21:04.190686   62058 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
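(Editor's note) The run of "openssl x509 -noout -in <cert> -checkend 86400" commands above asks whether each control-plane certificate expires within the next 24 hours before reusing it. The equivalent check can be done with Go's standard library alone; the sketch below is an illustration of that logic, not minikube's implementation.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within d, roughly what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}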
	I0401 19:21:04.197169   62058 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-054413 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.0-rc.0 ClusterName:kubernetes-upgrade-054413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.39 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:21:04.197270   62058 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:21:04.197328   62058 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:21:04.256917   62058 cri.go:89] found id: "c02fd41b50262d7f37b0bfc0aa610e0c8b6bd350d86fa733d490833cecdbd258"
	I0401 19:21:04.256946   62058 cri.go:89] found id: "bd6d21cff6f1f52191a997981952d3cce044bc67e3355e7ca9e556d8dfa6434a"
	I0401 19:21:04.256952   62058 cri.go:89] found id: "821b68c93674ce1671f5b97310e7799161ccc60de027e51529488494ef06152e"
	I0401 19:21:04.256960   62058 cri.go:89] found id: "a50db166eb3d3fb2c801d45c4c4a922cb72019b22ec502d281fbbb6e7a138aa1"
	I0401 19:21:04.256964   62058 cri.go:89] found id: "ffade4498d1af51c188265ddceb174897e689d4eccc1560cfec7442b477d3da0"
	I0401 19:21:04.256968   62058 cri.go:89] found id: "95510ef549f6f743571d06e7b7decc2560a22526fa8ac9823b19c1422e4d18c2"
	I0401 19:21:04.256972   62058 cri.go:89] found id: "2ae2fed014a84263fbea8754dae39f8fa5cc635f6334478eb5d7f76ba30032cd"
	I0401 19:21:04.256976   62058 cri.go:89] found id: "a23575fe9bd2e2304920349a60b2c45d477c2c3454cf9462754e3fa3df56985b"
	I0401 19:21:04.256979   62058 cri.go:89] found id: ""
	I0401 19:21:04.257028   62058 ssh_runner.go:195] Run: sudo runc list -f json
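(Editor's note) At StartCluster time the log above enumerates existing kube-system containers with "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system", producing the "found id:" lines. A minimal Go sketch of that enumeration, assuming crictl is on PATH and sudo is available; it only mirrors the logged command and is not minikube's code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers asks the CRI runtime for every container in any
// state labelled with the kube-system namespace and returns the bare IDs,
// one per line of crictl --quiet output.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}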
	
	
	==> CRI-O <==
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.064068839Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711999275064038934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121225,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c58ee91c-152d-40ef-a0cf-63c6158b3e6a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.065009451Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0d26801-2478-4c0c-bf7f-fa80e37f32dc name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.065072727Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0d26801-2478-4c0c-bf7f-fa80e37f32dc name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.065913047Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f617236b9c8ec56bfebe1feff3389a201bc5884506267f76d1f1c937ef9add6,PodSandboxId:017aee4b9f7c63dc4f6eab3da4b4b23865875d3a104bc64124a095346ca9a994,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,CreatedAt:1711999271755265526,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pt82p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c20d59c-0a33-40af-b395-d749ad63baac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a694105,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c604e6b0b1ee0ef2705d8f2b9a86f3843014a3fe6054d9aba4401e5a3b05e964,PodSandboxId:3619faec62e5931825d01e4d95615af06b5399486dd02b14204f65a4ee2c7cc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711999271706517799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k2rs2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51a0b59-d774-466d-922c-cba0c429b4dd,},Annotations:map[string]string{io.kubernetes.container.hash: 9695eba9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23cbb0cb5e78411b8c65a7d6fd71f5dea9cdabe3b4d4ff747f5fde4927132465,PodSandboxId:c79a1c3ce32ebe835e5887cc266e40131ed9ce8512c849713c17d98aaffe67cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711999271736309721,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 0b542f40-5f9f-458a-812a-dfbaf44b09a9,},Annotations:map[string]string{io.kubernetes.container.hash: 941246f8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bce153433a02ad2929f5c23080caf9a296338c36fa1d521cc05526583e638f64,PodSandboxId:15871cfaaa2f07fec3a94ba2e4bfdc2be20ebfb1c7b585f8e0cf9e6187429af8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711999271702343001,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bkhgz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32105658-72c3-41d6-9f35-fd
73c4a0fddd,},Annotations:map[string]string{io.kubernetes.container.hash: 3962aabf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ee781ac7f2ea73e677d8c4cfe993322bb9802303db119ec13e213d58618baff,PodSandboxId:cac627ed1cbb652eee66855f1019760ddfc005fce3bf119462d15d3251e71356,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711999267080858084,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42ff22d5489dff818501241aae4a19ae,},Annotations:map[string]string{io.kubernetes.container.hash: 7189bf23,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:202ee5218ad97f38d16b43cdb41c2ba858a16b3a78101285572f33291a260237,PodSandboxId:804c3664a8890073473915bad11ce4de850b45ef08801d92f30bf026e3e4c5ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1711999267056397135,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46548619beeb5154f75b2c0fd3601055,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b0fec35b824bd8bba846bae75d60299340ef9efc109491e22ca1a81ea11fda,PodSandboxId:d6a5ad57139949451850a2fd4fd139b1de7534c6965b4eacc38af33dc12a028b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1711999267020548022,Labels:map[st
ring]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08289e2ebbd08dec7e0ea0b878ecc803,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceaa82c28259335b7c591ec9dea73a33142f3407ab267c13cfddab38e71ea3ba,PodSandboxId:1fb6178d98eafee9c1d5bda98caacf11bf40dc1993e751f90c11a22ac33968ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1711999267016995868,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 829b5625f7efdcbfc8f40ad5e8b18b1a,},Annotations:map[string]string{io.kubernetes.container.hash: 7a3f4cc8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd6d21cff6f1f52191a997981952d3cce044bc67e3355e7ca9e556d8dfa6434a,PodSandboxId:0205cf148bab38516d7cf1236c7ba0ef2a519667c1b1120adb1a06fc061f357d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_EXITED,CreatedAt:1711999259492796680,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pt82p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c20d59c-0a33-40af-b395-d749ad63baac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a694105,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95510ef549f6f743571d06e7b7decc2560a22526fa8ac9823b19c1422e4d18c2,PodSandboxId:aba1780ce7b6816704d1f60fef6dc0315b5f18a9c6a6a1b20dca16c1660973c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711999258885282120,Labels:map[string]string{io.kubernetes.container.name
: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b542f40-5f9f-458a-812a-dfbaf44b09a9,},Annotations:map[string]string{io.kubernetes.container.hash: 941246f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c02fd41b50262d7f37b0bfc0aa610e0c8b6bd350d86fa733d490833cecdbd258,PodSandboxId:d29a9da542d5b3bf3c2faed73cad119ec0dd23137cb7ff4875b3cf6f120bbfa0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711999260532177591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.p
od.name: coredns-7db6d8ff4d-k2rs2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51a0b59-d774-466d-922c-cba0c429b4dd,},Annotations:map[string]string{io.kubernetes.container.hash: 9695eba9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50db166eb3d3fb2c801d45c4c4a922cb72019b22ec502d281fbbb6e7a138aa1,PodSandboxId:096e57b3b4dbaa7e48d8f7bb39213967e8ee3d4c9f59af591eee94f27694a188,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_EXITED,CreatedAt:1711999259021176396,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46548619beeb5154f75b2c0fd3601055,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:821b68c93674ce1671f5b97310e7799161ccc60de027e51529488494ef06152e,PodSandboxId:d4ddc567c9e54a56279b257c2587353e57df6cf0bc060ac02ee9dc8a9e314347,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711999259146439713,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42ff22d5489dff818501241aae4a19ae,},Annotations:map[string]string{io.kubernetes.container.hash: 7189bf23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffade4498d1af51c188265ddceb174897e689d4eccc1560cfec7442b477d3da0,PodSandboxId:608026789c067667d786e6d4d57a58b31376e73a2453a1fe48c2810a77d96677,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc46
4ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_EXITED,CreatedAt:1711999258918381868,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 829b5625f7efdcbfc8f40ad5e8b18b1a,},Annotations:map[string]string{io.kubernetes.container.hash: 7a3f4cc8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ae2fed014a84263fbea8754dae39f8fa5cc635f6334478eb5d7f76ba30032cd,PodSandboxId:295c148846937fa239b29fbb20c4cd67d0381a6b94988d6392edd796521b364c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f0102326598
8284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_EXITED,CreatedAt:1711999258595357824,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08289e2ebbd08dec7e0ea0b878ecc803,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23575fe9bd2e2304920349a60b2c45d477c2c3454cf9462754e3fa3df56985b,PodSandboxId:3778d46611e30e5d930a209c8cb41147bc9a09be0203767cbda370a31cbefa63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711999236702196622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bkhgz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32105658-72c3-41d6-9f35-fd73c4a0fddd,},Annotations:map[string]string{io.kubernetes.container.hash: 3962aabf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e0d26801-2478-4c0c-bf7f-fa80e37f32dc name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.122940142Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d5038e36-3a84-4db0-83d2-198e1721fc17 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.123035243Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d5038e36-3a84-4db0-83d2-198e1721fc17 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.124570452Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1c4cceb-13f4-4c32-ace9-9033146166e3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.125186635Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711999275125154260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121225,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1c4cceb-13f4-4c32-ace9-9033146166e3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.125872091Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=999420c7-daeb-4e95-8da4-cad73fa106a4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.125936458Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=999420c7-daeb-4e95-8da4-cad73fa106a4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.126287740Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f617236b9c8ec56bfebe1feff3389a201bc5884506267f76d1f1c937ef9add6,PodSandboxId:017aee4b9f7c63dc4f6eab3da4b4b23865875d3a104bc64124a095346ca9a994,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,CreatedAt:1711999271755265526,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pt82p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c20d59c-0a33-40af-b395-d749ad63baac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a694105,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c604e6b0b1ee0ef2705d8f2b9a86f3843014a3fe6054d9aba4401e5a3b05e964,PodSandboxId:3619faec62e5931825d01e4d95615af06b5399486dd02b14204f65a4ee2c7cc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711999271706517799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k2rs2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51a0b59-d774-466d-922c-cba0c429b4dd,},Annotations:map[string]string{io.kubernetes.container.hash: 9695eba9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23cbb0cb5e78411b8c65a7d6fd71f5dea9cdabe3b4d4ff747f5fde4927132465,PodSandboxId:c79a1c3ce32ebe835e5887cc266e40131ed9ce8512c849713c17d98aaffe67cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711999271736309721,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 0b542f40-5f9f-458a-812a-dfbaf44b09a9,},Annotations:map[string]string{io.kubernetes.container.hash: 941246f8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bce153433a02ad2929f5c23080caf9a296338c36fa1d521cc05526583e638f64,PodSandboxId:15871cfaaa2f07fec3a94ba2e4bfdc2be20ebfb1c7b585f8e0cf9e6187429af8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711999271702343001,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bkhgz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32105658-72c3-41d6-9f35-fd
73c4a0fddd,},Annotations:map[string]string{io.kubernetes.container.hash: 3962aabf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ee781ac7f2ea73e677d8c4cfe993322bb9802303db119ec13e213d58618baff,PodSandboxId:cac627ed1cbb652eee66855f1019760ddfc005fce3bf119462d15d3251e71356,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711999267080858084,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42ff22d5489dff818501241aae4a19ae,},Annotations:map[string]string{io.kubernetes.container.hash: 7189bf23,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:202ee5218ad97f38d16b43cdb41c2ba858a16b3a78101285572f33291a260237,PodSandboxId:804c3664a8890073473915bad11ce4de850b45ef08801d92f30bf026e3e4c5ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1711999267056397135,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46548619beeb5154f75b2c0fd3601055,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b0fec35b824bd8bba846bae75d60299340ef9efc109491e22ca1a81ea11fda,PodSandboxId:d6a5ad57139949451850a2fd4fd139b1de7534c6965b4eacc38af33dc12a028b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1711999267020548022,Labels:map[st
ring]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08289e2ebbd08dec7e0ea0b878ecc803,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceaa82c28259335b7c591ec9dea73a33142f3407ab267c13cfddab38e71ea3ba,PodSandboxId:1fb6178d98eafee9c1d5bda98caacf11bf40dc1993e751f90c11a22ac33968ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1711999267016995868,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 829b5625f7efdcbfc8f40ad5e8b18b1a,},Annotations:map[string]string{io.kubernetes.container.hash: 7a3f4cc8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd6d21cff6f1f52191a997981952d3cce044bc67e3355e7ca9e556d8dfa6434a,PodSandboxId:0205cf148bab38516d7cf1236c7ba0ef2a519667c1b1120adb1a06fc061f357d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_EXITED,CreatedAt:1711999259492796680,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pt82p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c20d59c-0a33-40af-b395-d749ad63baac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a694105,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95510ef549f6f743571d06e7b7decc2560a22526fa8ac9823b19c1422e4d18c2,PodSandboxId:aba1780ce7b6816704d1f60fef6dc0315b5f18a9c6a6a1b20dca16c1660973c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711999258885282120,Labels:map[string]string{io.kubernetes.container.name
: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b542f40-5f9f-458a-812a-dfbaf44b09a9,},Annotations:map[string]string{io.kubernetes.container.hash: 941246f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c02fd41b50262d7f37b0bfc0aa610e0c8b6bd350d86fa733d490833cecdbd258,PodSandboxId:d29a9da542d5b3bf3c2faed73cad119ec0dd23137cb7ff4875b3cf6f120bbfa0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711999260532177591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.p
od.name: coredns-7db6d8ff4d-k2rs2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51a0b59-d774-466d-922c-cba0c429b4dd,},Annotations:map[string]string{io.kubernetes.container.hash: 9695eba9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50db166eb3d3fb2c801d45c4c4a922cb72019b22ec502d281fbbb6e7a138aa1,PodSandboxId:096e57b3b4dbaa7e48d8f7bb39213967e8ee3d4c9f59af591eee94f27694a188,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_EXITED,CreatedAt:1711999259021176396,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46548619beeb5154f75b2c0fd3601055,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:821b68c93674ce1671f5b97310e7799161ccc60de027e51529488494ef06152e,PodSandboxId:d4ddc567c9e54a56279b257c2587353e57df6cf0bc060ac02ee9dc8a9e314347,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711999259146439713,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42ff22d5489dff818501241aae4a19ae,},Annotations:map[string]string{io.kubernetes.container.hash: 7189bf23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffade4498d1af51c188265ddceb174897e689d4eccc1560cfec7442b477d3da0,PodSandboxId:608026789c067667d786e6d4d57a58b31376e73a2453a1fe48c2810a77d96677,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc46
4ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_EXITED,CreatedAt:1711999258918381868,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 829b5625f7efdcbfc8f40ad5e8b18b1a,},Annotations:map[string]string{io.kubernetes.container.hash: 7a3f4cc8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ae2fed014a84263fbea8754dae39f8fa5cc635f6334478eb5d7f76ba30032cd,PodSandboxId:295c148846937fa239b29fbb20c4cd67d0381a6b94988d6392edd796521b364c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f0102326598
8284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_EXITED,CreatedAt:1711999258595357824,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08289e2ebbd08dec7e0ea0b878ecc803,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23575fe9bd2e2304920349a60b2c45d477c2c3454cf9462754e3fa3df56985b,PodSandboxId:3778d46611e30e5d930a209c8cb41147bc9a09be0203767cbda370a31cbefa63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711999236702196622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bkhgz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32105658-72c3-41d6-9f35-fd73c4a0fddd,},Annotations:map[string]string{io.kubernetes.container.hash: 3962aabf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=999420c7-daeb-4e95-8da4-cad73fa106a4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.174210592Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e153cf8-0b92-4705-b7a5-5ca58fc3cae1 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.174307942Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e153cf8-0b92-4705-b7a5-5ca58fc3cae1 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.177145015Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=955cbe36-e01a-4291-b1fb-82ab1a5ad8c3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.177577789Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711999275177553964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121225,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=955cbe36-e01a-4291-b1fb-82ab1a5ad8c3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.178582450Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=677a61ea-4a3f-46b5-8647-c245038e65dd name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.178637927Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=677a61ea-4a3f-46b5-8647-c245038e65dd name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.179376350Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f617236b9c8ec56bfebe1feff3389a201bc5884506267f76d1f1c937ef9add6,PodSandboxId:017aee4b9f7c63dc4f6eab3da4b4b23865875d3a104bc64124a095346ca9a994,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,CreatedAt:1711999271755265526,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pt82p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c20d59c-0a33-40af-b395-d749ad63baac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a694105,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c604e6b0b1ee0ef2705d8f2b9a86f3843014a3fe6054d9aba4401e5a3b05e964,PodSandboxId:3619faec62e5931825d01e4d95615af06b5399486dd02b14204f65a4ee2c7cc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711999271706517799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k2rs2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51a0b59-d774-466d-922c-cba0c429b4dd,},Annotations:map[string]string{io.kubernetes.container.hash: 9695eba9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23cbb0cb5e78411b8c65a7d6fd71f5dea9cdabe3b4d4ff747f5fde4927132465,PodSandboxId:c79a1c3ce32ebe835e5887cc266e40131ed9ce8512c849713c17d98aaffe67cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711999271736309721,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 0b542f40-5f9f-458a-812a-dfbaf44b09a9,},Annotations:map[string]string{io.kubernetes.container.hash: 941246f8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bce153433a02ad2929f5c23080caf9a296338c36fa1d521cc05526583e638f64,PodSandboxId:15871cfaaa2f07fec3a94ba2e4bfdc2be20ebfb1c7b585f8e0cf9e6187429af8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711999271702343001,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bkhgz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32105658-72c3-41d6-9f35-fd
73c4a0fddd,},Annotations:map[string]string{io.kubernetes.container.hash: 3962aabf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ee781ac7f2ea73e677d8c4cfe993322bb9802303db119ec13e213d58618baff,PodSandboxId:cac627ed1cbb652eee66855f1019760ddfc005fce3bf119462d15d3251e71356,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711999267080858084,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42ff22d5489dff818501241aae4a19ae,},Annotations:map[string]string{io.kubernetes.container.hash: 7189bf23,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:202ee5218ad97f38d16b43cdb41c2ba858a16b3a78101285572f33291a260237,PodSandboxId:804c3664a8890073473915bad11ce4de850b45ef08801d92f30bf026e3e4c5ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1711999267056397135,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46548619beeb5154f75b2c0fd3601055,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b0fec35b824bd8bba846bae75d60299340ef9efc109491e22ca1a81ea11fda,PodSandboxId:d6a5ad57139949451850a2fd4fd139b1de7534c6965b4eacc38af33dc12a028b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1711999267020548022,Labels:map[st
ring]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08289e2ebbd08dec7e0ea0b878ecc803,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceaa82c28259335b7c591ec9dea73a33142f3407ab267c13cfddab38e71ea3ba,PodSandboxId:1fb6178d98eafee9c1d5bda98caacf11bf40dc1993e751f90c11a22ac33968ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1711999267016995868,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 829b5625f7efdcbfc8f40ad5e8b18b1a,},Annotations:map[string]string{io.kubernetes.container.hash: 7a3f4cc8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd6d21cff6f1f52191a997981952d3cce044bc67e3355e7ca9e556d8dfa6434a,PodSandboxId:0205cf148bab38516d7cf1236c7ba0ef2a519667c1b1120adb1a06fc061f357d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_EXITED,CreatedAt:1711999259492796680,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pt82p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c20d59c-0a33-40af-b395-d749ad63baac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a694105,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95510ef549f6f743571d06e7b7decc2560a22526fa8ac9823b19c1422e4d18c2,PodSandboxId:aba1780ce7b6816704d1f60fef6dc0315b5f18a9c6a6a1b20dca16c1660973c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711999258885282120,Labels:map[string]string{io.kubernetes.container.name
: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b542f40-5f9f-458a-812a-dfbaf44b09a9,},Annotations:map[string]string{io.kubernetes.container.hash: 941246f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c02fd41b50262d7f37b0bfc0aa610e0c8b6bd350d86fa733d490833cecdbd258,PodSandboxId:d29a9da542d5b3bf3c2faed73cad119ec0dd23137cb7ff4875b3cf6f120bbfa0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711999260532177591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.p
od.name: coredns-7db6d8ff4d-k2rs2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51a0b59-d774-466d-922c-cba0c429b4dd,},Annotations:map[string]string{io.kubernetes.container.hash: 9695eba9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50db166eb3d3fb2c801d45c4c4a922cb72019b22ec502d281fbbb6e7a138aa1,PodSandboxId:096e57b3b4dbaa7e48d8f7bb39213967e8ee3d4c9f59af591eee94f27694a188,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_EXITED,CreatedAt:1711999259021176396,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46548619beeb5154f75b2c0fd3601055,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:821b68c93674ce1671f5b97310e7799161ccc60de027e51529488494ef06152e,PodSandboxId:d4ddc567c9e54a56279b257c2587353e57df6cf0bc060ac02ee9dc8a9e314347,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711999259146439713,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42ff22d5489dff818501241aae4a19ae,},Annotations:map[string]string{io.kubernetes.container.hash: 7189bf23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffade4498d1af51c188265ddceb174897e689d4eccc1560cfec7442b477d3da0,PodSandboxId:608026789c067667d786e6d4d57a58b31376e73a2453a1fe48c2810a77d96677,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc46
4ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_EXITED,CreatedAt:1711999258918381868,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 829b5625f7efdcbfc8f40ad5e8b18b1a,},Annotations:map[string]string{io.kubernetes.container.hash: 7a3f4cc8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ae2fed014a84263fbea8754dae39f8fa5cc635f6334478eb5d7f76ba30032cd,PodSandboxId:295c148846937fa239b29fbb20c4cd67d0381a6b94988d6392edd796521b364c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f0102326598
8284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_EXITED,CreatedAt:1711999258595357824,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08289e2ebbd08dec7e0ea0b878ecc803,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23575fe9bd2e2304920349a60b2c45d477c2c3454cf9462754e3fa3df56985b,PodSandboxId:3778d46611e30e5d930a209c8cb41147bc9a09be0203767cbda370a31cbefa63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711999236702196622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bkhgz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32105658-72c3-41d6-9f35-fd73c4a0fddd,},Annotations:map[string]string{io.kubernetes.container.hash: 3962aabf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=677a61ea-4a3f-46b5-8647-c245038e65dd name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.221554953Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=be912a8e-1e12-4115-8ab3-81a6eebd59c0 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.221663301Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=be912a8e-1e12-4115-8ab3-81a6eebd59c0 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.222805629Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=48b26726-0926-453b-9714-4636602422ef name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.223585942Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711999275223510504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121225,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48b26726-0926-453b-9714-4636602422ef name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.224160578Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f32eff9-97d2-4c6d-99d8-836c2383ea6f name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.224249991Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f32eff9-97d2-4c6d-99d8-836c2383ea6f name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:21:15 kubernetes-upgrade-054413 crio[2777]: time="2024-04-01 19:21:15.224858402Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7f617236b9c8ec56bfebe1feff3389a201bc5884506267f76d1f1c937ef9add6,PodSandboxId:017aee4b9f7c63dc4f6eab3da4b4b23865875d3a104bc64124a095346ca9a994,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNING,CreatedAt:1711999271755265526,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pt82p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c20d59c-0a33-40af-b395-d749ad63baac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a694105,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c604e6b0b1ee0ef2705d8f2b9a86f3843014a3fe6054d9aba4401e5a3b05e964,PodSandboxId:3619faec62e5931825d01e4d95615af06b5399486dd02b14204f65a4ee2c7cc9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711999271706517799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k2rs2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51a0b59-d774-466d-922c-cba0c429b4dd,},Annotations:map[string]string{io.kubernetes.container.hash: 9695eba9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23cbb0cb5e78411b8c65a7d6fd71f5dea9cdabe3b4d4ff747f5fde4927132465,PodSandboxId:c79a1c3ce32ebe835e5887cc266e40131ed9ce8512c849713c17d98aaffe67cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1711999271736309721,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 0b542f40-5f9f-458a-812a-dfbaf44b09a9,},Annotations:map[string]string{io.kubernetes.container.hash: 941246f8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bce153433a02ad2929f5c23080caf9a296338c36fa1d521cc05526583e638f64,PodSandboxId:15871cfaaa2f07fec3a94ba2e4bfdc2be20ebfb1c7b585f8e0cf9e6187429af8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711999271702343001,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bkhgz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32105658-72c3-41d6-9f35-fd
73c4a0fddd,},Annotations:map[string]string{io.kubernetes.container.hash: 3962aabf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ee781ac7f2ea73e677d8c4cfe993322bb9802303db119ec13e213d58618baff,PodSandboxId:cac627ed1cbb652eee66855f1019760ddfc005fce3bf119462d15d3251e71356,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711999267080858084,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42ff22d5489dff818501241aae4a19ae,},Annotations:map[string]string{io.kubernetes.container.hash: 7189bf23,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:202ee5218ad97f38d16b43cdb41c2ba858a16b3a78101285572f33291a260237,PodSandboxId:804c3664a8890073473915bad11ce4de850b45ef08801d92f30bf026e3e4c5ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1711999267056397135,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46548619beeb5154f75b2c0fd3601055,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b0fec35b824bd8bba846bae75d60299340ef9efc109491e22ca1a81ea11fda,PodSandboxId:d6a5ad57139949451850a2fd4fd139b1de7534c6965b4eacc38af33dc12a028b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1711999267020548022,Labels:map[st
ring]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08289e2ebbd08dec7e0ea0b878ecc803,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceaa82c28259335b7c591ec9dea73a33142f3407ab267c13cfddab38e71ea3ba,PodSandboxId:1fb6178d98eafee9c1d5bda98caacf11bf40dc1993e751f90c11a22ac33968ba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1711999267016995868,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 829b5625f7efdcbfc8f40ad5e8b18b1a,},Annotations:map[string]string{io.kubernetes.container.hash: 7a3f4cc8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd6d21cff6f1f52191a997981952d3cce044bc67e3355e7ca9e556d8dfa6434a,PodSandboxId:0205cf148bab38516d7cf1236c7ba0ef2a519667c1b1120adb1a06fc061f357d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_EXITED,CreatedAt:1711999259492796680,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pt82p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c20d59c-0a33-40af-b395-d749ad63baac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a694105,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95510ef549f6f743571d06e7b7decc2560a22526fa8ac9823b19c1422e4d18c2,PodSandboxId:aba1780ce7b6816704d1f60fef6dc0315b5f18a9c6a6a1b20dca16c1660973c6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1711999258885282120,Labels:map[string]string{io.kubernetes.container.name
: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b542f40-5f9f-458a-812a-dfbaf44b09a9,},Annotations:map[string]string{io.kubernetes.container.hash: 941246f8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c02fd41b50262d7f37b0bfc0aa610e0c8b6bd350d86fa733d490833cecdbd258,PodSandboxId:d29a9da542d5b3bf3c2faed73cad119ec0dd23137cb7ff4875b3cf6f120bbfa0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711999260532177591,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.p
od.name: coredns-7db6d8ff4d-k2rs2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b51a0b59-d774-466d-922c-cba0c429b4dd,},Annotations:map[string]string{io.kubernetes.container.hash: 9695eba9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a50db166eb3d3fb2c801d45c4c4a922cb72019b22ec502d281fbbb6e7a138aa1,PodSandboxId:096e57b3b4dbaa7e48d8f7bb39213967e8ee3d4c9f59af591eee94f27694a188,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_EXITED,CreatedAt:1711999259021176396,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46548619beeb5154f75b2c0fd3601055,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:821b68c93674ce1671f5b97310e7799161ccc60de027e51529488494ef06152e,PodSandboxId:d4ddc567c9e54a56279b257c2587353e57df6cf0bc060ac02ee9dc8a9e314347,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711999259146439713,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42ff22d5489dff818501241aae4a19ae,},Annotations:map[string]string{io.kubernetes.container.hash: 7189bf23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffade4498d1af51c188265ddceb174897e689d4eccc1560cfec7442b477d3da0,PodSandboxId:608026789c067667d786e6d4d57a58b31376e73a2453a1fe48c2810a77d96677,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc46
4ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_EXITED,CreatedAt:1711999258918381868,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 829b5625f7efdcbfc8f40ad5e8b18b1a,},Annotations:map[string]string{io.kubernetes.container.hash: 7a3f4cc8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ae2fed014a84263fbea8754dae39f8fa5cc635f6334478eb5d7f76ba30032cd,PodSandboxId:295c148846937fa239b29fbb20c4cd67d0381a6b94988d6392edd796521b364c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f0102326598
8284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_EXITED,CreatedAt:1711999258595357824,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-054413,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08289e2ebbd08dec7e0ea0b878ecc803,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23575fe9bd2e2304920349a60b2c45d477c2c3454cf9462754e3fa3df56985b,PodSandboxId:3778d46611e30e5d930a209c8cb41147bc9a09be0203767cbda370a31cbefa63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909
a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711999236702196622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bkhgz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32105658-72c3-41d6-9f35-fd73c4a0fddd,},Annotations:map[string]string{io.kubernetes.container.hash: 3962aabf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f32eff9-97d2-4c6d-99d8-836c2383ea6f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7f617236b9c8e       33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652   3 seconds ago       Running             kube-proxy                2                   017aee4b9f7c6       kube-proxy-pt82p
	23cbb0cb5e784       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       2                   c79a1c3ce32eb       storage-provisioner
	c604e6b0b1ee0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   3619faec62e59       coredns-7db6d8ff4d-k2rs2
	bce153433a02a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   1                   15871cfaaa2f0       coredns-7db6d8ff4d-bkhgz
	2ee781ac7f2ea       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   8 seconds ago       Running             etcd                      2                   cac627ed1cbb6       etcd-kubernetes-upgrade-054413
	202ee5218ad97       ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a   8 seconds ago       Running             kube-controller-manager   2                   804c3664a8890       kube-controller-manager-kubernetes-upgrade-054413
	82b0fec35b824       fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5   8 seconds ago       Running             kube-scheduler            2                   d6a5ad5713994       kube-scheduler-kubernetes-upgrade-054413
	ceaa82c282593       e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3   8 seconds ago       Running             kube-apiserver            2                   1fb6178d98eaf       kube-apiserver-kubernetes-upgrade-054413
	c02fd41b50262       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 seconds ago      Exited              coredns                   1                   d29a9da542d5b       coredns-7db6d8ff4d-k2rs2
	bd6d21cff6f1f       33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652   15 seconds ago      Exited              kube-proxy                1                   0205cf148bab3       kube-proxy-pt82p
	821b68c93674c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   16 seconds ago      Exited              etcd                      1                   d4ddc567c9e54       etcd-kubernetes-upgrade-054413
	a50db166eb3d3       ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a   16 seconds ago      Exited              kube-controller-manager   1                   096e57b3b4dba       kube-controller-manager-kubernetes-upgrade-054413
	ffade4498d1af       e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3   16 seconds ago      Exited              kube-apiserver            1                   608026789c067       kube-apiserver-kubernetes-upgrade-054413
	95510ef549f6f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Exited              storage-provisioner       1                   aba1780ce7b68       storage-provisioner
	2ae2fed014a84       fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5   16 seconds ago      Exited              kube-scheduler            1                   295c148846937       kube-scheduler-kubernetes-upgrade-054413
	a23575fe9bd2e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   38 seconds ago      Exited              coredns                   0                   3778d46611e30       coredns-7db6d8ff4d-bkhgz
	
	
	==> coredns [a23575fe9bd2e2304920349a60b2c45d477c2c3454cf9462754e3fa3df56985b] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[1154186424]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Apr-2024 19:20:36.955) (total time: 13513ms):
	Trace[1154186424]: [13.513372652s] [13.513372652s] END
	[INFO] plugin/kubernetes: Trace[785892552]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Apr-2024 19:20:36.955) (total time: 13513ms):
	Trace[785892552]: [13.513957891s] [13.513957891s] END
	[INFO] plugin/kubernetes: Trace[98085343]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Apr-2024 19:20:36.954) (total time: 13514ms):
	Trace[98085343]: [13.514885429s] [13.514885429s] END
	
	
	==> coredns [bce153433a02ad2929f5c23080caf9a296338c36fa1d521cc05526583e638f64] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [c02fd41b50262d7f37b0bfc0aa610e0c8b6bd350d86fa733d490833cecdbd258] <==
	
	
	==> coredns [c604e6b0b1ee0ef2705d8f2b9a86f3843014a3fe6054d9aba4401e5a3b05e964] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-054413
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-054413
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 19:20:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-054413
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 19:21:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 19:21:10 +0000   Mon, 01 Apr 2024 19:20:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 19:21:10 +0000   Mon, 01 Apr 2024 19:20:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 19:21:10 +0000   Mon, 01 Apr 2024 19:20:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 19:21:10 +0000   Mon, 01 Apr 2024 19:20:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.39
	  Hostname:    kubernetes-upgrade-054413
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f80520ec8b4b45ad9c16dbda40c16860
	  System UUID:                f80520ec-8b4b-45ad-9c16-dbda40c16860
	  Boot ID:                    8f89d66c-a583-463a-9040-d8a6cd1d163e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.0
	  Kube-Proxy Version:         v1.30.0-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-bkhgz                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     40s
	  kube-system                 coredns-7db6d8ff4d-k2rs2                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     40s
	  kube-system                 etcd-kubernetes-upgrade-054413                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         52s
	  kube-system                 kube-apiserver-kubernetes-upgrade-054413             250m (12%)    0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-054413    200m (10%)    0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-proxy-pt82p                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-scheduler-kubernetes-upgrade-054413             100m (5%)     0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 38s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)  kubelet          Node kubernetes-upgrade-054413 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x7 over 62s)  kubelet          Node kubernetes-upgrade-054413 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)  kubelet          Node kubernetes-upgrade-054413 status is now: NodeHasSufficientMemory
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           41s                node-controller  Node kubernetes-upgrade-054413 event: Registered Node kubernetes-upgrade-054413 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-054413 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-054413 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet          Node kubernetes-upgrade-054413 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr 1 19:20] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.067831] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075518] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.215925] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.159656] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.346793] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +5.739234] systemd-fstab-generator[739]: Ignoring "noauto" option for root device
	[  +0.077928] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.367553] systemd-fstab-generator[863]: Ignoring "noauto" option for root device
	[ +10.102565] systemd-fstab-generator[1252]: Ignoring "noauto" option for root device
	[  +0.081460] kauditd_printk_skb: 97 callbacks suppressed
	[ +12.644282] kauditd_printk_skb: 21 callbacks suppressed
	[ +21.799486] systemd-fstab-generator[2029]: Ignoring "noauto" option for root device
	[  +0.106343] kauditd_printk_skb: 64 callbacks suppressed
	[  +0.268756] systemd-fstab-generator[2133]: Ignoring "noauto" option for root device
	[  +0.929188] systemd-fstab-generator[2425]: Ignoring "noauto" option for root device
	[  +0.436085] systemd-fstab-generator[2527]: Ignoring "noauto" option for root device
	[  +0.623293] systemd-fstab-generator[2591]: Ignoring "noauto" option for root device
	[Apr 1 19:21] systemd-fstab-generator[2913]: Ignoring "noauto" option for root device
	[  +0.803207] kauditd_printk_skb: 235 callbacks suppressed
	[  +2.700621] systemd-fstab-generator[3407]: Ignoring "noauto" option for root device
	[  +5.797526] kauditd_printk_skb: 85 callbacks suppressed
	[  +1.171387] systemd-fstab-generator[3922]: Ignoring "noauto" option for root device
	
	
	==> etcd [2ee781ac7f2ea73e677d8c4cfe993322bb9802303db119ec13e213d58618baff] <==
	{"level":"info","ts":"2024-04-01T19:21:07.525133Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-01T19:21:07.525145Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-01T19:21:07.525315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec29e853f5cd425a switched to configuration voters=(17017388114299929178)"}
	{"level":"info","ts":"2024-04-01T19:21:07.525386Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"16343206fca1ffcb","local-member-id":"ec29e853f5cd425a","added-peer-id":"ec29e853f5cd425a","added-peer-peer-urls":["https://192.168.50.39:2380"]}
	{"level":"info","ts":"2024-04-01T19:21:07.525486Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"16343206fca1ffcb","local-member-id":"ec29e853f5cd425a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:21:07.525539Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:21:07.547018Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-01T19:21:07.547375Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ec29e853f5cd425a","initial-advertise-peer-urls":["https://192.168.50.39:2380"],"listen-peer-urls":["https://192.168.50.39:2380"],"advertise-client-urls":["https://192.168.50.39:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.39:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-01T19:21:07.547451Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-01T19:21:07.547555Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.39:2380"}
	{"level":"info","ts":"2024-04-01T19:21:07.547602Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.39:2380"}
	{"level":"info","ts":"2024-04-01T19:21:08.980909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec29e853f5cd425a is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-01T19:21:08.98095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec29e853f5cd425a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-01T19:21:08.980965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec29e853f5cd425a received MsgPreVoteResp from ec29e853f5cd425a at term 2"}
	{"level":"info","ts":"2024-04-01T19:21:08.980976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec29e853f5cd425a became candidate at term 3"}
	{"level":"info","ts":"2024-04-01T19:21:08.980997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec29e853f5cd425a received MsgVoteResp from ec29e853f5cd425a at term 3"}
	{"level":"info","ts":"2024-04-01T19:21:08.981006Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec29e853f5cd425a became leader at term 3"}
	{"level":"info","ts":"2024-04-01T19:21:08.981013Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ec29e853f5cd425a elected leader ec29e853f5cd425a at term 3"}
	{"level":"info","ts":"2024-04-01T19:21:08.987151Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ec29e853f5cd425a","local-member-attributes":"{Name:kubernetes-upgrade-054413 ClientURLs:[https://192.168.50.39:2379]}","request-path":"/0/members/ec29e853f5cd425a/attributes","cluster-id":"16343206fca1ffcb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-01T19:21:08.987304Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T19:21:08.98763Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T19:21:08.987821Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-01T19:21:08.987862Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-01T19:21:08.989603Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.39:2379"}
	{"level":"info","ts":"2024-04-01T19:21:08.990649Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [821b68c93674ce1671f5b97310e7799161ccc60de027e51529488494ef06152e] <==
	{"level":"info","ts":"2024-04-01T19:21:00.047663Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"66.856154ms"}
	{"level":"info","ts":"2024-04-01T19:21:00.15637Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-04-01T19:21:00.198109Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"16343206fca1ffcb","local-member-id":"ec29e853f5cd425a","commit-index":411}
	{"level":"info","ts":"2024-04-01T19:21:00.207625Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec29e853f5cd425a switched to configuration voters=()"}
	{"level":"info","ts":"2024-04-01T19:21:00.207786Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec29e853f5cd425a became follower at term 2"}
	{"level":"info","ts":"2024-04-01T19:21:00.207817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft ec29e853f5cd425a [peers: [], term: 2, commit: 411, applied: 0, lastindex: 411, lastterm: 2]"}
	{"level":"warn","ts":"2024-04-01T19:21:00.215817Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-04-01T19:21:00.32525Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":398}
	{"level":"info","ts":"2024-04-01T19:21:00.345748Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-04-01T19:21:00.398139Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"ec29e853f5cd425a","timeout":"7s"}
	{"level":"info","ts":"2024-04-01T19:21:00.398976Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"ec29e853f5cd425a"}
	{"level":"info","ts":"2024-04-01T19:21:00.399087Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"ec29e853f5cd425a","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-01T19:21:00.399502Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-01T19:21:00.404198Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-01T19:21:00.406664Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-01T19:21:00.407044Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-01T19:21:00.407561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec29e853f5cd425a switched to configuration voters=(17017388114299929178)"}
	{"level":"info","ts":"2024-04-01T19:21:00.407778Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"16343206fca1ffcb","local-member-id":"ec29e853f5cd425a","added-peer-id":"ec29e853f5cd425a","added-peer-peer-urls":["https://192.168.50.39:2380"]}
	{"level":"info","ts":"2024-04-01T19:21:00.408177Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"16343206fca1ffcb","local-member-id":"ec29e853f5cd425a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:21:00.4083Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:21:00.462872Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-01T19:21:00.46312Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ec29e853f5cd425a","initial-advertise-peer-urls":["https://192.168.50.39:2380"],"listen-peer-urls":["https://192.168.50.39:2380"],"advertise-client-urls":["https://192.168.50.39:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.39:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-01T19:21:00.463185Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-01T19:21:00.463299Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.39:2380"}
	{"level":"info","ts":"2024-04-01T19:21:00.463335Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.39:2380"}
	
	
	==> kernel <==
	 19:21:15 up 1 min,  0 users,  load average: 2.21, 0.76, 0.27
	Linux kubernetes-upgrade-054413 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ceaa82c28259335b7c591ec9dea73a33142f3407ab267c13cfddab38e71ea3ba] <==
	I0401 19:21:10.568425       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0401 19:21:10.670385       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0401 19:21:10.688614       1 aggregator.go:165] initial CRD sync complete...
	I0401 19:21:10.688777       1 autoregister_controller.go:141] Starting autoregister controller
	I0401 19:21:10.688810       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0401 19:21:10.699414       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0401 19:21:10.699478       1 policy_source.go:224] refreshing policies
	I0401 19:21:10.714020       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 19:21:10.727106       1 shared_informer.go:320] Caches are synced for configmaps
	I0401 19:21:10.727176       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0401 19:21:10.727187       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0401 19:21:10.727290       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0401 19:21:10.729458       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	E0401 19:21:10.734096       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0401 19:21:10.736477       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0401 19:21:10.749237       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0401 19:21:10.784004       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0401 19:21:10.807531       1 cache.go:39] Caches are synced for autoregister controller
	I0401 19:21:11.542507       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 19:21:12.129310       1 controller.go:615] quota admission added evaluator for: endpoints
	I0401 19:21:12.711243       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0401 19:21:12.743168       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0401 19:21:12.819441       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0401 19:21:12.855415       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 19:21:12.861584       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [ffade4498d1af51c188265ddceb174897e689d4eccc1560cfec7442b477d3da0] <==
	I0401 19:20:59.874221       1 options.go:221] external host was not specified, using 192.168.50.39
	I0401 19:20:59.900823       1 server.go:148] Version: v1.30.0-rc.0
	I0401 19:20:59.900884       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0401 19:21:01.387890       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:21:01.388258       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0401 19:21:01.388394       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0401 19:21:01.409340       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0401 19:21:01.410038       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0401 19:21:01.413792       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0401 19:21:01.414223       1 instance.go:299] Using reconciler: lease
	W0401 19:21:01.415573       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [202ee5218ad97f38d16b43cdb41c2ba858a16b3a78101285572f33291a260237] <==
	I0401 19:21:08.579897       1 serving.go:380] Generated self-signed cert in-memory
	I0401 19:21:08.848555       1 controllermanager.go:189] "Starting" version="v1.30.0-rc.0"
	I0401 19:21:08.848605       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 19:21:08.850808       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0401 19:21:08.850968       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0401 19:21:08.851441       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0401 19:21:08.851539       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0401 19:21:12.722891       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0401 19:21:12.723137       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0401 19:21:12.824435       1 shared_informer.go:320] Caches are synced for tokens
	
	
	==> kube-controller-manager [a50db166eb3d3fb2c801d45c4c4a922cb72019b22ec502d281fbbb6e7a138aa1] <==
	
	
	==> kube-proxy [7f617236b9c8ec56bfebe1feff3389a201bc5884506267f76d1f1c937ef9add6] <==
	I0401 19:21:12.219366       1 server_linux.go:69] "Using iptables proxy"
	I0401 19:21:12.241900       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.39"]
	I0401 19:21:12.301918       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0401 19:21:12.301983       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 19:21:12.302005       1 server_linux.go:165] "Using iptables Proxier"
	I0401 19:21:12.305285       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 19:21:12.305456       1 server.go:872] "Version info" version="v1.30.0-rc.0"
	I0401 19:21:12.305566       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 19:21:12.307349       1 config.go:192] "Starting service config controller"
	I0401 19:21:12.307398       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 19:21:12.307424       1 config.go:101] "Starting endpoint slice config controller"
	I0401 19:21:12.307454       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 19:21:12.308007       1 config.go:319] "Starting node config controller"
	I0401 19:21:12.308053       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 19:21:12.407554       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0401 19:21:12.407583       1 shared_informer.go:320] Caches are synced for service config
	I0401 19:21:12.408123       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [bd6d21cff6f1f52191a997981952d3cce044bc67e3355e7ca9e556d8dfa6434a] <==
	
	
	==> kube-scheduler [2ae2fed014a84263fbea8754dae39f8fa5cc635f6334478eb5d7f76ba30032cd] <==
	
	
	==> kube-scheduler [82b0fec35b824bd8bba846bae75d60299340ef9efc109491e22ca1a81ea11fda] <==
	I0401 19:21:08.387122       1 serving.go:380] Generated self-signed cert in-memory
	W0401 19:21:10.602449       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0401 19:21:10.603211       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 19:21:10.603314       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0401 19:21:10.603352       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0401 19:21:10.672408       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0-rc.0"
	I0401 19:21:10.672488       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 19:21:10.674649       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0401 19:21:10.674806       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0401 19:21:10.684880       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 19:21:10.674827       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0401 19:21:10.789068       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 19:21:06 kubernetes-upgrade-054413 kubelet[3414]: I0401 19:21:06.991000    3414 scope.go:117] "RemoveContainer" containerID="821b68c93674ce1671f5b97310e7799161ccc60de027e51529488494ef06152e"
	Apr 01 19:21:06 kubernetes-upgrade-054413 kubelet[3414]: I0401 19:21:06.992381    3414 scope.go:117] "RemoveContainer" containerID="ffade4498d1af51c188265ddceb174897e689d4eccc1560cfec7442b477d3da0"
	Apr 01 19:21:06 kubernetes-upgrade-054413 kubelet[3414]: I0401 19:21:06.993626    3414 scope.go:117] "RemoveContainer" containerID="a50db166eb3d3fb2c801d45c4c4a922cb72019b22ec502d281fbbb6e7a138aa1"
	Apr 01 19:21:06 kubernetes-upgrade-054413 kubelet[3414]: I0401 19:21:06.994259    3414 scope.go:117] "RemoveContainer" containerID="2ae2fed014a84263fbea8754dae39f8fa5cc635f6334478eb5d7f76ba30032cd"
	Apr 01 19:21:07 kubernetes-upgrade-054413 kubelet[3414]: I0401 19:21:07.060453    3414 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-054413"
	Apr 01 19:21:07 kubernetes-upgrade-054413 kubelet[3414]: E0401 19:21:07.061506    3414 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.39:8443: connect: connection refused" node="kubernetes-upgrade-054413"
	Apr 01 19:21:07 kubernetes-upgrade-054413 kubelet[3414]: I0401 19:21:07.865032    3414 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-054413"
	Apr 01 19:21:10 kubernetes-upgrade-054413 kubelet[3414]: I0401 19:21:10.783436    3414 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-054413"
	Apr 01 19:21:10 kubernetes-upgrade-054413 kubelet[3414]: I0401 19:21:10.783592    3414 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-054413"
	Apr 01 19:21:10 kubernetes-upgrade-054413 kubelet[3414]: I0401 19:21:10.785504    3414 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 01 19:21:10 kubernetes-upgrade-054413 kubelet[3414]: I0401 19:21:10.786928    3414 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 01 19:21:11 kubernetes-upgrade-054413 kubelet[3414]: I0401 19:21:11.323826    3414 apiserver.go:52] "Watching apiserver"
	Apr 01 19:21:11 kubernetes-upgrade-054413 kubelet[3414]: I0401 19:21:11.328379    3414 topology_manager.go:215] "Topology Admit Handler" podUID="0b542f40-5f9f-458a-812a-dfbaf44b09a9" podNamespace="kube-system" podName="storage-provisioner"
	Apr 01 19:21:11 kubernetes-upgrade-054413 kubelet[3414]: I0401 19:21:11.328529    3414 topology_manager.go:215] "Topology Admit Handler" podUID="4c20d59c-0a33-40af-b395-d749ad63baac" podNamespace="kube-system" podName="kube-proxy-pt82p"
	Apr 01 19:21:11 kubernetes-upgrade-054413 kubelet[3414]: I0401 19:21:11.328597    3414 topology_manager.go:215] "Topology Admit Handler" podUID="32105658-72c3-41d6-9f35-fd73c4a0fddd" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bkhgz"
	Apr 01 19:21:11 kubernetes-upgrade-054413 kubelet[3414]: I0401 19:21:11.328661    3414 topology_manager.go:215] "Topology Admit Handler" podUID="b51a0b59-d774-466d-922c-cba0c429b4dd" podNamespace="kube-system" podName="coredns-7db6d8ff4d-k2rs2"
	Apr 01 19:21:11 kubernetes-upgrade-054413 kubelet[3414]: I0401 19:21:11.428841    3414 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 01 19:21:11 kubernetes-upgrade-054413 kubelet[3414]: I0401 19:21:11.517575    3414 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c20d59c-0a33-40af-b395-d749ad63baac-xtables-lock\") pod \"kube-proxy-pt82p\" (UID: \"4c20d59c-0a33-40af-b395-d749ad63baac\") " pod="kube-system/kube-proxy-pt82p"
	Apr 01 19:21:11 kubernetes-upgrade-054413 kubelet[3414]: I0401 19:21:11.518560    3414 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c20d59c-0a33-40af-b395-d749ad63baac-lib-modules\") pod \"kube-proxy-pt82p\" (UID: \"4c20d59c-0a33-40af-b395-d749ad63baac\") " pod="kube-system/kube-proxy-pt82p"
	Apr 01 19:21:11 kubernetes-upgrade-054413 kubelet[3414]: I0401 19:21:11.519360    3414 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0b542f40-5f9f-458a-812a-dfbaf44b09a9-tmp\") pod \"storage-provisioner\" (UID: \"0b542f40-5f9f-458a-812a-dfbaf44b09a9\") " pod="kube-system/storage-provisioner"
	Apr 01 19:21:11 kubernetes-upgrade-054413 kubelet[3414]: I0401 19:21:11.629450    3414 scope.go:117] "RemoveContainer" containerID="c02fd41b50262d7f37b0bfc0aa610e0c8b6bd350d86fa733d490833cecdbd258"
	Apr 01 19:21:11 kubernetes-upgrade-054413 kubelet[3414]: I0401 19:21:11.632435    3414 scope.go:117] "RemoveContainer" containerID="95510ef549f6f743571d06e7b7decc2560a22526fa8ac9823b19c1422e4d18c2"
	Apr 01 19:21:11 kubernetes-upgrade-054413 kubelet[3414]: I0401 19:21:11.633017    3414 scope.go:117] "RemoveContainer" containerID="a23575fe9bd2e2304920349a60b2c45d477c2c3454cf9462754e3fa3df56985b"
	Apr 01 19:21:11 kubernetes-upgrade-054413 kubelet[3414]: I0401 19:21:11.633239    3414 scope.go:117] "RemoveContainer" containerID="bd6d21cff6f1f52191a997981952d3cce044bc67e3355e7ca9e556d8dfa6434a"
	Apr 01 19:21:13 kubernetes-upgrade-054413 kubelet[3414]: I0401 19:21:13.770146    3414 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [23cbb0cb5e78411b8c65a7d6fd71f5dea9cdabe3b4d4ff747f5fde4927132465] <==
	I0401 19:21:12.088294       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0401 19:21:12.111196       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0401 19:21:12.111278       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0401 19:21:12.164588       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0401 19:21:12.166892       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-054413_c6f7ede4-0c85-4f3f-8076-df64569763d6!
	I0401 19:21:12.167830       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"104ee3f9-4515-4663-9fc4-37d138c16223", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-054413_c6f7ede4-0c85-4f3f-8076-df64569763d6 became leader
	I0401 19:21:12.268855       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-054413_c6f7ede4-0c85-4f3f-8076-df64569763d6!
	
	
	==> storage-provisioner [95510ef549f6f743571d06e7b7decc2560a22526fa8ac9823b19c1422e4d18c2] <==
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 19:21:14.665065   63989 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18233-10493/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-054413 -n kubernetes-upgrade-054413
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-054413 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-054413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-054413
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-054413: (1.146777641s)
--- FAIL: TestKubernetesUpgrade (452.11s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (65.3s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-208693 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-208693 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m0.754214652s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-208693] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-208693" primary control-plane node in "pause-208693" cluster
	* Updating the running kvm2 "pause-208693" VM ...
	* Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-208693" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 19:17:06.559176   55506 out.go:291] Setting OutFile to fd 1 ...
	I0401 19:17:06.559696   55506 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:17:06.559716   55506 out.go:304] Setting ErrFile to fd 2...
	I0401 19:17:06.559725   55506 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:17:06.560174   55506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 19:17:06.561139   55506 out.go:298] Setting JSON to false
	I0401 19:17:06.562213   55506 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7179,"bootTime":1711991848,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:17:06.562280   55506 start.go:139] virtualization: kvm guest
	I0401 19:17:06.564131   55506 out.go:177] * [pause-208693] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 19:17:06.565567   55506 notify.go:220] Checking for updates...
	I0401 19:17:06.565578   55506 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 19:17:06.567067   55506 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:17:06.568498   55506 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:17:06.569843   55506 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:17:06.571020   55506 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 19:17:06.572203   55506 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 19:17:06.573864   55506 config.go:182] Loaded profile config "pause-208693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:17:06.574465   55506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:17:06.574509   55506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:17:06.589665   55506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44483
	I0401 19:17:06.590127   55506 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:17:06.590641   55506 main.go:141] libmachine: Using API Version  1
	I0401 19:17:06.590661   55506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:17:06.591067   55506 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:17:06.591266   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:06.591497   55506 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 19:17:06.591772   55506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:17:06.591803   55506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:17:06.606712   55506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41555
	I0401 19:17:06.607106   55506 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:17:06.607724   55506 main.go:141] libmachine: Using API Version  1
	I0401 19:17:06.607751   55506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:17:06.608126   55506 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:17:06.608338   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:06.646606   55506 out.go:177] * Using the kvm2 driver based on existing profile
	I0401 19:17:06.647900   55506 start.go:297] selected driver: kvm2
	I0401 19:17:06.647913   55506 start.go:901] validating driver "kvm2" against &{Name:pause-208693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:pause-208693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:17:06.648030   55506 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 19:17:06.648418   55506 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:17:06.648520   55506 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 19:17:06.666623   55506 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 19:17:06.667314   55506 cni.go:84] Creating CNI manager for ""
	I0401 19:17:06.667339   55506 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:17:06.667397   55506 start.go:340] cluster config:
	{Name:pause-208693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:pause-208693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:17:06.667561   55506 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:17:06.669365   55506 out.go:177] * Starting "pause-208693" primary control-plane node in "pause-208693" cluster
	I0401 19:17:06.670568   55506 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 19:17:06.670599   55506 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0401 19:17:06.670633   55506 cache.go:56] Caching tarball of preloaded images
	I0401 19:17:06.670715   55506 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 19:17:06.670725   55506 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0401 19:17:06.670830   55506 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693/config.json ...
	I0401 19:17:06.671016   55506 start.go:360] acquireMachinesLock for pause-208693: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 19:17:07.750918   55506 start.go:364] duration metric: took 1.079863315s to acquireMachinesLock for "pause-208693"
	I0401 19:17:07.750971   55506 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:17:07.750985   55506 fix.go:54] fixHost starting: 
	I0401 19:17:07.751434   55506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:17:07.751475   55506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:17:07.768026   55506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41597
	I0401 19:17:07.768442   55506 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:17:07.768953   55506 main.go:141] libmachine: Using API Version  1
	I0401 19:17:07.768978   55506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:17:07.769299   55506 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:17:07.769508   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:07.769716   55506 main.go:141] libmachine: (pause-208693) Calling .GetState
	I0401 19:17:07.771318   55506 fix.go:112] recreateIfNeeded on pause-208693: state=Running err=<nil>
	W0401 19:17:07.771337   55506 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:17:07.773627   55506 out.go:177] * Updating the running kvm2 "pause-208693" VM ...
	I0401 19:17:07.775392   55506 machine.go:94] provisionDockerMachine start ...
	I0401 19:17:07.775417   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:07.775610   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:07.778194   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:07.778628   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:07.778664   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:07.778819   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:07.778996   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:07.779176   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:07.779307   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:07.779470   55506 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:07.779699   55506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0401 19:17:07.779718   55506 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:17:07.891921   55506 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-208693
	
	I0401 19:17:07.891946   55506 main.go:141] libmachine: (pause-208693) Calling .GetMachineName
	I0401 19:17:07.892212   55506 buildroot.go:166] provisioning hostname "pause-208693"
	I0401 19:17:07.892236   55506 main.go:141] libmachine: (pause-208693) Calling .GetMachineName
	I0401 19:17:07.892415   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:07.895162   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:07.895581   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:07.895604   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:07.895779   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:07.895965   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:07.896115   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:07.896268   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:07.896424   55506 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:07.896615   55506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0401 19:17:07.896632   55506 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-208693 && echo "pause-208693" | sudo tee /etc/hostname
	I0401 19:17:08.023301   55506 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-208693
	
	I0401 19:17:08.023334   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:08.026521   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.026994   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:08.027030   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.027171   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:08.027344   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:08.027519   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:08.027730   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:08.027938   55506 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:08.028129   55506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0401 19:17:08.028153   55506 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-208693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-208693/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-208693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:17:08.143107   55506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:17:08.143142   55506 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:17:08.143185   55506 buildroot.go:174] setting up certificates
	I0401 19:17:08.143222   55506 provision.go:84] configureAuth start
	I0401 19:17:08.143241   55506 main.go:141] libmachine: (pause-208693) Calling .GetMachineName
	I0401 19:17:08.143520   55506 main.go:141] libmachine: (pause-208693) Calling .GetIP
	I0401 19:17:08.146422   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.146745   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:08.146783   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.146884   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:08.149275   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.149679   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:08.149714   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.149799   55506 provision.go:143] copyHostCerts
	I0401 19:17:08.149849   55506 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:17:08.149858   55506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:17:08.149911   55506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:17:08.149990   55506 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:17:08.149999   55506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:17:08.150017   55506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:17:08.150061   55506 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:17:08.150069   55506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:17:08.150084   55506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:17:08.150125   55506 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.pause-208693 san=[127.0.0.1 192.168.39.250 localhost minikube pause-208693]
	I0401 19:17:08.216818   55506 provision.go:177] copyRemoteCerts
	I0401 19:17:08.216865   55506 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:17:08.216887   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:08.219822   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.220083   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:08.220111   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.220313   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:08.220515   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:08.220676   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:08.220787   55506 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/pause-208693/id_rsa Username:docker}
	I0401 19:17:08.312676   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:17:08.349102   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0401 19:17:08.382636   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 19:17:08.412753   55506 provision.go:87] duration metric: took 269.516315ms to configureAuth
	I0401 19:17:08.412778   55506 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:17:08.412973   55506 config.go:182] Loaded profile config "pause-208693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:17:08.413039   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:08.415885   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.416289   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:08.416325   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.416568   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:08.416749   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:08.416918   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:08.417071   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:08.417255   55506 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:08.417432   55506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0401 19:17:08.417455   55506 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:17:14.011593   55506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:17:14.011620   55506 machine.go:97] duration metric: took 6.236210277s to provisionDockerMachine
	I0401 19:17:14.011635   55506 start.go:293] postStartSetup for "pause-208693" (driver="kvm2")
	I0401 19:17:14.011647   55506 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:17:14.011667   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:14.012056   55506 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:17:14.012091   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:14.015013   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.015398   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:14.015426   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.015761   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:14.015941   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:14.016116   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:14.016263   55506 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/pause-208693/id_rsa Username:docker}
	I0401 19:17:14.104053   55506 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:17:14.110496   55506 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:17:14.110522   55506 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:17:14.110577   55506 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:17:14.110670   55506 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:17:14.110781   55506 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:17:14.124676   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:17:14.152875   55506 start.go:296] duration metric: took 141.227889ms for postStartSetup
	I0401 19:17:14.152908   55506 fix.go:56] duration metric: took 6.401924265s for fixHost
	I0401 19:17:14.152932   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:14.156060   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.156464   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:14.156489   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.156750   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:14.156939   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:14.157124   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:14.157360   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:14.157553   55506 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:14.157770   55506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0401 19:17:14.157786   55506 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 19:17:14.271277   55506 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999034.266271505
	
	I0401 19:17:14.271297   55506 fix.go:216] guest clock: 1711999034.266271505
	I0401 19:17:14.271306   55506 fix.go:229] Guest: 2024-04-01 19:17:14.266271505 +0000 UTC Remote: 2024-04-01 19:17:14.152913142 +0000 UTC m=+7.640372274 (delta=113.358363ms)
	I0401 19:17:14.271330   55506 fix.go:200] guest clock delta is within tolerance: 113.358363ms
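fix.go runs `date +%s.%N` on the guest and compares the result with the host-side timestamp; the ~113ms delta above is within tolerance. A small Go sketch of that comparison, with the function name and structure as illustrative stand-ins for what fix.go does internally:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns how far
    // the guest clock is from the host reference time. Float parsing is fine
    // here because only millisecond-level skew matters.
    func clockDelta(guestDateOutput string, hostRef time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestDateOutput), 64)
        if err != nil {
            return 0, fmt.Errorf("parse guest clock: %w", err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(hostRef), nil
    }

    func main() {
        // Values taken from the log above: guest clock vs. remote timestamp.
        host := time.Date(2024, 4, 1, 19, 17, 14, 152913142, time.UTC)
        d, _ := clockDelta("1711999034.266271505", host)
        fmt.Println("guest clock delta:", d) // roughly 113ms, within tolerance
    }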
	I0401 19:17:14.271336   55506 start.go:83] releasing machines lock for "pause-208693", held for 6.520387462s
	I0401 19:17:14.271352   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:14.271584   55506 main.go:141] libmachine: (pause-208693) Calling .GetIP
	I0401 19:17:14.274284   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.274692   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:14.274734   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.274873   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:14.275375   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:14.275542   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:14.275629   55506 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:17:14.275665   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:14.275781   55506 ssh_runner.go:195] Run: cat /version.json
	I0401 19:17:14.275813   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:14.278769   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.278948   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.279327   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:14.279360   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.279397   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:14.279413   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.279584   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:14.279788   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:14.279796   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:14.280018   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:14.280024   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:14.280273   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:14.280294   55506 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/pause-208693/id_rsa Username:docker}
	I0401 19:17:14.280375   55506 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/pause-208693/id_rsa Username:docker}
	I0401 19:17:14.398299   55506 ssh_runner.go:195] Run: systemctl --version
	I0401 19:17:14.407522   55506 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:17:14.589987   55506 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:17:14.597998   55506 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:17:14.598056   55506 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:17:14.610680   55506 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
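The find/mv step above renames any leftover bridge or podman CNI configs to *.mk_disabled so they cannot conflict with the bridge config minikube generates later; here nothing matched. A rough Go equivalent of that rename pass (illustrative only, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNIConfs renames bridge/podman CNI configs under dir to
    // "<name>.mk_disabled", the same effect as the logged find/mv pipeline.
    func disableBridgeCNIConfs(dir string) error {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return err
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
                continue
            }
            src := filepath.Join(dir, name)
            if err := os.Rename(src, src+".mk_disabled"); err != nil {
                return err
            }
            fmt.Println("disabled", src)
        }
        return nil
    }

    func main() {
        if err := disableBridgeCNIConfs("/etc/cni/net.d"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }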
	I0401 19:17:14.610700   55506 start.go:494] detecting cgroup driver to use...
	I0401 19:17:14.610753   55506 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:17:14.633419   55506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:17:14.649012   55506 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:17:14.649069   55506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:17:14.665238   55506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:17:14.681036   55506 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:17:14.839573   55506 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:17:15.029768   55506 docker.go:233] disabling docker service ...
	I0401 19:17:15.029834   55506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:17:15.055505   55506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:17:15.072613   55506 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:17:15.249142   55506 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:17:15.437700   55506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:17:15.458179   55506 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:17:15.485128   55506 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:17:15.485204   55506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:15.499058   55506 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:17:15.499145   55506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:15.512574   55506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:15.525783   55506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:15.538961   55506 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:17:15.552336   55506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:15.565451   55506 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:15.578535   55506 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
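The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned, the cgroup manager is switched to cgroupfs, and conmon is moved into the pod cgroup. A compact Go sketch driving the same cgroup-related edits, with the sed expressions copied from the log and local os/exec standing in for the SSH runner:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        // The same in-place edits the log shows for the pause image and
        // cgroup handling, applied in order.
        cmds := []string{
            `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' ` + conf,
            `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf,
            `sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
            `sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
        }
        for _, c := range cmds {
            if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
                log.Fatalf("%s: %v: %s", c, err, out)
            }
        }
    }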
	I0401 19:17:15.595981   55506 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:17:15.610546   55506 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:17:15.623143   55506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:17:15.776688   55506 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:17:21.084965   55506 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.308237386s)
	I0401 19:17:21.085002   55506 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:17:21.085067   55506 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:17:21.093408   55506 start.go:562] Will wait 60s for crictl version
	I0401 19:17:21.093481   55506 ssh_runner.go:195] Run: which crictl
	I0401 19:17:21.098328   55506 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:17:21.143587   55506 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:17:21.143673   55506 ssh_runner.go:195] Run: crio --version
	I0401 19:17:21.178725   55506 ssh_runner.go:195] Run: crio --version
	I0401 19:17:21.314941   55506 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 19:17:21.316408   55506 main.go:141] libmachine: (pause-208693) Calling .GetIP
	I0401 19:17:21.319848   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:21.320268   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:21.320288   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:21.320664   55506 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 19:17:21.327439   55506 kubeadm.go:877] updating cluster {Name:pause-208693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:pause-208693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:17:21.327563   55506 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 19:17:21.327623   55506 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:17:21.391343   55506 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:17:21.391373   55506 crio.go:433] Images already preloaded, skipping extraction
	I0401 19:17:21.391425   55506 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:17:21.430944   55506 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:17:21.430969   55506 cache_images.go:84] Images are preloaded, skipping loading
	I0401 19:17:21.430978   55506 kubeadm.go:928] updating node { 192.168.39.250 8443 v1.29.3 crio true true} ...
	I0401 19:17:21.431097   55506 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-208693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:pause-208693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:17:21.431183   55506 ssh_runner.go:195] Run: crio config
	I0401 19:17:21.490699   55506 cni.go:84] Creating CNI manager for ""
	I0401 19:17:21.490720   55506 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:17:21.490733   55506 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:17:21.490752   55506 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.250 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-208693 NodeName:pause-208693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:17:21.490874   55506 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-208693"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:17:21.490940   55506 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 19:17:21.505292   55506 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:17:21.505353   55506 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:17:21.518774   55506 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0401 19:17:21.539416   55506 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:17:21.559611   55506 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0401 19:17:21.583677   55506 ssh_runner.go:195] Run: grep 192.168.39.250	control-plane.minikube.internal$ /etc/hosts
	I0401 19:17:21.589587   55506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:17:21.742565   55506 ssh_runner.go:195] Run: sudo systemctl start kubelet
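Before starting the kubelet, the log greps /etc/hosts for a `192.168.39.250	control-plane.minikube.internal` entry. A sketch of checking for that mapping and appending it when missing; the log only shows the check, so the append-on-miss behavior here is an assumption for illustration:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // ensureHostsEntry appends "<ip>\t<host>" to the hosts file when no line
    // already maps host to ip, mirroring the grep the log performs before
    // starting the kubelet.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^` + regexp.QuoteMeta(ip) + `\s+` + regexp.QuoteMeta(host) + `$`)
        if re.Match(data) {
            return nil // entry already present
        }
        f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0o644)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = fmt.Fprintf(f, "%s\t%s\n", ip, host)
        return err
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.39.250", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }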
	I0401 19:17:21.760339   55506 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693 for IP: 192.168.39.250
	I0401 19:17:21.760372   55506 certs.go:194] generating shared ca certs ...
	I0401 19:17:21.760393   55506 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:17:21.760561   55506 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:17:21.760614   55506 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:17:21.760629   55506 certs.go:256] generating profile certs ...
	I0401 19:17:21.760753   55506 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693/client.key
	I0401 19:17:21.760853   55506 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693/apiserver.key.640a01f7
	I0401 19:17:21.760894   55506 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693/proxy-client.key
	I0401 19:17:21.760997   55506 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:17:21.761029   55506 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:17:21.761038   55506 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:17:21.761067   55506 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:17:21.761094   55506 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:17:21.761118   55506 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:17:21.761152   55506 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:17:21.762220   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:17:21.789295   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:17:21.816554   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:17:22.006815   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:17:22.163516   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0401 19:17:22.366533   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 19:17:22.619189   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:17:22.751245   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 19:17:23.002508   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:17:23.074573   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:17:23.110680   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:17:23.149436   55506 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:17:23.177136   55506 ssh_runner.go:195] Run: openssl version
	I0401 19:17:23.188108   55506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:17:23.205226   55506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:17:23.213298   55506 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:17:23.213366   55506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:17:23.224544   55506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:17:23.240162   55506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:17:23.257743   55506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:17:23.263476   55506 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:17:23.263542   55506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:17:23.272318   55506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:17:23.286967   55506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:17:23.299084   55506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:17:23.304168   55506 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:17:23.304227   55506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:17:23.310638   55506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
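Each CA above is installed by copying it into /usr/share/ca-certificates and then linking it as /etc/ssl/certs/<subject-hash>.0, where the hash comes from `openssl x509 -hash -noout`. A small Go sketch of that link step, with paths taken from the log and intended to run as root on the guest:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCertByHash asks openssl for the certificate's subject hash and
    // installs a "<hash>.0" symlink in /etc/ssl/certs so OpenSSL-based
    // clients can find the CA, as the logged `ln -fs` commands do.
    func linkCertByHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hash %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // refresh an existing link, like `ln -fs`
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }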
	I0401 19:17:23.321219   55506 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:17:23.326672   55506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:17:23.336598   55506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:17:23.343670   55506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:17:23.354373   55506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:17:23.363233   55506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:17:23.376742   55506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
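The `-checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours. The same check expressed in Go with crypto/x509, as a hedged stand-in for shelling out to openssl:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within
    // d, the question `openssl x509 -checkend 86400` answers for 24 hours.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }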
	I0401 19:17:23.390188   55506 kubeadm.go:391] StartCluster: {Name:pause-208693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:pause-208693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:17:23.390370   55506 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:17:23.390470   55506 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:17:23.488359   55506 cri.go:89] found id: "51f0eff30b7c7d78a434ac0cebb793087012ebc1a4e3af4377acb07b114c7b1b"
	I0401 19:17:23.488385   55506 cri.go:89] found id: "ffbcd93cd4dc6dd7bdba8817fe9464043c1441e48a2f0a339d8e2f90465c23b2"
	I0401 19:17:23.488390   55506 cri.go:89] found id: "2d9857c1d11f7699cdda344ccc35292880e1e966398923e7c7c8a221bb17fbb4"
	I0401 19:17:23.488395   55506 cri.go:89] found id: "eb0be624c77f6b7779d07398d8ac81b11ff2d1e2491332385f9bc7abd08da4d1"
	I0401 19:17:23.488399   55506 cri.go:89] found id: "90827beb7d452b745ca6b9be1e1cbf187b22f2a83733fa1cf32f65dd51871a94"
	I0401 19:17:23.488403   55506 cri.go:89] found id: "a191a299d42de032a4e1b058d778aeb8a768699852f90479ed27525750c39dcb"
	I0401 19:17:23.488407   55506 cri.go:89] found id: "13440e4b058772c288c91430cc8b93d4ee93f6c2dc002c58c42364841c37537c"
	I0401 19:17:23.488411   55506 cri.go:89] found id: "f4e035677728cfc3e8fdacccbe9c2074622432687c5ffea26e9297dab2bc7e5f"
	I0401 19:17:23.488415   55506 cri.go:89] found id: "4c96330c5da0385157221a32550935b344f8d450869645cdb302bf6d7d24d50a"
	I0401 19:17:23.488422   55506 cri.go:89] found id: "60bec38260e22141e8ef66a6e954a86d22216f47a8023678c8c9ec31a28ed3cd"
	I0401 19:17:23.488426   55506 cri.go:89] found id: "9ae44e1e9ca77a159598d47d87a284b50262d7feed6af8939a521854ddf86ff4"
	I0401 19:17:23.488448   55506 cri.go:89] found id: "44a8c17316feb68f3c977baa3f7431c716167f78518eb63e20017a200ca17ad4"
	I0401 19:17:23.488457   55506 cri.go:89] found id: ""
	I0401 19:17:23.488513   55506 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-208693 -n pause-208693
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-208693 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-208693 logs -n 25: (1.516311787s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p cilium-408543 sudo crio            | cilium-408543             | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:12 UTC |                     |
	|         | config                                |                           |         |                |                     |                     |
	| delete  | -p cilium-408543                      | cilium-408543             | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:12 UTC | 01 Apr 24 19:12 UTC |
	| start   | -p running-upgrade-349166             | minikube                  | jenkins | v1.26.0        | 01 Apr 24 19:12 UTC | 01 Apr 24 19:14 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |                |                     |                     |
	|         |  --container-runtime=crio             |                           |         |                |                     |                     |
	| ssh     | -p NoKubernetes-249249 sudo           | NoKubernetes-249249       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:12 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |                |                     |                     |
	|         | service kubelet                       |                           |         |                |                     |                     |
	| stop    | -p NoKubernetes-249249                | NoKubernetes-249249       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:12 UTC | 01 Apr 24 19:13 UTC |
	| start   | -p NoKubernetes-249249                | NoKubernetes-249249       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:13 UTC | 01 Apr 24 19:13 UTC |
	|         | --driver=kvm2                         |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| ssh     | cert-options-444257 ssh               | cert-options-444257       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:13 UTC | 01 Apr 24 19:13 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |                |                     |                     |
	| ssh     | -p cert-options-444257 -- sudo        | cert-options-444257       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:13 UTC | 01 Apr 24 19:13 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |                |                     |                     |
	| delete  | -p cert-options-444257                | cert-options-444257       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:13 UTC | 01 Apr 24 19:13 UTC |
	| start   | -p kubernetes-upgrade-054413          | kubernetes-upgrade-054413 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:13 UTC |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |                |                     |                     |
	|         | --alsologtostderr                     |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| ssh     | -p NoKubernetes-249249 sudo           | NoKubernetes-249249       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:13 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |                |                     |                     |
	|         | service kubelet                       |                           |         |                |                     |                     |
	| delete  | -p NoKubernetes-249249                | NoKubernetes-249249       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:13 UTC | 01 Apr 24 19:13 UTC |
	| start   | -p stopped-upgrade-246129             | minikube                  | jenkins | v1.26.0        | 01 Apr 24 19:13 UTC | 01 Apr 24 19:15 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |                |                     |                     |
	|         |  --container-runtime=crio             |                           |         |                |                     |                     |
	| start   | -p cert-expiration-385547             | cert-expiration-385547    | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:13 UTC | 01 Apr 24 19:15 UTC |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |                |                     |                     |
	|         | --driver=kvm2                         |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| start   | -p running-upgrade-349166             | running-upgrade-349166    | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:14 UTC | 01 Apr 24 19:16 UTC |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --alsologtostderr                     |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| delete  | -p cert-expiration-385547             | cert-expiration-385547    | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:15 UTC | 01 Apr 24 19:15 UTC |
	| start   | -p pause-208693 --memory=2048         | pause-208693              | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:15 UTC | 01 Apr 24 19:17 UTC |
	|         | --install-addons=false                |                           |         |                |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| stop    | stopped-upgrade-246129 stop           | minikube                  | jenkins | v1.26.0        | 01 Apr 24 19:15 UTC | 01 Apr 24 19:15 UTC |
	| start   | -p stopped-upgrade-246129             | stopped-upgrade-246129    | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:15 UTC | 01 Apr 24 19:16 UTC |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --alsologtostderr                     |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| delete  | -p running-upgrade-349166             | running-upgrade-349166    | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:16 UTC | 01 Apr 24 19:16 UTC |
	| start   | -p auto-408543 --memory=3072          | auto-408543               | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:16 UTC | 01 Apr 24 19:17 UTC |
	|         | --alsologtostderr --wait=true         |                           |         |                |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |                |                     |                     |
	|         | --driver=kvm2                         |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| delete  | -p stopped-upgrade-246129             | stopped-upgrade-246129    | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:16 UTC | 01 Apr 24 19:16 UTC |
	| start   | -p kindnet-408543                     | kindnet-408543            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:16 UTC |                     |
	|         | --memory=3072                         |                           |         |                |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |                |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |                |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| start   | -p pause-208693                       | pause-208693              | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:17 UTC | 01 Apr 24 19:18 UTC |
	|         | --alsologtostderr                     |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| ssh     | -p auto-408543 pgrep -a               | auto-408543               | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:17 UTC | 01 Apr 24 19:17 UTC |
	|         | kubelet                               |                           |         |                |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 19:17:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 19:17:06.559176   55506 out.go:291] Setting OutFile to fd 1 ...
	I0401 19:17:06.559696   55506 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:17:06.559716   55506 out.go:304] Setting ErrFile to fd 2...
	I0401 19:17:06.559725   55506 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:17:06.560174   55506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 19:17:06.561139   55506 out.go:298] Setting JSON to false
	I0401 19:17:06.562213   55506 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7179,"bootTime":1711991848,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:17:06.562280   55506 start.go:139] virtualization: kvm guest
	I0401 19:17:06.564131   55506 out.go:177] * [pause-208693] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 19:17:06.565567   55506 notify.go:220] Checking for updates...
	I0401 19:17:06.565578   55506 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 19:17:06.567067   55506 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:17:06.568498   55506 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:17:06.569843   55506 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:17:06.571020   55506 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 19:17:06.572203   55506 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 19:17:06.573864   55506 config.go:182] Loaded profile config "pause-208693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:17:06.574465   55506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:17:06.574509   55506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:17:06.589665   55506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44483
	I0401 19:17:06.590127   55506 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:17:06.590641   55506 main.go:141] libmachine: Using API Version  1
	I0401 19:17:06.590661   55506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:17:06.591067   55506 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:17:06.591266   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:06.591497   55506 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 19:17:06.591772   55506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:17:06.591803   55506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:17:06.606712   55506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41555
	I0401 19:17:06.607106   55506 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:17:06.607724   55506 main.go:141] libmachine: Using API Version  1
	I0401 19:17:06.607751   55506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:17:06.608126   55506 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:17:06.608338   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:06.646606   55506 out.go:177] * Using the kvm2 driver based on existing profile
	I0401 19:17:06.647900   55506 start.go:297] selected driver: kvm2
	I0401 19:17:06.647913   55506 start.go:901] validating driver "kvm2" against &{Name:pause-208693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.29.3 ClusterName:pause-208693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:17:06.648030   55506 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 19:17:06.648418   55506 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:17:06.648520   55506 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 19:17:06.666623   55506 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 19:17:06.667314   55506 cni.go:84] Creating CNI manager for ""
	I0401 19:17:06.667339   55506 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:17:06.667397   55506 start.go:340] cluster config:
	{Name:pause-208693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:pause-208693 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:17:06.667561   55506 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:17:06.669365   55506 out.go:177] * Starting "pause-208693" primary control-plane node in "pause-208693" cluster
	I0401 19:17:06.670568   55506 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 19:17:06.670599   55506 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0401 19:17:06.670633   55506 cache.go:56] Caching tarball of preloaded images
	I0401 19:17:06.670715   55506 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 19:17:06.670725   55506 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0401 19:17:06.670830   55506 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693/config.json ...
	I0401 19:17:06.671016   55506 start.go:360] acquireMachinesLock for pause-208693: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 19:17:07.750918   55506 start.go:364] duration metric: took 1.079863315s to acquireMachinesLock for "pause-208693"
	I0401 19:17:07.750971   55506 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:17:07.750985   55506 fix.go:54] fixHost starting: 
	I0401 19:17:07.751434   55506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:17:07.751475   55506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:17:07.768026   55506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41597
	I0401 19:17:07.768442   55506 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:17:07.768953   55506 main.go:141] libmachine: Using API Version  1
	I0401 19:17:07.768978   55506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:17:07.769299   55506 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:17:07.769508   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:07.769716   55506 main.go:141] libmachine: (pause-208693) Calling .GetState
	I0401 19:17:07.771318   55506 fix.go:112] recreateIfNeeded on pause-208693: state=Running err=<nil>
	W0401 19:17:07.771337   55506 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:17:07.773627   55506 out.go:177] * Updating the running kvm2 "pause-208693" VM ...
	I0401 19:17:04.788178   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:05.287583   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:05.787429   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:06.288310   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:06.788078   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:07.287554   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:07.787437   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:08.288358   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:08.788348   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:09.288213   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
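	The burst of identical "kubectl get sa default" invocations above (and the second burst at 19:17:09-19:17:16 further down) is minikube polling roughly twice a second until the cluster's default service account exists, the wait later summarised as elevateKubeSystemPrivileges. A minimal standalone sketch of that loop, with the paths and the 0.5s interval taken from the log and an illustrative 240-attempt cap that is an assumption, not minikube's actual limit:

	    # Sketch only: poll for the default service account as the log above does.
	    # The 240-attempt (~2 minute) cap is an assumption.
	    KUBECTL=/var/lib/minikube/binaries/v1.29.3/kubectl
	    KUBECONFIG=/var/lib/minikube/kubeconfig
	    for _ in $(seq 1 240); do
	        if sudo "$KUBECTL" get sa default --kubeconfig="$KUBECONFIG" >/dev/null 2>&1; then
	            echo "default service account is present"
	            exit 0
	        fi
	        sleep 0.5
	    done
	    echo "timed out waiting for the default service account" >&2
	    exit 1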
	I0401 19:17:06.485407   55206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:17:06.485426   55206 main.go:141] libmachine: Detecting the provisioner...
	I0401 19:17:06.485434   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHHostname
	I0401 19:17:06.488587   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:06.488950   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:06.488979   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:06.489188   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHPort
	I0401 19:17:06.489364   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:06.489550   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:06.489710   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHUsername
	I0401 19:17:06.489884   55206 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:06.490047   55206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.92 22 <nil> <nil>}
	I0401 19:17:06.490060   55206 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0401 19:17:06.607155   55206 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0401 19:17:06.607233   55206 main.go:141] libmachine: found compatible host: buildroot
	I0401 19:17:06.607246   55206 main.go:141] libmachine: Provisioning with buildroot...
	I0401 19:17:06.607256   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetMachineName
	I0401 19:17:06.607447   55206 buildroot.go:166] provisioning hostname "kindnet-408543"
	I0401 19:17:06.607468   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetMachineName
	I0401 19:17:06.607632   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHHostname
	I0401 19:17:06.610975   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:06.611366   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:06.611391   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:06.611577   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHPort
	I0401 19:17:06.611749   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:06.611915   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:06.612093   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHUsername
	I0401 19:17:06.612285   55206 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:06.612506   55206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.92 22 <nil> <nil>}
	I0401 19:17:06.612534   55206 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-408543 && echo "kindnet-408543" | sudo tee /etc/hostname
	I0401 19:17:06.742155   55206 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-408543
	
	I0401 19:17:06.742182   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHHostname
	I0401 19:17:06.745139   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:06.745460   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:06.745487   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:06.745698   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHPort
	I0401 19:17:06.745915   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:06.746065   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:06.746208   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHUsername
	I0401 19:17:06.746378   55206 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:06.746539   55206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.92 22 <nil> <nil>}
	I0401 19:17:06.746555   55206 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-408543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-408543/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-408543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:17:06.871320   55206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:17:06.871410   55206 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:17:06.871464   55206 buildroot.go:174] setting up certificates
	I0401 19:17:06.871479   55206 provision.go:84] configureAuth start
	I0401 19:17:06.871492   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetMachineName
	I0401 19:17:06.871796   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetIP
	I0401 19:17:06.874935   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:06.875382   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:06.875413   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:06.875607   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHHostname
	I0401 19:17:06.878191   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:06.878544   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:06.878592   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:06.878754   55206 provision.go:143] copyHostCerts
	I0401 19:17:06.878820   55206 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:17:06.878833   55206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:17:06.878900   55206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:17:06.879035   55206 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:17:06.879048   55206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:17:06.879089   55206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:17:06.879180   55206 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:17:06.879195   55206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:17:06.879227   55206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:17:06.879295   55206 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.kindnet-408543 san=[127.0.0.1 192.168.72.92 kindnet-408543 localhost minikube]
	I0401 19:17:07.029206   55206 provision.go:177] copyRemoteCerts
	I0401 19:17:07.029257   55206 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:17:07.029281   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHHostname
	I0401 19:17:07.031818   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.032147   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:07.032183   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.032376   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHPort
	I0401 19:17:07.032599   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:07.032742   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHUsername
	I0401 19:17:07.032872   55206 sshutil.go:53] new ssh client: &{IP:192.168.72.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/kindnet-408543/id_rsa Username:docker}
	I0401 19:17:07.121994   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 19:17:07.150612   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:17:07.177014   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0401 19:17:07.203202   55206 provision.go:87] duration metric: took 331.709585ms to configureAuth
	I0401 19:17:07.203233   55206 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:17:07.203429   55206 config.go:182] Loaded profile config "kindnet-408543": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:17:07.203503   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHHostname
	I0401 19:17:07.206180   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.206594   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:07.206632   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.206870   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHPort
	I0401 19:17:07.207072   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:07.207275   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:07.207456   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHUsername
	I0401 19:17:07.207663   55206 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:07.207820   55206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.92 22 <nil> <nil>}
	I0401 19:17:07.207835   55206 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:17:07.496135   55206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:17:07.496162   55206 main.go:141] libmachine: Checking connection to Docker...
	I0401 19:17:07.496170   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetURL
	I0401 19:17:07.497546   55206 main.go:141] libmachine: (kindnet-408543) DBG | Using libvirt version 6000000
	I0401 19:17:07.499988   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.500368   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:07.500387   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.500547   55206 main.go:141] libmachine: Docker is up and running!
	I0401 19:17:07.500565   55206 main.go:141] libmachine: Reticulating splines...
	I0401 19:17:07.500574   55206 client.go:171] duration metric: took 23.890274845s to LocalClient.Create
	I0401 19:17:07.500597   55206 start.go:167] duration metric: took 23.890367542s to libmachine.API.Create "kindnet-408543"
	I0401 19:17:07.500609   55206 start.go:293] postStartSetup for "kindnet-408543" (driver="kvm2")
	I0401 19:17:07.500622   55206 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:17:07.500639   55206 main.go:141] libmachine: (kindnet-408543) Calling .DriverName
	I0401 19:17:07.500877   55206 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:17:07.500902   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHHostname
	I0401 19:17:07.503198   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.503543   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:07.503572   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.503682   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHPort
	I0401 19:17:07.503868   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:07.504059   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHUsername
	I0401 19:17:07.504214   55206 sshutil.go:53] new ssh client: &{IP:192.168.72.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/kindnet-408543/id_rsa Username:docker}
	I0401 19:17:07.589123   55206 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:17:07.594165   55206 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:17:07.594186   55206 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:17:07.594242   55206 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:17:07.594326   55206 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:17:07.594421   55206 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:17:07.605127   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:17:07.632251   55206 start.go:296] duration metric: took 131.62714ms for postStartSetup
	I0401 19:17:07.632309   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetConfigRaw
	I0401 19:17:07.632997   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetIP
	I0401 19:17:07.635648   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.636018   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:07.636049   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.636252   55206 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/config.json ...
	I0401 19:17:07.636473   55206 start.go:128] duration metric: took 24.048696741s to createHost
	I0401 19:17:07.636497   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHHostname
	I0401 19:17:07.638784   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.639134   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:07.639173   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.639294   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHPort
	I0401 19:17:07.639493   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:07.639637   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:07.639786   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHUsername
	I0401 19:17:07.639933   55206 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:07.640087   55206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.92 22 <nil> <nil>}
	I0401 19:17:07.640096   55206 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 19:17:07.750728   55206 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999027.726162054
	
	I0401 19:17:07.750746   55206 fix.go:216] guest clock: 1711999027.726162054
	I0401 19:17:07.750753   55206 fix.go:229] Guest: 2024-04-01 19:17:07.726162054 +0000 UTC Remote: 2024-04-01 19:17:07.636486289 +0000 UTC m=+26.248393320 (delta=89.675765ms)
	I0401 19:17:07.750808   55206 fix.go:200] guest clock delta is within tolerance: 89.675765ms
	I0401 19:17:07.750818   55206 start.go:83] releasing machines lock for "kindnet-408543", held for 24.163227136s
	I0401 19:17:07.750845   55206 main.go:141] libmachine: (kindnet-408543) Calling .DriverName
	I0401 19:17:07.751146   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetIP
	I0401 19:17:07.753917   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.754252   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:07.754296   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.754444   55206 main.go:141] libmachine: (kindnet-408543) Calling .DriverName
	I0401 19:17:07.755059   55206 main.go:141] libmachine: (kindnet-408543) Calling .DriverName
	I0401 19:17:07.755228   55206 main.go:141] libmachine: (kindnet-408543) Calling .DriverName
	I0401 19:17:07.755279   55206 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:17:07.755316   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHHostname
	I0401 19:17:07.755431   55206 ssh_runner.go:195] Run: cat /version.json
	I0401 19:17:07.755454   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHHostname
	I0401 19:17:07.757994   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.758243   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.758373   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:07.758406   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.758510   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHPort
	I0401 19:17:07.758652   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:07.758671   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.758683   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:07.758872   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHPort
	I0401 19:17:07.758887   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHUsername
	I0401 19:17:07.759030   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:07.759048   55206 sshutil.go:53] new ssh client: &{IP:192.168.72.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/kindnet-408543/id_rsa Username:docker}
	I0401 19:17:07.759161   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHUsername
	I0401 19:17:07.759278   55206 sshutil.go:53] new ssh client: &{IP:192.168.72.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/kindnet-408543/id_rsa Username:docker}
	I0401 19:17:07.838972   55206 ssh_runner.go:195] Run: systemctl --version
	I0401 19:17:07.865172   55206 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:17:08.027681   55206 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:17:08.035280   55206 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:17:08.035353   55206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:17:08.054108   55206 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:17:08.054127   55206 start.go:494] detecting cgroup driver to use...
	I0401 19:17:08.054196   55206 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:17:08.073250   55206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:17:08.088640   55206 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:17:08.088712   55206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:17:08.105038   55206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:17:08.118983   55206 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:17:08.249133   55206 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:17:08.423752   55206 docker.go:233] disabling docker service ...
	I0401 19:17:08.423813   55206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:17:08.442559   55206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:17:08.457483   55206 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:17:08.618055   55206 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:17:08.743313   55206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:17:08.759409   55206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:17:08.780620   55206 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:17:08.780675   55206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:08.792894   55206 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:17:08.792973   55206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:08.804316   55206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:08.818260   55206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:08.829578   55206 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:17:08.841347   55206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:08.852604   55206 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:08.874245   55206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:08.886011   55206 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:17:08.896200   55206 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:17:08.896257   55206 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:17:08.911966   55206 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
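	The three commands just above handle a standard bridge-CNI prerequisite: the net.bridge.bridge-nf-call-iptables sysctl cannot be read until the br_netfilter kernel module is loaded, so the failed probe is tolerated ("which might be okay"), the module is loaded, and IPv4 forwarding is switched on. The same check as a standalone sketch, with the error handling being an assumption rather than minikube's actual logic:

	    # Sketch only: bridge-netfilter prerequisites, mirroring the log above.
	    if ! sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	        # The sysctl only exists once the br_netfilter module is loaded.
	        sudo modprobe br_netfilter
	    fi
	    # Pod traffic leaving the node requires IPv4 forwarding.
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'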
	I0401 19:17:08.927243   55206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:17:09.053370   55206 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:17:09.224608   55206 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:17:09.224688   55206 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:17:09.231151   55206 start.go:562] Will wait 60s for crictl version
	I0401 19:17:09.231208   55206 ssh_runner.go:195] Run: which crictl
	I0401 19:17:09.235439   55206 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:17:09.278477   55206 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:17:09.278580   55206 ssh_runner.go:195] Run: crio --version
	I0401 19:17:09.313173   55206 ssh_runner.go:195] Run: crio --version
	I0401 19:17:09.348807   55206 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 19:17:09.350131   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetIP
	I0401 19:17:09.352723   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:09.353046   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:09.353073   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:09.353311   55206 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0401 19:17:09.357952   55206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:17:09.371265   55206 kubeadm.go:877] updating cluster {Name:kindnet-408543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:kindnet-408543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.92 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:17:09.371368   55206 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 19:17:09.371428   55206 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:17:09.411850   55206 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0401 19:17:09.411923   55206 ssh_runner.go:195] Run: which lz4
	I0401 19:17:09.416278   55206 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0401 19:17:09.420866   55206 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:17:09.420889   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0401 19:17:11.148399   55206 crio.go:462] duration metric: took 1.732145245s to copy over tarball
	I0401 19:17:11.148506   55206 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
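	The CRI-O configuration edits logged at 19:17:08.78-08.87 above (pause image, cgroup manager, conmon cgroup, default_sysctls), picked up by the crio restart at 19:17:09.05, amount to a small drop-in file. Pieced together it would look roughly like the fragment below; the [crio.image] and [crio.runtime] section headers are assumed from CRI-O's standard layout, and only the four settings are taken from the logged sed commands:

	    # Sketch only: approximate /etc/crio/crio.conf.d/02-crio.conf content after the edits above.
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.9"

	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]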
	I0401 19:17:07.775392   55506 machine.go:94] provisionDockerMachine start ...
	I0401 19:17:07.775417   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:07.775610   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:07.778194   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:07.778628   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:07.778664   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:07.778819   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:07.778996   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:07.779176   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:07.779307   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:07.779470   55506 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:07.779699   55506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0401 19:17:07.779718   55506 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:17:07.891921   55506 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-208693
	
	I0401 19:17:07.891946   55506 main.go:141] libmachine: (pause-208693) Calling .GetMachineName
	I0401 19:17:07.892212   55506 buildroot.go:166] provisioning hostname "pause-208693"
	I0401 19:17:07.892236   55506 main.go:141] libmachine: (pause-208693) Calling .GetMachineName
	I0401 19:17:07.892415   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:07.895162   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:07.895581   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:07.895604   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:07.895779   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:07.895965   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:07.896115   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:07.896268   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:07.896424   55506 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:07.896615   55506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0401 19:17:07.896632   55506 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-208693 && echo "pause-208693" | sudo tee /etc/hostname
	I0401 19:17:08.023301   55506 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-208693
	
	I0401 19:17:08.023334   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:08.026521   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.026994   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:08.027030   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.027171   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:08.027344   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:08.027519   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:08.027730   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:08.027938   55506 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:08.028129   55506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0401 19:17:08.028153   55506 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-208693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-208693/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-208693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:17:08.143107   55506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:17:08.143142   55506 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:17:08.143185   55506 buildroot.go:174] setting up certificates
	I0401 19:17:08.143222   55506 provision.go:84] configureAuth start
	I0401 19:17:08.143241   55506 main.go:141] libmachine: (pause-208693) Calling .GetMachineName
	I0401 19:17:08.143520   55506 main.go:141] libmachine: (pause-208693) Calling .GetIP
	I0401 19:17:08.146422   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.146745   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:08.146783   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.146884   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:08.149275   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.149679   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:08.149714   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.149799   55506 provision.go:143] copyHostCerts
	I0401 19:17:08.149849   55506 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:17:08.149858   55506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:17:08.149911   55506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:17:08.149990   55506 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:17:08.149999   55506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:17:08.150017   55506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:17:08.150061   55506 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:17:08.150069   55506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:17:08.150084   55506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:17:08.150125   55506 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.pause-208693 san=[127.0.0.1 192.168.39.250 localhost minikube pause-208693]
	I0401 19:17:08.216818   55506 provision.go:177] copyRemoteCerts
	I0401 19:17:08.216865   55506 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:17:08.216887   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:08.219822   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.220083   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:08.220111   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.220313   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:08.220515   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:08.220676   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:08.220787   55506 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/pause-208693/id_rsa Username:docker}
	I0401 19:17:08.312676   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:17:08.349102   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0401 19:17:08.382636   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 19:17:08.412753   55506 provision.go:87] duration metric: took 269.516315ms to configureAuth
	I0401 19:17:08.412778   55506 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:17:08.412973   55506 config.go:182] Loaded profile config "pause-208693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:17:08.413039   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:08.415885   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.416289   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:08.416325   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.416568   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:08.416749   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:08.416918   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:08.417071   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:08.417255   55506 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:08.417432   55506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0401 19:17:08.417455   55506 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:17:09.788039   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:10.287920   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:10.788095   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:11.288111   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:11.788069   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:12.287917   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:12.787487   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:13.288353   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:13.787577   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:14.288183   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:14.788291   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:15.287861   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:15.787559   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:16.035332   54819 kubeadm.go:1107] duration metric: took 12.955081663s to wait for elevateKubeSystemPrivileges
	W0401 19:17:16.035382   54819 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 19:17:16.035393   54819 kubeadm.go:393] duration metric: took 24.17974423s to StartCluster
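	Note: the burst of repeated "kubectl get sa default" calls above is minikube waiting for the cluster's default ServiceAccount to exist before it elevates kube-system privileges. A stand-alone equivalent of that polling loop (the 60s ceiling is illustrative; the binary and kubeconfig paths match the in-VM ones used above):
	    # Poll every 0.5s until the "default" ServiceAccount appears.
	    for _ in $(seq 1 120); do
	      sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1 && break
	      sleep 0.5
	    done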
	I0401 19:17:16.035414   54819 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:17:16.035522   54819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:17:16.036911   54819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:17:16.037211   54819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 19:17:16.037210   54819 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.61.127 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:17:16.038824   54819 out.go:177] * Verifying Kubernetes components...
	I0401 19:17:16.037247   54819 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 19:17:16.037422   54819 config.go:182] Loaded profile config "auto-408543": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:17:16.040218   54819 addons.go:69] Setting storage-provisioner=true in profile "auto-408543"
	I0401 19:17:16.040233   54819 addons.go:69] Setting default-storageclass=true in profile "auto-408543"
	I0401 19:17:16.040252   54819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:17:16.040253   54819 addons.go:234] Setting addon storage-provisioner=true in "auto-408543"
	I0401 19:17:16.040381   54819 host.go:66] Checking if "auto-408543" exists ...
	I0401 19:17:16.040255   54819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-408543"
	I0401 19:17:16.040851   54819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:17:16.040895   54819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:17:16.040853   54819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:17:16.040959   54819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:17:16.059614   54819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46729
	I0401 19:17:16.060198   54819 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:17:16.060775   54819 main.go:141] libmachine: Using API Version  1
	I0401 19:17:16.060806   54819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:17:16.061394   54819 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:17:16.061593   54819 main.go:141] libmachine: (auto-408543) Calling .GetState
	I0401 19:17:16.062087   54819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40573
	I0401 19:17:16.062492   54819 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:17:16.062992   54819 main.go:141] libmachine: Using API Version  1
	I0401 19:17:16.063012   54819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:17:16.063480   54819 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:17:16.065284   54819 addons.go:234] Setting addon default-storageclass=true in "auto-408543"
	I0401 19:17:16.065324   54819 host.go:66] Checking if "auto-408543" exists ...
	I0401 19:17:16.065720   54819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:17:16.065756   54819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:17:16.066096   54819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:17:16.066140   54819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:17:16.086294   54819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39459
	I0401 19:17:16.086647   54819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41265
	I0401 19:17:16.086788   54819 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:17:16.087144   54819 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:17:16.087324   54819 main.go:141] libmachine: Using API Version  1
	I0401 19:17:16.087346   54819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:17:16.087756   54819 main.go:141] libmachine: Using API Version  1
	I0401 19:17:16.087773   54819 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:17:16.087778   54819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:17:16.087948   54819 main.go:141] libmachine: (auto-408543) Calling .GetState
	I0401 19:17:16.088190   54819 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:17:16.088645   54819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:17:16.088677   54819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:17:16.090037   54819 main.go:141] libmachine: (auto-408543) Calling .DriverName
	I0401 19:17:16.094249   54819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:17:13.885372   55206 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.736827952s)
	I0401 19:17:13.885404   55206 crio.go:469] duration metric: took 2.736972181s to extract the tarball
	I0401 19:17:13.885413   55206 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 19:17:13.942285   55206 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:17:14.003344   55206 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:17:14.003372   55206 cache_images.go:84] Images are preloaded, skipping loading
	I0401 19:17:14.003382   55206 kubeadm.go:928] updating node { 192.168.72.92 8443 v1.29.3 crio true true} ...
	I0401 19:17:14.003501   55206 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-408543 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:kindnet-408543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0401 19:17:14.003584   55206 ssh_runner.go:195] Run: crio config
	I0401 19:17:14.059192   55206 cni.go:84] Creating CNI manager for "kindnet"
	I0401 19:17:14.059216   55206 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:17:14.059236   55206 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.92 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-408543 NodeName:kindnet-408543 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:17:14.059414   55206 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-408543"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.92
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.92"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:17:14.059496   55206 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
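	Note: the YAML block above is the kubeadm configuration minikube renders for this profile; it is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. One way to see how far it deviates from stock kubeadm settings, as a sketch (assumes kubeadm is available on the guest; the --component-configs flag is standard kubeadm):
	    # Print kubeadm's defaults for the same config kinds and diff them
	    # against the file minikube generated.
	    kubeadm config print init-defaults \
	      --component-configs KubeletConfiguration,KubeProxyConfiguration > /tmp/defaults.yaml
	    diff -u /tmp/defaults.yaml /var/tmp/minikube/kubeadm.yaml.new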
	I0401 19:17:14.071485   55206 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:17:14.071554   55206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:17:14.084240   55206 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0401 19:17:14.103763   55206 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:17:14.124626   55206 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0401 19:17:14.145947   55206 ssh_runner.go:195] Run: grep 192.168.72.92	control-plane.minikube.internal$ /etc/hosts
	I0401 19:17:14.151714   55206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.92	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
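	Note: the one-liner above is an idempotent rewrite of /etc/hosts pinning control-plane.minikube.internal to the node IP. The same logic, reformatted with comments:
	    # Drop any stale control-plane.minikube.internal line, append the
	    # current mapping, then copy the result back over /etc/hosts.
	    {
	      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	      printf '192.168.72.92\tcontrol-plane.minikube.internal\n'
	    } > /tmp/hosts.$$
	    sudo cp /tmp/hosts.$$ /etc/hosts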
	I0401 19:17:14.169166   55206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:17:14.321864   55206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:17:14.345428   55206 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543 for IP: 192.168.72.92
	I0401 19:17:14.345528   55206 certs.go:194] generating shared ca certs ...
	I0401 19:17:14.345562   55206 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:17:14.345817   55206 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:17:14.345934   55206 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:17:14.345964   55206 certs.go:256] generating profile certs ...
	I0401 19:17:14.346048   55206 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.key
	I0401 19:17:14.346078   55206 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt with IP's: []
	I0401 19:17:14.442359   55206 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt ...
	I0401 19:17:14.442391   55206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: {Name:mk52e00fc29ffd19c9a2b30834373c76d62ba370 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:17:14.442592   55206 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.key ...
	I0401 19:17:14.442609   55206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.key: {Name:mkfb32fe74a4738b41955f3158c5ab785c0ce205 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:17:14.442707   55206 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/apiserver.key.a2470e84
	I0401 19:17:14.442724   55206 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/apiserver.crt.a2470e84 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.92]
	I0401 19:17:14.827946   55206 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/apiserver.crt.a2470e84 ...
	I0401 19:17:14.827976   55206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/apiserver.crt.a2470e84: {Name:mk6812340c695a1e1f31ec3dea722cd5be0d8126 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:17:14.828266   55206 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/apiserver.key.a2470e84 ...
	I0401 19:17:14.828289   55206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/apiserver.key.a2470e84: {Name:mk80c61b212f0ab6460d076a4bade987a4094bd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:17:14.828408   55206 certs.go:381] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/apiserver.crt.a2470e84 -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/apiserver.crt
	I0401 19:17:14.828478   55206 certs.go:385] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/apiserver.key.a2470e84 -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/apiserver.key
	I0401 19:17:14.828528   55206 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/proxy-client.key
	I0401 19:17:14.828543   55206 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/proxy-client.crt with IP's: []
	I0401 19:17:14.892456   55206 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/proxy-client.crt ...
	I0401 19:17:14.892482   55206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/proxy-client.crt: {Name:mk63ee518ab918e14123406405ce835ba65124de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:17:14.899607   55206 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/proxy-client.key ...
	I0401 19:17:14.899639   55206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/proxy-client.key: {Name:mk8542b61cf9b1422dadef200f8ad7ad4236c22f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:17:14.899908   55206 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:17:14.899956   55206 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:17:14.899972   55206 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:17:14.900006   55206 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:17:14.900038   55206 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:17:14.900077   55206 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:17:14.900135   55206 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:17:14.900935   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:17:14.934460   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:17:14.963396   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:17:14.992990   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:17:15.025273   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0401 19:17:15.061058   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:17:15.088253   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:17:15.122145   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:17:15.159320   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:17:15.188143   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:17:15.217610   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:17:15.248558   55206 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:17:15.273634   55206 ssh_runner.go:195] Run: openssl version
	I0401 19:17:15.281865   55206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:17:15.298525   55206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:17:15.306217   55206 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:17:15.306288   55206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:17:15.322732   55206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:17:15.342000   55206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:17:15.358775   55206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:17:15.365948   55206 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:17:15.366046   55206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:17:15.374736   55206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:17:15.389455   55206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:17:15.405049   55206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:17:15.411197   55206 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:17:15.411279   55206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:17:15.420850   55206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
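	Note: each CA bundle copied above is made visible to OpenSSL by linking it under its subject-hash name in /etc/ssl/certs, which is what the openssl/ln pairs in the log do. The pattern for one of the files (hash value b5213941 matches the symlink created above):
	    # Compute the OpenSSL subject hash and create the <hash>.0 symlink
	    # that the system trust store expects.
	    pem=/usr/share/ca-certificates/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in "$pem")   # -> b5213941
	    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"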
	I0401 19:17:15.433969   55206 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:17:15.440124   55206 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 19:17:15.440187   55206 kubeadm.go:391] StartCluster: {Name:kindnet-408543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:kindnet-408543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.92 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:17:15.440280   55206 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:17:15.440340   55206 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:17:15.488289   55206 cri.go:89] found id: ""
	I0401 19:17:15.488361   55206 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 19:17:15.499888   55206 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:17:15.510289   55206 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:17:15.521028   55206 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:17:15.521049   55206 kubeadm.go:156] found existing configuration files:
	
	I0401 19:17:15.521096   55206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:17:15.531378   55206 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:17:15.531430   55206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:17:15.544065   55206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:17:15.555896   55206 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:17:15.555969   55206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:17:15.566613   55206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:17:15.576262   55206 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:17:15.576315   55206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:17:15.592591   55206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:17:15.603742   55206 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:17:15.603807   55206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:17:15.616380   55206 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:17:15.681004   55206 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 19:17:15.681097   55206 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:17:15.843965   55206 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:17:15.844124   55206 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:17:15.844288   55206 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:17:16.143175   55206 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:17:16.145842   55206 out.go:204]   - Generating certificates and keys ...
	I0401 19:17:16.145959   55206 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:17:16.146087   55206 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:17:16.208334   55206 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 19:17:16.401256   55206 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0401 19:17:14.011593   55506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:17:14.011620   55506 machine.go:97] duration metric: took 6.236210277s to provisionDockerMachine
	I0401 19:17:14.011635   55506 start.go:293] postStartSetup for "pause-208693" (driver="kvm2")
	I0401 19:17:14.011647   55506 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:17:14.011667   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:14.012056   55506 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:17:14.012091   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:14.015013   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.015398   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:14.015426   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.015761   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:14.015941   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:14.016116   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:14.016263   55506 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/pause-208693/id_rsa Username:docker}
	I0401 19:17:14.104053   55506 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:17:14.110496   55506 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:17:14.110522   55506 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:17:14.110577   55506 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:17:14.110670   55506 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:17:14.110781   55506 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:17:14.124676   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:17:14.152875   55506 start.go:296] duration metric: took 141.227889ms for postStartSetup
	I0401 19:17:14.152908   55506 fix.go:56] duration metric: took 6.401924265s for fixHost
	I0401 19:17:14.152932   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:14.156060   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.156464   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:14.156489   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.156750   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:14.156939   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:14.157124   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:14.157360   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:14.157553   55506 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:14.157770   55506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0401 19:17:14.157786   55506 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 19:17:14.271277   55506 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999034.266271505
	
	I0401 19:17:14.271297   55506 fix.go:216] guest clock: 1711999034.266271505
	I0401 19:17:14.271306   55506 fix.go:229] Guest: 2024-04-01 19:17:14.266271505 +0000 UTC Remote: 2024-04-01 19:17:14.152913142 +0000 UTC m=+7.640372274 (delta=113.358363ms)
	I0401 19:17:14.271330   55506 fix.go:200] guest clock delta is within tolerance: 113.358363ms
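	Note: "date +%!s(MISSING).%!N(MISSING)" is the same log-formatter artifact as before; the command run on the guest is 'date +%s.%N', and fix.go compares the result with the host clock, treating the 113ms delta above as within tolerance. A sketch of that check from the host (key path, user and IP taken from the ssh client line above):
	    # Read the guest clock with nanosecond precision and compare to the host.
	    guest=$(ssh -o StrictHostKeyChecking=no \
	      -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/pause-208693/id_rsa \
	      docker@192.168.39.250 'date +%s.%N')
	    host=$(date +%s.%N)
	    echo "clock delta: $(echo "$host - $guest" | bc)s"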
	I0401 19:17:14.271336   55506 start.go:83] releasing machines lock for "pause-208693", held for 6.520387462s
	I0401 19:17:14.271352   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:14.271584   55506 main.go:141] libmachine: (pause-208693) Calling .GetIP
	I0401 19:17:14.274284   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.274692   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:14.274734   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.274873   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:14.275375   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:14.275542   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:14.275629   55506 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:17:14.275665   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:14.275781   55506 ssh_runner.go:195] Run: cat /version.json
	I0401 19:17:14.275813   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:14.278769   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.278948   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.279327   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:14.279360   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.279397   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:14.279413   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.279584   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:14.279788   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:14.279796   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:14.280018   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:14.280024   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:14.280273   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:14.280294   55506 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/pause-208693/id_rsa Username:docker}
	I0401 19:17:14.280375   55506 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/pause-208693/id_rsa Username:docker}
	I0401 19:17:14.398299   55506 ssh_runner.go:195] Run: systemctl --version
	I0401 19:17:14.407522   55506 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:17:14.589987   55506 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:17:14.597998   55506 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:17:14.598056   55506 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:17:14.610680   55506 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 19:17:14.610700   55506 start.go:494] detecting cgroup driver to use...
	I0401 19:17:14.610753   55506 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:17:14.633419   55506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:17:14.649012   55506 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:17:14.649069   55506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:17:14.665238   55506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:17:14.681036   55506 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:17:14.839573   55506 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:17:15.029768   55506 docker.go:233] disabling docker service ...
	I0401 19:17:15.029834   55506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:17:15.055505   55506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:17:15.072613   55506 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:17:15.249142   55506 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:17:15.437700   55506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
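	Note: before CRI-O is configured, the competing runtimes are switched off so only crio.service owns the CRI socket. The systemctl sequence above, consolidated (same units and flags as logged):
	    # Stop and mask cri-dockerd and Docker so CRI-O is the only runtime
	    # listening on the node.
	    sudo systemctl stop -f cri-docker.socket cri-docker.service || true
	    sudo systemctl disable cri-docker.socket
	    sudo systemctl mask cri-docker.service
	    sudo systemctl stop -f docker.socket docker.service || true
	    sudo systemctl disable docker.socket
	    sudo systemctl mask docker.service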
	I0401 19:17:15.458179   55506 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:17:15.485128   55506 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:17:15.485204   55506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:15.499058   55506 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:17:15.499145   55506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:15.512574   55506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:15.525783   55506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:15.538961   55506 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:17:15.552336   55506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:15.565451   55506 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:15.578535   55506 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:15.595981   55506 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:17:15.610546   55506 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:17:15.623143   55506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:17:15.776688   55506 ssh_runner.go:195] Run: sudo systemctl restart crio
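	Note: the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the pause image kubeadm expects and the cgroupfs cgroup driver. The two central edits, as a commented sketch (same expressions as in the log):
	    conf=/etc/crio/crio.conf.d/02-crio.conf
	    # Pin the pause image and switch the cgroup manager, then restart
	    # CRI-O so the new configuration takes effect.
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
	    sudo systemctl daemon-reload && sudo systemctl restart crio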
	I0401 19:17:16.095725   54819 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:17:16.095744   54819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 19:17:16.095765   54819 main.go:141] libmachine: (auto-408543) Calling .GetSSHHostname
	I0401 19:17:16.099329   54819 main.go:141] libmachine: (auto-408543) DBG | domain auto-408543 has defined MAC address 52:54:00:f0:64:9b in network mk-auto-408543
	I0401 19:17:16.099898   54819 main.go:141] libmachine: (auto-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:64:9b", ip: ""} in network mk-auto-408543: {Iface:virbr4 ExpiryTime:2024-04-01 20:16:32 +0000 UTC Type:0 Mac:52:54:00:f0:64:9b Iaid: IPaddr:192.168.61.127 Prefix:24 Hostname:auto-408543 Clientid:01:52:54:00:f0:64:9b}
	I0401 19:17:16.099922   54819 main.go:141] libmachine: (auto-408543) DBG | domain auto-408543 has defined IP address 192.168.61.127 and MAC address 52:54:00:f0:64:9b in network mk-auto-408543
	I0401 19:17:16.100322   54819 main.go:141] libmachine: (auto-408543) Calling .GetSSHPort
	I0401 19:17:16.101936   54819 main.go:141] libmachine: (auto-408543) Calling .GetSSHKeyPath
	I0401 19:17:16.102104   54819 main.go:141] libmachine: (auto-408543) Calling .GetSSHUsername
	I0401 19:17:16.102364   54819 sshutil.go:53] new ssh client: &{IP:192.168.61.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/auto-408543/id_rsa Username:docker}
	I0401 19:17:16.107053   54819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42953
	I0401 19:17:16.107434   54819 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:17:16.107889   54819 main.go:141] libmachine: Using API Version  1
	I0401 19:17:16.107908   54819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:17:16.108372   54819 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:17:16.108577   54819 main.go:141] libmachine: (auto-408543) Calling .GetState
	I0401 19:17:16.110608   54819 main.go:141] libmachine: (auto-408543) Calling .DriverName
	I0401 19:17:16.110861   54819 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 19:17:16.110876   54819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 19:17:16.110895   54819 main.go:141] libmachine: (auto-408543) Calling .GetSSHHostname
	I0401 19:17:16.114068   54819 main.go:141] libmachine: (auto-408543) DBG | domain auto-408543 has defined MAC address 52:54:00:f0:64:9b in network mk-auto-408543
	I0401 19:17:16.114737   54819 main.go:141] libmachine: (auto-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:64:9b", ip: ""} in network mk-auto-408543: {Iface:virbr4 ExpiryTime:2024-04-01 20:16:32 +0000 UTC Type:0 Mac:52:54:00:f0:64:9b Iaid: IPaddr:192.168.61.127 Prefix:24 Hostname:auto-408543 Clientid:01:52:54:00:f0:64:9b}
	I0401 19:17:16.114759   54819 main.go:141] libmachine: (auto-408543) DBG | domain auto-408543 has defined IP address 192.168.61.127 and MAC address 52:54:00:f0:64:9b in network mk-auto-408543
	I0401 19:17:16.114792   54819 main.go:141] libmachine: (auto-408543) Calling .GetSSHPort
	I0401 19:17:16.114980   54819 main.go:141] libmachine: (auto-408543) Calling .GetSSHKeyPath
	I0401 19:17:16.115134   54819 main.go:141] libmachine: (auto-408543) Calling .GetSSHUsername
	I0401 19:17:16.115251   54819 sshutil.go:53] new ssh client: &{IP:192.168.61.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/auto-408543/id_rsa Username:docker}
	I0401 19:17:16.397636   54819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:17:16.450592   54819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 19:17:16.688176   54819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:17:16.688265   54819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
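	Note: the long pipeline above injects a hosts{} record for host.minikube.internal into the CoreDNS ConfigMap. The same command, reformatted for readability (identical sed expressions and kubectl paths):
	    # Fetch the coredns ConfigMap, splice in a hosts{} block that resolves
	    # host.minikube.internal to the gateway IP, enable query logging, and
	    # replace the ConfigMap in place.
	    KCTL="sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
	    $KCTL -n kube-system get configmap coredns -o yaml \
	      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' \
	            -e '/^        errors *$/i \        log' \
	      | $KCTL replace -f -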
	I0401 19:17:17.070607   54819 main.go:141] libmachine: Making call to close driver server
	I0401 19:17:17.070634   54819 main.go:141] libmachine: (auto-408543) Calling .Close
	I0401 19:17:17.070660   54819 start.go:946] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0401 19:17:17.070617   54819 main.go:141] libmachine: Making call to close driver server
	I0401 19:17:17.070722   54819 main.go:141] libmachine: (auto-408543) Calling .Close
	I0401 19:17:17.070910   54819 main.go:141] libmachine: (auto-408543) DBG | Closing plugin on server side
	I0401 19:17:17.070943   54819 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:17:17.070951   54819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:17:17.070960   54819 main.go:141] libmachine: Making call to close driver server
	I0401 19:17:17.070968   54819 main.go:141] libmachine: (auto-408543) Calling .Close
	I0401 19:17:17.071025   54819 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:17:17.071041   54819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:17:17.071050   54819 main.go:141] libmachine: Making call to close driver server
	I0401 19:17:17.071062   54819 main.go:141] libmachine: (auto-408543) Calling .Close
	I0401 19:17:17.071871   54819 node_ready.go:35] waiting up to 15m0s for node "auto-408543" to be "Ready" ...
	I0401 19:17:17.072012   54819 main.go:141] libmachine: (auto-408543) DBG | Closing plugin on server side
	I0401 19:17:17.072051   54819 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:17:17.072051   54819 main.go:141] libmachine: (auto-408543) DBG | Closing plugin on server side
	I0401 19:17:17.072060   54819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:17:17.072069   54819 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:17:17.072086   54819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:17:17.097225   54819 node_ready.go:49] node "auto-408543" has status "Ready":"True"
	I0401 19:17:17.097251   54819 node_ready.go:38] duration metric: took 25.350762ms for node "auto-408543" to be "Ready" ...
	I0401 19:17:17.097261   54819 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:17:17.108022   54819 main.go:141] libmachine: Making call to close driver server
	I0401 19:17:17.108043   54819 main.go:141] libmachine: (auto-408543) Calling .Close
	I0401 19:17:17.108399   54819 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:17:17.108417   54819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:17:17.110345   54819 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
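	Note: the node_ready/pod_ready waits around this point poll the API server until the node reports Ready and the system-critical pods listed above (kube-dns, etcd, apiserver, controller-manager, kube-proxy, scheduler) are Ready. An equivalent check from the host, as a sketch (the kubectl context name follows minikube's usual profile naming and is an assumption; timeouts are illustrative):
	    kubectl --context auto-408543 wait --for=condition=Ready node/auto-408543 --timeout=5m
	    kubectl --context auto-408543 -n kube-system wait --for=condition=Ready pod \
	      -l k8s-app=kube-dns --timeout=5m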
	I0401 19:17:17.003923   55206 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0401 19:17:17.244916   55206 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0401 19:17:17.476146   55206 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0401 19:17:17.476460   55206 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kindnet-408543 localhost] and IPs [192.168.72.92 127.0.0.1 ::1]
	I0401 19:17:17.654961   55206 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0401 19:17:17.655319   55206 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kindnet-408543 localhost] and IPs [192.168.72.92 127.0.0.1 ::1]
	I0401 19:17:17.726884   55206 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 19:17:17.872355   55206 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 19:17:17.956635   55206 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0401 19:17:17.956965   55206 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:17:18.232213   55206 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:17:18.452754   55206 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 19:17:18.620605   55206 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:17:18.965570   55206 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:17:19.062913   55206 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:17:19.063563   55206 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:17:19.069009   55206 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:17:17.111755   54819 addons.go:505] duration metric: took 1.074510474s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 19:17:17.110373   54819 pod_ready.go:78] waiting up to 15m0s for pod "coredns-76f75df574-2dqzc" in "kube-system" namespace to be "Ready" ...
	I0401 19:17:17.575128   54819 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-408543" context rescaled to 1 replicas
	I0401 19:17:19.119721   54819 pod_ready.go:102] pod "coredns-76f75df574-2dqzc" in "kube-system" namespace has status "Ready":"False"
	I0401 19:17:21.084965   55506 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.308237386s)
	I0401 19:17:21.085002   55506 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:17:21.085067   55506 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:17:21.093408   55506 start.go:562] Will wait 60s for crictl version
	I0401 19:17:21.093481   55506 ssh_runner.go:195] Run: which crictl
	I0401 19:17:21.098328   55506 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:17:21.143587   55506 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:17:21.143673   55506 ssh_runner.go:195] Run: crio --version
	I0401 19:17:21.178725   55506 ssh_runner.go:195] Run: crio --version
	I0401 19:17:21.314941   55506 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
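The runtime detection above shells out to crictl on the guest (start.go:578 reports Version 0.1.0, cri-o 1.29.1). A minimal sketch of the same probe in Go, assuming crictl is installed and CRI-O listens on the socket path shown in this log; this is illustrative only, not minikube's implementation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Ask crictl for the runtime name/version, mirroring the
	// "sudo /usr/bin/crictl version" step in the log above.
	// The socket path is taken from this report's CRI-O setup.
	out, err := exec.Command("sudo", "crictl",
		"--runtime-endpoint", "unix:///var/run/crio/crio.sock",
		"version").CombinedOutput()
	if err != nil {
		fmt.Println("crictl version failed:", err)
		return
	}
	fmt.Print(string(out))
}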
	I0401 19:17:19.070621   55206 out.go:204]   - Booting up control plane ...
	I0401 19:17:19.070728   55206 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:17:19.070859   55206 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:17:19.070964   55206 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:17:19.088627   55206 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:17:19.089523   55206 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:17:19.089567   55206 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:17:19.235705   55206 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:17:21.316408   55506 main.go:141] libmachine: (pause-208693) Calling .GetIP
	I0401 19:17:21.319848   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:21.320268   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:21.320288   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:21.320664   55506 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 19:17:21.327439   55506 kubeadm.go:877] updating cluster {Name:pause-208693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:pause-208693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:17:21.327563   55506 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 19:17:21.327623   55506 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:17:21.391343   55506 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:17:21.391373   55506 crio.go:433] Images already preloaded, skipping extraction
	I0401 19:17:21.391425   55506 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:17:21.430944   55506 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:17:21.430969   55506 cache_images.go:84] Images are preloaded, skipping loading
	I0401 19:17:21.430978   55506 kubeadm.go:928] updating node { 192.168.39.250 8443 v1.29.3 crio true true} ...
	I0401 19:17:21.431097   55506 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-208693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:pause-208693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:17:21.431183   55506 ssh_runner.go:195] Run: crio config
	I0401 19:17:21.490699   55506 cni.go:84] Creating CNI manager for ""
	I0401 19:17:21.490720   55506 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:17:21.490733   55506 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:17:21.490752   55506 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.250 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-208693 NodeName:pause-208693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:17:21.490874   55506 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-208693"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
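The block above is the complete multi-document kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch for sanity-checking such a file, assuming the gopkg.in/yaml.v3 dependency and the path from this log; it is not part of the test suite:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3" // assumed dependency for this sketch
)

func main() {
	// Read the multi-document kubeadm config that minikube scp'd to the node
	// (path taken from the log above; adjust for a local copy).
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			panic(err)
		}
		// Report which kubeadm/kubelet/kube-proxy document this is.
		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
	}
}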
	
	I0401 19:17:21.490940   55506 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 19:17:21.505292   55506 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:17:21.505353   55506 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:17:21.518774   55506 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0401 19:17:21.539416   55506 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:17:19.615260   54819 pod_ready.go:97] error getting pod "coredns-76f75df574-2dqzc" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-2dqzc" not found
	I0401 19:17:19.615286   54819 pod_ready.go:81] duration metric: took 2.503509036s for pod "coredns-76f75df574-2dqzc" in "kube-system" namespace to be "Ready" ...
	E0401 19:17:19.615295   54819 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-2dqzc" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-2dqzc" not found
	I0401 19:17:19.615301   54819 pod_ready.go:78] waiting up to 15m0s for pod "coredns-76f75df574-gvklv" in "kube-system" namespace to be "Ready" ...
	I0401 19:17:21.624808   54819 pod_ready.go:102] pod "coredns-76f75df574-gvklv" in "kube-system" namespace has status "Ready":"False"
	I0401 19:17:23.626993   54819 pod_ready.go:102] pod "coredns-76f75df574-gvklv" in "kube-system" namespace has status "Ready":"False"
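The pod_ready waiter above polls the coredns pod until its Ready condition turns True. A minimal client-go sketch of that check, assuming a placeholder kubeconfig path (/path/to/kubeconfig) and the pod name from this log; this is illustrative, not minikube's waiter:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True,
// which is the condition the pod_ready waiter above polls for.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// The kubeconfig path is a placeholder for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-76f75df574-gvklv", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}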
	I0401 19:17:25.237712   55206 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002736 seconds
	I0401 19:17:25.260027   55206 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 19:17:25.281769   55206 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 19:17:25.826211   55206 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 19:17:25.826461   55206 kubeadm.go:309] [mark-control-plane] Marking the node kindnet-408543 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 19:17:26.344378   55206 kubeadm.go:309] [bootstrap-token] Using token: rwpz73.tx8m08htc5fha9yu
	I0401 19:17:26.346060   55206 out.go:204]   - Configuring RBAC rules ...
	I0401 19:17:26.346203   55206 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 19:17:26.358311   55206 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 19:17:26.374042   55206 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 19:17:26.378908   55206 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 19:17:26.387350   55206 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 19:17:26.391275   55206 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 19:17:26.412368   55206 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 19:17:21.559611   55506 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0401 19:17:21.583677   55506 ssh_runner.go:195] Run: grep 192.168.39.250	control-plane.minikube.internal$ /etc/hosts
	I0401 19:17:21.589587   55506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:17:21.742565   55506 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:17:21.760339   55506 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693 for IP: 192.168.39.250
	I0401 19:17:21.760372   55506 certs.go:194] generating shared ca certs ...
	I0401 19:17:21.760393   55506 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:17:21.760561   55506 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:17:21.760614   55506 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:17:21.760629   55506 certs.go:256] generating profile certs ...
	I0401 19:17:21.760753   55506 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693/client.key
	I0401 19:17:21.760853   55506 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693/apiserver.key.640a01f7
	I0401 19:17:21.760894   55506 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693/proxy-client.key
	I0401 19:17:21.760997   55506 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:17:21.761029   55506 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:17:21.761038   55506 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:17:21.761067   55506 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:17:21.761094   55506 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:17:21.761118   55506 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:17:21.761152   55506 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:17:21.762220   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:17:21.789295   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:17:21.816554   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:17:22.006815   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:17:22.163516   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0401 19:17:22.366533   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 19:17:22.619189   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:17:22.751245   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 19:17:23.002508   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:17:23.074573   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:17:23.110680   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:17:23.149436   55506 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:17:23.177136   55506 ssh_runner.go:195] Run: openssl version
	I0401 19:17:23.188108   55506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:17:23.205226   55506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:17:23.213298   55506 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:17:23.213366   55506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:17:23.224544   55506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:17:23.240162   55506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:17:23.257743   55506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:17:23.263476   55506 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:17:23.263542   55506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:17:23.272318   55506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:17:23.286967   55506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:17:23.299084   55506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:17:23.304168   55506 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:17:23.304227   55506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:17:23.310638   55506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:17:23.321219   55506 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:17:23.326672   55506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:17:23.336598   55506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:17:23.343670   55506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:17:23.354373   55506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:17:23.363233   55506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:17:23.376742   55506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
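Each "openssl x509 -checkend 86400" call above fails if the certificate expires within the next 24 hours. A rough Go equivalent using crypto/x509, with one of the certificate paths from this log (a sketch, not the code minikube runs on the guest):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Equivalent of "openssl x509 -noout -in <cert> -checkend 86400":
	// exit non-zero if the certificate expires within the next 24 hours.
	// The path is one of the certs checked in the log above.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least 24h more")
}

Such a check only makes sense on the control-plane VM (for example via "minikube ssh"), since the certificate paths exist only there.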
	I0401 19:17:23.390188   55506 kubeadm.go:391] StartCluster: {Name:pause-208693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:pause-208693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:17:23.390370   55506 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:17:23.390470   55506 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:17:23.488359   55506 cri.go:89] found id: "51f0eff30b7c7d78a434ac0cebb793087012ebc1a4e3af4377acb07b114c7b1b"
	I0401 19:17:23.488385   55506 cri.go:89] found id: "ffbcd93cd4dc6dd7bdba8817fe9464043c1441e48a2f0a339d8e2f90465c23b2"
	I0401 19:17:23.488390   55506 cri.go:89] found id: "2d9857c1d11f7699cdda344ccc35292880e1e966398923e7c7c8a221bb17fbb4"
	I0401 19:17:23.488395   55506 cri.go:89] found id: "eb0be624c77f6b7779d07398d8ac81b11ff2d1e2491332385f9bc7abd08da4d1"
	I0401 19:17:23.488399   55506 cri.go:89] found id: "90827beb7d452b745ca6b9be1e1cbf187b22f2a83733fa1cf32f65dd51871a94"
	I0401 19:17:23.488403   55506 cri.go:89] found id: "a191a299d42de032a4e1b058d778aeb8a768699852f90479ed27525750c39dcb"
	I0401 19:17:23.488407   55506 cri.go:89] found id: "13440e4b058772c288c91430cc8b93d4ee93f6c2dc002c58c42364841c37537c"
	I0401 19:17:23.488411   55506 cri.go:89] found id: "f4e035677728cfc3e8fdacccbe9c2074622432687c5ffea26e9297dab2bc7e5f"
	I0401 19:17:23.488415   55506 cri.go:89] found id: "4c96330c5da0385157221a32550935b344f8d450869645cdb302bf6d7d24d50a"
	I0401 19:17:23.488422   55506 cri.go:89] found id: "60bec38260e22141e8ef66a6e954a86d22216f47a8023678c8c9ec31a28ed3cd"
	I0401 19:17:23.488426   55506 cri.go:89] found id: "9ae44e1e9ca77a159598d47d87a284b50262d7feed6af8939a521854ddf86ff4"
	I0401 19:17:23.488448   55506 cri.go:89] found id: "44a8c17316feb68f3c977baa3f7431c716167f78518eb63e20017a200ca17ad4"
	I0401 19:17:23.488457   55506 cri.go:89] found id: ""
	I0401 19:17:23.488513   55506 ssh_runner.go:195] Run: sudo runc list -f json
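The label-filtered "crictl ps -a --quiet" call above collects the kube-system container IDs listed in the found-id lines. A minimal sketch that performs the same query directly over the CRI gRPC API, assuming the google.golang.org/grpc and k8s.io/cri-api modules and the CRI-O socket path from this log:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Talk to CRI-O over its CRI socket and list kube-system containers,
	// mirroring the label-filtered "crictl ps -a" call above.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListContainers(context.TODO(), &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{
			LabelSelector: map[string]string{"io.kubernetes.pod.namespace": "kube-system"},
		},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s %s %s\n", c.Id, c.Metadata.Name, c.State)
	}
}

crictl issues this same ListContainers RPC under the hood, so the IDs printed should match the "found id" entries above.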
	
	
	==> CRI-O <==
	Apr 01 19:18:07 pause-208693 crio[2141]: time="2024-04-01 19:18:07.970052542Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711999087970032681,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e1c6a78-a8b8-4282-831b-c4bedddbb68a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:18:07 pause-208693 crio[2141]: time="2024-04-01 19:18:07.971232210Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c6b8c1cb-1ed8-4c9e-946e-233e05eb03bb name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:18:07 pause-208693 crio[2141]: time="2024-04-01 19:18:07.971302595Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c6b8c1cb-1ed8-4c9e-946e-233e05eb03bb name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:18:07 pause-208693 crio[2141]: time="2024-04-01 19:18:07.971555989Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d51e592bf39a8faf7132b084687b1accbdb18416ca3406e14ab22c4cc914398f,PodSandboxId:f8530e9e195d4d7e3093c22d1ed84743b6d1388e4e2682b416e6c41595984fc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711999071294496603,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rldp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9bf80ea-9ada-4a47-bab1-e78b9223d2a8,},Annotations:map[string]string{io.kubernetes.container.hash: dae92949,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee316adbd86a389d2cee2a243bb53d9623bdc19f7f4ada9f6d1dca071d0882d0,PodSandboxId:6c9433488dd88aea169cba5d687c2b18ffb7bacf191081aad083c7e1b83bb2eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711999071239675723,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df6ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: acb3c498-4e8d-4a02-b9d7-8a368f9303d0,},Annotations:map[string]string{io.kubernetes.container.hash: d19b4992,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f63a9c00dde6c19df299f9c1f4a733df97d6952398eceec187693dc9f073374,PodSandboxId:048b641526bb6ff733a7c91666f55b7716aab4d9aeea375fde25db0b088b73ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711999066608613618,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ba31f6ed98991b7270f0eb1bd6de561,},Annot
ations:map[string]string{io.kubernetes.container.hash: e5a2f95e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a040f7057b6a33daba995297514b7b885fc9fe29532005c42b5e51d13105fdb9,PodSandboxId:cf7a984a2a9d307920f4f8475a386958d0e0b0ef51bdaa8e8792df7e58a19df4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711999066645334372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a372662f6344b45a1f4085d401140c1a,},Annotations:map[string]
string{io.kubernetes.container.hash: d6729e28,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:354ccfac9527c54ac400591dde36f20a21b0f39232cee1442492d045c16195b2,PodSandboxId:073f93c45437f090972f867049697da94b76179da01cae458f4787eed49f9346,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711999066641810226,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa6509d897132c9a939f9cc49fd2164,},Annotations:map[string]string{io.kubernet
es.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d177654682bcbda844c640fa087f542ecebee05bce474ac4d3d8194d5ef6b06,PodSandboxId:7220ef36a671b8f828ba91ff79b645337994d459c971539c2e70edde07b7577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711999066590692066,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c1e1b9ed1eae1e76338c581a974e1b2,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f0eff30b7c7d78a434ac0cebb793087012ebc1a4e3af4377acb07b114c7b1b,PodSandboxId:f8530e9e195d4d7e3093c22d1ed84743b6d1388e4e2682b416e6c41595984fc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711999042746299873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rldp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9bf80ea-9ada-4a47-bab1-e78b9223d2a8,},Annotations:map[string]string{io.kubernetes.container.hash: dae9
2949,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffbcd93cd4dc6dd7bdba8817fe9464043c1441e48a2f0a339d8e2f90465c23b2,PodSandboxId:6c9433488dd88aea169cba5d687c2b18ffb7bacf191081aad083c7e1b83bb2eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711999042518565519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-df6ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acb3c498-4e8d-4a02-b9d7-8a368f9303d0,},Annotations:map[string]string{io.kubernetes.container.hash: d19b4992,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0be624c77f6b7779d07398d8ac81b11ff2d1e2491332385f9bc7abd08da4d1,PodSandboxId:073f93c45437f090972f867049697da94b76179da01cae458f4787eed49f9346,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711999042384301994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa6509d897132c9a939f9cc49fd2164,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d9857c1d11f7699cdda344ccc35292880e1e966398923e7c7c8a221bb17fbb4,PodSandboxId:048b641526bb6ff733a7c91666f55b7716aab4d9aeea375fde25db0b088b73ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711999042408662170,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-208693,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3ba31f6ed98991b7270f0eb1bd6de561,},Annotations:map[string]string{io.kubernetes.container.hash: e5a2f95e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90827beb7d452b745ca6b9be1e1cbf187b22f2a83733fa1cf32f65dd51871a94,PodSandboxId:cf7a984a2a9d307920f4f8475a386958d0e0b0ef51bdaa8e8792df7e58a19df4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711999042354498532,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: a372662f6344b45a1f4085d401140c1a,},Annotations:map[string]string{io.kubernetes.container.hash: d6729e28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a191a299d42de032a4e1b058d778aeb8a768699852f90479ed27525750c39dcb,PodSandboxId:7220ef36a671b8f828ba91ff79b645337994d459c971539c2e70edde07b7577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711999042250503194,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9c1e1b9ed1eae1e76338c581a974e1b2,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c6b8c1cb-1ed8-4c9e-946e-233e05eb03bb name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:18:08 pause-208693 crio[2141]: time="2024-04-01 19:18:08.019127587Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff73cf45-b927-41b6-a5d1-8aa2dd091d72 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:18:08 pause-208693 crio[2141]: time="2024-04-01 19:18:08.019220972Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff73cf45-b927-41b6-a5d1-8aa2dd091d72 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:18:08 pause-208693 crio[2141]: time="2024-04-01 19:18:08.021042975Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2de5ef45-3e70-454a-b9d7-9fd29bfb9cae name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:18:08 pause-208693 crio[2141]: time="2024-04-01 19:18:08.021414302Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711999088021393951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2de5ef45-3e70-454a-b9d7-9fd29bfb9cae name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:18:08 pause-208693 crio[2141]: time="2024-04-01 19:18:08.021967044Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95d7f441-209a-4409-b3b1-caae9bc694a6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:18:08 pause-208693 crio[2141]: time="2024-04-01 19:18:08.022040901Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95d7f441-209a-4409-b3b1-caae9bc694a6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:18:08 pause-208693 crio[2141]: time="2024-04-01 19:18:08.022282633Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d51e592bf39a8faf7132b084687b1accbdb18416ca3406e14ab22c4cc914398f,PodSandboxId:f8530e9e195d4d7e3093c22d1ed84743b6d1388e4e2682b416e6c41595984fc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711999071294496603,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rldp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9bf80ea-9ada-4a47-bab1-e78b9223d2a8,},Annotations:map[string]string{io.kubernetes.container.hash: dae92949,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee316adbd86a389d2cee2a243bb53d9623bdc19f7f4ada9f6d1dca071d0882d0,PodSandboxId:6c9433488dd88aea169cba5d687c2b18ffb7bacf191081aad083c7e1b83bb2eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711999071239675723,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df6ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: acb3c498-4e8d-4a02-b9d7-8a368f9303d0,},Annotations:map[string]string{io.kubernetes.container.hash: d19b4992,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f63a9c00dde6c19df299f9c1f4a733df97d6952398eceec187693dc9f073374,PodSandboxId:048b641526bb6ff733a7c91666f55b7716aab4d9aeea375fde25db0b088b73ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711999066608613618,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ba31f6ed98991b7270f0eb1bd6de561,},Annot
ations:map[string]string{io.kubernetes.container.hash: e5a2f95e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a040f7057b6a33daba995297514b7b885fc9fe29532005c42b5e51d13105fdb9,PodSandboxId:cf7a984a2a9d307920f4f8475a386958d0e0b0ef51bdaa8e8792df7e58a19df4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711999066645334372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a372662f6344b45a1f4085d401140c1a,},Annotations:map[string]
string{io.kubernetes.container.hash: d6729e28,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:354ccfac9527c54ac400591dde36f20a21b0f39232cee1442492d045c16195b2,PodSandboxId:073f93c45437f090972f867049697da94b76179da01cae458f4787eed49f9346,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711999066641810226,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa6509d897132c9a939f9cc49fd2164,},Annotations:map[string]string{io.kubernet
es.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d177654682bcbda844c640fa087f542ecebee05bce474ac4d3d8194d5ef6b06,PodSandboxId:7220ef36a671b8f828ba91ff79b645337994d459c971539c2e70edde07b7577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711999066590692066,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c1e1b9ed1eae1e76338c581a974e1b2,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f0eff30b7c7d78a434ac0cebb793087012ebc1a4e3af4377acb07b114c7b1b,PodSandboxId:f8530e9e195d4d7e3093c22d1ed84743b6d1388e4e2682b416e6c41595984fc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711999042746299873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rldp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9bf80ea-9ada-4a47-bab1-e78b9223d2a8,},Annotations:map[string]string{io.kubernetes.container.hash: dae9
2949,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffbcd93cd4dc6dd7bdba8817fe9464043c1441e48a2f0a339d8e2f90465c23b2,PodSandboxId:6c9433488dd88aea169cba5d687c2b18ffb7bacf191081aad083c7e1b83bb2eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711999042518565519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-df6ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acb3c498-4e8d-4a02-b9d7-8a368f9303d0,},Annotations:map[string]string{io.kubernetes.container.hash: d19b4992,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0be624c77f6b7779d07398d8ac81b11ff2d1e2491332385f9bc7abd08da4d1,PodSandboxId:073f93c45437f090972f867049697da94b76179da01cae458f4787eed49f9346,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711999042384301994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa6509d897132c9a939f9cc49fd2164,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d9857c1d11f7699cdda344ccc35292880e1e966398923e7c7c8a221bb17fbb4,PodSandboxId:048b641526bb6ff733a7c91666f55b7716aab4d9aeea375fde25db0b088b73ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711999042408662170,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-208693,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3ba31f6ed98991b7270f0eb1bd6de561,},Annotations:map[string]string{io.kubernetes.container.hash: e5a2f95e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90827beb7d452b745ca6b9be1e1cbf187b22f2a83733fa1cf32f65dd51871a94,PodSandboxId:cf7a984a2a9d307920f4f8475a386958d0e0b0ef51bdaa8e8792df7e58a19df4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711999042354498532,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: a372662f6344b45a1f4085d401140c1a,},Annotations:map[string]string{io.kubernetes.container.hash: d6729e28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a191a299d42de032a4e1b058d778aeb8a768699852f90479ed27525750c39dcb,PodSandboxId:7220ef36a671b8f828ba91ff79b645337994d459c971539c2e70edde07b7577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711999042250503194,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9c1e1b9ed1eae1e76338c581a974e1b2,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=95d7f441-209a-4409-b3b1-caae9bc694a6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:18:08 pause-208693 crio[2141]: time="2024-04-01 19:18:08.079514926Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=011dbba7-bb1b-49c4-9c7c-0aa08d300fc2 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:18:08 pause-208693 crio[2141]: time="2024-04-01 19:18:08.079614591Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=011dbba7-bb1b-49c4-9c7c-0aa08d300fc2 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:18:08 pause-208693 crio[2141]: time="2024-04-01 19:18:08.080949386Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe9581e8-025b-451f-b66a-8e68701aedda name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:18:08 pause-208693 crio[2141]: time="2024-04-01 19:18:08.081457631Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711999088081434687,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe9581e8-025b-451f-b66a-8e68701aedda name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:18:08 pause-208693 crio[2141]: time="2024-04-01 19:18:08.082314322Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c31b9cf-5d5a-4d3a-9f5b-95b4610c6b7c name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:18:08 pause-208693 crio[2141]: time="2024-04-01 19:18:08.082394244Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c31b9cf-5d5a-4d3a-9f5b-95b4610c6b7c name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:18:08 pause-208693 crio[2141]: time="2024-04-01 19:18:08.082697097Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d51e592bf39a8faf7132b084687b1accbdb18416ca3406e14ab22c4cc914398f,PodSandboxId:f8530e9e195d4d7e3093c22d1ed84743b6d1388e4e2682b416e6c41595984fc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711999071294496603,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rldp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9bf80ea-9ada-4a47-bab1-e78b9223d2a8,},Annotations:map[string]string{io.kubernetes.container.hash: dae92949,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee316adbd86a389d2cee2a243bb53d9623bdc19f7f4ada9f6d1dca071d0882d0,PodSandboxId:6c9433488dd88aea169cba5d687c2b18ffb7bacf191081aad083c7e1b83bb2eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711999071239675723,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df6ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: acb3c498-4e8d-4a02-b9d7-8a368f9303d0,},Annotations:map[string]string{io.kubernetes.container.hash: d19b4992,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f63a9c00dde6c19df299f9c1f4a733df97d6952398eceec187693dc9f073374,PodSandboxId:048b641526bb6ff733a7c91666f55b7716aab4d9aeea375fde25db0b088b73ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711999066608613618,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ba31f6ed98991b7270f0eb1bd6de561,},Annot
ations:map[string]string{io.kubernetes.container.hash: e5a2f95e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a040f7057b6a33daba995297514b7b885fc9fe29532005c42b5e51d13105fdb9,PodSandboxId:cf7a984a2a9d307920f4f8475a386958d0e0b0ef51bdaa8e8792df7e58a19df4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711999066645334372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a372662f6344b45a1f4085d401140c1a,},Annotations:map[string]
string{io.kubernetes.container.hash: d6729e28,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:354ccfac9527c54ac400591dde36f20a21b0f39232cee1442492d045c16195b2,PodSandboxId:073f93c45437f090972f867049697da94b76179da01cae458f4787eed49f9346,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711999066641810226,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa6509d897132c9a939f9cc49fd2164,},Annotations:map[string]string{io.kubernet
es.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d177654682bcbda844c640fa087f542ecebee05bce474ac4d3d8194d5ef6b06,PodSandboxId:7220ef36a671b8f828ba91ff79b645337994d459c971539c2e70edde07b7577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711999066590692066,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c1e1b9ed1eae1e76338c581a974e1b2,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f0eff30b7c7d78a434ac0cebb793087012ebc1a4e3af4377acb07b114c7b1b,PodSandboxId:f8530e9e195d4d7e3093c22d1ed84743b6d1388e4e2682b416e6c41595984fc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711999042746299873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rldp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9bf80ea-9ada-4a47-bab1-e78b9223d2a8,},Annotations:map[string]string{io.kubernetes.container.hash: dae9
2949,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffbcd93cd4dc6dd7bdba8817fe9464043c1441e48a2f0a339d8e2f90465c23b2,PodSandboxId:6c9433488dd88aea169cba5d687c2b18ffb7bacf191081aad083c7e1b83bb2eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711999042518565519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-df6ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acb3c498-4e8d-4a02-b9d7-8a368f9303d0,},Annotations:map[string]string{io.kubernetes.container.hash: d19b4992,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0be624c77f6b7779d07398d8ac81b11ff2d1e2491332385f9bc7abd08da4d1,PodSandboxId:073f93c45437f090972f867049697da94b76179da01cae458f4787eed49f9346,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711999042384301994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa6509d897132c9a939f9cc49fd2164,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d9857c1d11f7699cdda344ccc35292880e1e966398923e7c7c8a221bb17fbb4,PodSandboxId:048b641526bb6ff733a7c91666f55b7716aab4d9aeea375fde25db0b088b73ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711999042408662170,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-208693,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3ba31f6ed98991b7270f0eb1bd6de561,},Annotations:map[string]string{io.kubernetes.container.hash: e5a2f95e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90827beb7d452b745ca6b9be1e1cbf187b22f2a83733fa1cf32f65dd51871a94,PodSandboxId:cf7a984a2a9d307920f4f8475a386958d0e0b0ef51bdaa8e8792df7e58a19df4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711999042354498532,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: a372662f6344b45a1f4085d401140c1a,},Annotations:map[string]string{io.kubernetes.container.hash: d6729e28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a191a299d42de032a4e1b058d778aeb8a768699852f90479ed27525750c39dcb,PodSandboxId:7220ef36a671b8f828ba91ff79b645337994d459c971539c2e70edde07b7577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711999042250503194,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9c1e1b9ed1eae1e76338c581a974e1b2,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c31b9cf-5d5a-4d3a-9f5b-95b4610c6b7c name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:18:08 pause-208693 crio[2141]: time="2024-04-01 19:18:08.140192560Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=71e43c81-2c5d-4b51-aac0-37264f491916 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:18:08 pause-208693 crio[2141]: time="2024-04-01 19:18:08.140267241Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=71e43c81-2c5d-4b51-aac0-37264f491916 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:18:08 pause-208693 crio[2141]: time="2024-04-01 19:18:08.141766052Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3fb83026-850d-45fd-b2bd-a437a2cceb36 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:18:08 pause-208693 crio[2141]: time="2024-04-01 19:18:08.142275097Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711999088142251626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3fb83026-850d-45fd-b2bd-a437a2cceb36 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:18:08 pause-208693 crio[2141]: time="2024-04-01 19:18:08.142967073Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b63b98be-148f-4d65-bf39-b6887183ef61 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:18:08 pause-208693 crio[2141]: time="2024-04-01 19:18:08.143044164Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b63b98be-148f-4d65-bf39-b6887183ef61 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:18:08 pause-208693 crio[2141]: time="2024-04-01 19:18:08.143273648Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d51e592bf39a8faf7132b084687b1accbdb18416ca3406e14ab22c4cc914398f,PodSandboxId:f8530e9e195d4d7e3093c22d1ed84743b6d1388e4e2682b416e6c41595984fc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711999071294496603,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rldp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9bf80ea-9ada-4a47-bab1-e78b9223d2a8,},Annotations:map[string]string{io.kubernetes.container.hash: dae92949,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee316adbd86a389d2cee2a243bb53d9623bdc19f7f4ada9f6d1dca071d0882d0,PodSandboxId:6c9433488dd88aea169cba5d687c2b18ffb7bacf191081aad083c7e1b83bb2eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711999071239675723,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df6ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: acb3c498-4e8d-4a02-b9d7-8a368f9303d0,},Annotations:map[string]string{io.kubernetes.container.hash: d19b4992,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f63a9c00dde6c19df299f9c1f4a733df97d6952398eceec187693dc9f073374,PodSandboxId:048b641526bb6ff733a7c91666f55b7716aab4d9aeea375fde25db0b088b73ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711999066608613618,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ba31f6ed98991b7270f0eb1bd6de561,},Annot
ations:map[string]string{io.kubernetes.container.hash: e5a2f95e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a040f7057b6a33daba995297514b7b885fc9fe29532005c42b5e51d13105fdb9,PodSandboxId:cf7a984a2a9d307920f4f8475a386958d0e0b0ef51bdaa8e8792df7e58a19df4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711999066645334372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a372662f6344b45a1f4085d401140c1a,},Annotations:map[string]
string{io.kubernetes.container.hash: d6729e28,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:354ccfac9527c54ac400591dde36f20a21b0f39232cee1442492d045c16195b2,PodSandboxId:073f93c45437f090972f867049697da94b76179da01cae458f4787eed49f9346,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711999066641810226,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa6509d897132c9a939f9cc49fd2164,},Annotations:map[string]string{io.kubernet
es.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d177654682bcbda844c640fa087f542ecebee05bce474ac4d3d8194d5ef6b06,PodSandboxId:7220ef36a671b8f828ba91ff79b645337994d459c971539c2e70edde07b7577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711999066590692066,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c1e1b9ed1eae1e76338c581a974e1b2,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f0eff30b7c7d78a434ac0cebb793087012ebc1a4e3af4377acb07b114c7b1b,PodSandboxId:f8530e9e195d4d7e3093c22d1ed84743b6d1388e4e2682b416e6c41595984fc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711999042746299873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rldp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9bf80ea-9ada-4a47-bab1-e78b9223d2a8,},Annotations:map[string]string{io.kubernetes.container.hash: dae9
2949,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffbcd93cd4dc6dd7bdba8817fe9464043c1441e48a2f0a339d8e2f90465c23b2,PodSandboxId:6c9433488dd88aea169cba5d687c2b18ffb7bacf191081aad083c7e1b83bb2eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711999042518565519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-df6ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acb3c498-4e8d-4a02-b9d7-8a368f9303d0,},Annotations:map[string]string{io.kubernetes.container.hash: d19b4992,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0be624c77f6b7779d07398d8ac81b11ff2d1e2491332385f9bc7abd08da4d1,PodSandboxId:073f93c45437f090972f867049697da94b76179da01cae458f4787eed49f9346,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711999042384301994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa6509d897132c9a939f9cc49fd2164,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d9857c1d11f7699cdda344ccc35292880e1e966398923e7c7c8a221bb17fbb4,PodSandboxId:048b641526bb6ff733a7c91666f55b7716aab4d9aeea375fde25db0b088b73ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711999042408662170,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-208693,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3ba31f6ed98991b7270f0eb1bd6de561,},Annotations:map[string]string{io.kubernetes.container.hash: e5a2f95e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90827beb7d452b745ca6b9be1e1cbf187b22f2a83733fa1cf32f65dd51871a94,PodSandboxId:cf7a984a2a9d307920f4f8475a386958d0e0b0ef51bdaa8e8792df7e58a19df4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711999042354498532,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: a372662f6344b45a1f4085d401140c1a,},Annotations:map[string]string{io.kubernetes.container.hash: d6729e28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a191a299d42de032a4e1b058d778aeb8a768699852f90479ed27525750c39dcb,PodSandboxId:7220ef36a671b8f828ba91ff79b645337994d459c971539c2e70edde07b7577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711999042250503194,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9c1e1b9ed1eae1e76338c581a974e1b2,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b63b98be-148f-4d65-bf39-b6887183ef61 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d51e592bf39a8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 seconds ago      Running             coredns                   2                   f8530e9e195d4       coredns-76f75df574-rldp9
	ee316adbd86a3       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   16 seconds ago      Running             kube-proxy                2                   6c9433488dd88       kube-proxy-df6ns
	a040f7057b6a3       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   21 seconds ago      Running             kube-apiserver            2                   cf7a984a2a9d3       kube-apiserver-pause-208693
	354ccfac9527c       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   21 seconds ago      Running             kube-scheduler            2                   073f93c45437f       kube-scheduler-pause-208693
	2f63a9c00dde6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   21 seconds ago      Running             etcd                      2                   048b641526bb6       etcd-pause-208693
	4d177654682bc       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   21 seconds ago      Running             kube-controller-manager   2                   7220ef36a671b       kube-controller-manager-pause-208693
	51f0eff30b7c7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   45 seconds ago      Exited              coredns                   1                   f8530e9e195d4       coredns-76f75df574-rldp9
	ffbcd93cd4dc6       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   45 seconds ago      Exited              kube-proxy                1                   6c9433488dd88       kube-proxy-df6ns
	2d9857c1d11f7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   45 seconds ago      Exited              etcd                      1                   048b641526bb6       etcd-pause-208693
	eb0be624c77f6       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   45 seconds ago      Exited              kube-scheduler            1                   073f93c45437f       kube-scheduler-pause-208693
	90827beb7d452       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   45 seconds ago      Exited              kube-apiserver            1                   cf7a984a2a9d3       kube-apiserver-pause-208693
	a191a299d42de       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   45 seconds ago      Exited              kube-controller-manager   1                   7220ef36a671b       kube-controller-manager-pause-208693
	
	
	==> coredns [51f0eff30b7c7d78a434ac0cebb793087012ebc1a4e3af4377acb07b114c7b1b] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:48134 - 19721 "HINFO IN 6185462386848793682.7824587969644291913. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009841976s
	
	
	==> coredns [d51e592bf39a8faf7132b084687b1accbdb18416ca3406e14ab22c4cc914398f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52870 - 27388 "HINFO IN 7710611610440155977.7233152509346432534. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009741877s
	
	
	==> describe nodes <==
	Name:               pause-208693
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-208693
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=pause-208693
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_01T19_16_10_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 19:16:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-208693
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 19:18:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 19:17:50 +0000   Mon, 01 Apr 2024 19:16:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 19:17:50 +0000   Mon, 01 Apr 2024 19:16:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 19:17:50 +0000   Mon, 01 Apr 2024 19:16:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 19:17:50 +0000   Mon, 01 Apr 2024 19:16:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    pause-208693
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 708746cc343f4c35bfb83cd045a64864
	  System UUID:                708746cc-343f-4c35-bfb8-3cd045a64864
	  Boot ID:                    33c40c98-8ea2-46b4-a76e-553079d53cc1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-rldp9                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     105s
	  kube-system                 etcd-pause-208693                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         118s
	  kube-system                 kube-apiserver-pause-208693             250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-pause-208693    200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-df6ns                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-pause-208693             100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 103s                 kube-proxy       
	  Normal   Starting                 16s                  kube-proxy       
	  Normal   Starting                 41s                  kube-proxy       
	  Normal   NodeHasSufficientPID     2m5s (x7 over 2m5s)  kubelet          Node pause-208693 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node pause-208693 status is now: NodeHasSufficientMemory
	  Normal   Starting                 2m5s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node pause-208693 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 118s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  118s                 kubelet          Node pause-208693 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    118s                 kubelet          Node pause-208693 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     118s                 kubelet          Node pause-208693 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  118s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                117s                 kubelet          Node pause-208693 status is now: NodeReady
	  Normal   RegisteredNode           106s                 node-controller  Node pause-208693 event: Registered Node pause-208693 in Controller
	  Warning  ContainerGCFailed        58s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 23s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node pause-208693 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node pause-208693 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node pause-208693 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           6s                   node-controller  Node pause-208693 event: Registered Node pause-208693 in Controller
	
	
	==> dmesg <==
	[  +0.073673] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.218396] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.148137] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.315544] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +5.124377] systemd-fstab-generator[770]: Ignoring "noauto" option for root device
	[  +0.068461] kauditd_printk_skb: 130 callbacks suppressed
	[Apr 1 19:16] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +1.104446] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.212560] systemd-fstab-generator[1280]: Ignoring "noauto" option for root device
	[  +0.089272] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.031692] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.268388] systemd-fstab-generator[1488]: Ignoring "noauto" option for root device
	[Apr 1 19:17] kauditd_printk_skb: 63 callbacks suppressed
	[  +9.705573] systemd-fstab-generator[2061]: Ignoring "noauto" option for root device
	[  +0.159308] systemd-fstab-generator[2073]: Ignoring "noauto" option for root device
	[  +0.236138] systemd-fstab-generator[2087]: Ignoring "noauto" option for root device
	[  +0.162323] systemd-fstab-generator[2099]: Ignoring "noauto" option for root device
	[  +0.373734] systemd-fstab-generator[2127]: Ignoring "noauto" option for root device
	[  +5.970510] systemd-fstab-generator[2219]: Ignoring "noauto" option for root device
	[  +0.082510] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.689758] kauditd_printk_skb: 81 callbacks suppressed
	[ +18.211221] systemd-fstab-generator[3043]: Ignoring "noauto" option for root device
	[  +5.666109] kauditd_printk_skb: 40 callbacks suppressed
	[Apr 1 19:18] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.405568] systemd-fstab-generator[3483]: Ignoring "noauto" option for root device
	
	
	==> etcd [2d9857c1d11f7699cdda344ccc35292880e1e966398923e7c7c8a221bb17fbb4] <==
	{"level":"info","ts":"2024-04-01T19:17:23.21841Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-01T19:17:24.152944Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-01T19:17:24.153043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-01T19:17:24.153085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde received MsgPreVoteResp from a69e859ffe38fcde at term 2"}
	{"level":"info","ts":"2024-04-01T19:17:24.153117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde became candidate at term 3"}
	{"level":"info","ts":"2024-04-01T19:17:24.153148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde received MsgVoteResp from a69e859ffe38fcde at term 3"}
	{"level":"info","ts":"2024-04-01T19:17:24.153182Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde became leader at term 3"}
	{"level":"info","ts":"2024-04-01T19:17:24.153215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a69e859ffe38fcde elected leader a69e859ffe38fcde at term 3"}
	{"level":"info","ts":"2024-04-01T19:17:24.15812Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a69e859ffe38fcde","local-member-attributes":"{Name:pause-208693 ClientURLs:[https://192.168.39.250:2379]}","request-path":"/0/members/a69e859ffe38fcde/attributes","cluster-id":"f7a04275a0bf31","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-01T19:17:24.15899Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T19:17:24.159089Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T19:17:24.162062Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-01T19:17:24.162134Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-01T19:17:24.163619Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-01T19:17:24.204144Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.250:2379"}
	{"level":"info","ts":"2024-04-01T19:17:33.648515Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-01T19:17:33.648574Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-208693","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.250:2380"],"advertise-client-urls":["https://192.168.39.250:2379"]}
	{"level":"warn","ts":"2024-04-01T19:17:33.648678Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-01T19:17:33.648789Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-01T19:17:33.66658Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.250:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-01T19:17:33.666716Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.250:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-01T19:17:33.666819Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"a69e859ffe38fcde","current-leader-member-id":"a69e859ffe38fcde"}
	{"level":"info","ts":"2024-04-01T19:17:33.670779Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.250:2380"}
	{"level":"info","ts":"2024-04-01T19:17:33.67105Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.250:2380"}
	{"level":"info","ts":"2024-04-01T19:17:33.67109Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-208693","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.250:2380"],"advertise-client-urls":["https://192.168.39.250:2379"]}
	
	
	==> etcd [2f63a9c00dde6c19df299f9c1f4a733df97d6952398eceec187693dc9f073374] <==
	{"level":"info","ts":"2024-04-01T19:17:47.396926Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-01T19:17:47.396976Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-01T19:17:47.397207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde switched to configuration voters=(12006180578827762910)"}
	{"level":"info","ts":"2024-04-01T19:17:47.39796Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f7a04275a0bf31","local-member-id":"a69e859ffe38fcde","added-peer-id":"a69e859ffe38fcde","added-peer-peer-urls":["https://192.168.39.250:2380"]}
	{"level":"info","ts":"2024-04-01T19:17:47.398097Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f7a04275a0bf31","local-member-id":"a69e859ffe38fcde","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:17:47.399959Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:17:47.422065Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.250:2380"}
	{"level":"info","ts":"2024-04-01T19:17:47.422108Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.250:2380"}
	{"level":"info","ts":"2024-04-01T19:17:47.417836Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-01T19:17:47.423823Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a69e859ffe38fcde","initial-advertise-peer-urls":["https://192.168.39.250:2380"],"listen-peer-urls":["https://192.168.39.250:2380"],"advertise-client-urls":["https://192.168.39.250:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.250:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-01T19:17:47.428939Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-01T19:17:48.514901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde is starting a new election at term 3"}
	{"level":"info","ts":"2024-04-01T19:17:48.515016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde became pre-candidate at term 3"}
	{"level":"info","ts":"2024-04-01T19:17:48.515058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde received MsgPreVoteResp from a69e859ffe38fcde at term 3"}
	{"level":"info","ts":"2024-04-01T19:17:48.515089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde became candidate at term 4"}
	{"level":"info","ts":"2024-04-01T19:17:48.515113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde received MsgVoteResp from a69e859ffe38fcde at term 4"}
	{"level":"info","ts":"2024-04-01T19:17:48.51514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde became leader at term 4"}
	{"level":"info","ts":"2024-04-01T19:17:48.515172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a69e859ffe38fcde elected leader a69e859ffe38fcde at term 4"}
	{"level":"info","ts":"2024-04-01T19:17:48.521243Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a69e859ffe38fcde","local-member-attributes":"{Name:pause-208693 ClientURLs:[https://192.168.39.250:2379]}","request-path":"/0/members/a69e859ffe38fcde/attributes","cluster-id":"f7a04275a0bf31","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-01T19:17:48.52132Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T19:17:48.521743Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T19:17:48.523811Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-01T19:17:48.524028Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.250:2379"}
	{"level":"info","ts":"2024-04-01T19:17:48.524195Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-01T19:17:48.524234Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:18:08 up 2 min,  0 users,  load average: 0.98, 0.44, 0.17
	Linux pause-208693 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [90827beb7d452b745ca6b9be1e1cbf187b22f2a83733fa1cf32f65dd51871a94] <==
	W0401 19:17:42.975776       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:42.993135       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.031665       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.114071       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.147042       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.151607       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.163833       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.171771       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.181549       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.232142       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.270973       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.293088       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.315953       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.373722       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.413141       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.435343       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.437845       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.443545       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.495226       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.505338       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.533202       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.568291       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.569695       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.618256       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.942305       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [a040f7057b6a33daba995297514b7b885fc9fe29532005c42b5e51d13105fdb9] <==
	I0401 19:17:49.972800       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0401 19:17:49.974947       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0401 19:17:49.975065       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0401 19:17:50.013986       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0401 19:17:50.014075       1 shared_informer.go:318] Caches are synced for configmaps
	I0401 19:17:50.013996       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0401 19:17:50.014148       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0401 19:17:50.014232       1 aggregator.go:165] initial CRD sync complete...
	I0401 19:17:50.014256       1 autoregister_controller.go:141] Starting autoregister controller
	I0401 19:17:50.014277       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0401 19:17:50.014300       1 cache.go:39] Caches are synced for autoregister controller
	I0401 19:17:50.014009       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0401 19:17:50.019329       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0401 19:17:50.020298       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0401 19:17:50.027511       1 shared_informer.go:318] Caches are synced for node_authorizer
	E0401 19:17:50.038613       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0401 19:17:50.050727       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 19:17:50.915767       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 19:17:51.724531       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0401 19:17:51.751001       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0401 19:17:51.809560       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0401 19:17:51.843107       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 19:17:51.852162       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 19:18:02.558408       1 controller.go:624] quota admission added evaluator for: endpoints
	I0401 19:18:02.579084       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4d177654682bcbda844c640fa087f542ecebee05bce474ac4d3d8194d5ef6b06] <==
	I0401 19:18:02.508734       1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner
	I0401 19:18:02.510569       1 shared_informer.go:318] Caches are synced for attach detach
	I0401 19:18:02.520736       1 shared_informer.go:318] Caches are synced for taint
	I0401 19:18:02.520984       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0401 19:18:02.521131       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-208693"
	I0401 19:18:02.521172       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0401 19:18:02.521363       1 shared_informer.go:318] Caches are synced for resource quota
	I0401 19:18:02.521826       1 event.go:376] "Event occurred" object="pause-208693" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-208693 event: Registered Node pause-208693 in Controller"
	I0401 19:18:02.527005       1 shared_informer.go:318] Caches are synced for job
	I0401 19:18:02.527272       1 shared_informer.go:318] Caches are synced for persistent volume
	I0401 19:18:02.533505       1 shared_informer.go:318] Caches are synced for ephemeral
	I0401 19:18:02.533521       1 shared_informer.go:318] Caches are synced for daemon sets
	I0401 19:18:02.540423       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0401 19:18:02.545970       1 shared_informer.go:318] Caches are synced for disruption
	I0401 19:18:02.549474       1 shared_informer.go:318] Caches are synced for endpoint
	I0401 19:18:02.552991       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0401 19:18:02.553197       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="110.265µs"
	I0401 19:18:02.559080       1 shared_informer.go:318] Caches are synced for taint-eviction-controller
	I0401 19:18:02.568059       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0401 19:18:02.569735       1 shared_informer.go:318] Caches are synced for stateful set
	I0401 19:18:02.569917       1 shared_informer.go:318] Caches are synced for resource quota
	I0401 19:18:02.612284       1 shared_informer.go:318] Caches are synced for HPA
	I0401 19:18:02.961171       1 shared_informer.go:318] Caches are synced for garbage collector
	I0401 19:18:03.017168       1 shared_informer.go:318] Caches are synced for garbage collector
	I0401 19:18:03.017330       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [a191a299d42de032a4e1b058d778aeb8a768699852f90479ed27525750c39dcb] <==
	I0401 19:17:24.547397       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0401 19:17:24.547446       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 19:17:24.549735       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0401 19:17:24.550108       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0401 19:17:24.551144       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0401 19:17:24.551268       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0401 19:17:27.970811       1 controllermanager.go:735] "Started controller" controller="serviceaccount-token-controller"
	I0401 19:17:27.970910       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0401 19:17:27.977125       1 controllermanager.go:735] "Started controller" controller="persistentvolume-protection-controller"
	I0401 19:17:27.977268       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0401 19:17:27.977302       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0401 19:17:27.980038       1 controllermanager.go:735] "Started controller" controller="endpointslice-mirroring-controller"
	I0401 19:17:27.980361       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0401 19:17:27.980397       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0401 19:17:27.983084       1 controllermanager.go:735] "Started controller" controller="replicationcontroller-controller"
	I0401 19:17:27.983193       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0401 19:17:27.983440       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0401 19:17:27.992120       1 controllermanager.go:735] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0401 19:17:27.992321       1 horizontal.go:200] "Starting HPA controller"
	I0401 19:17:27.992364       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0401 19:17:27.994619       1 controllermanager.go:735] "Started controller" controller="token-cleaner-controller"
	I0401 19:17:27.994937       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0401 19:17:27.994975       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0401 19:17:27.994982       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0401 19:17:28.072162       1 shared_informer.go:318] Caches are synced for tokens
	
	
	==> kube-proxy [ee316adbd86a389d2cee2a243bb53d9623bdc19f7f4ada9f6d1dca071d0882d0] <==
	I0401 19:17:51.440496       1 server_others.go:72] "Using iptables proxy"
	I0401 19:17:51.460403       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.250"]
	I0401 19:17:51.544498       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0401 19:17:51.544541       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 19:17:51.544560       1 server_others.go:168] "Using iptables Proxier"
	I0401 19:17:51.557985       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 19:17:51.558271       1 server.go:865] "Version info" version="v1.29.3"
	I0401 19:17:51.558316       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 19:17:51.562322       1 config.go:188] "Starting service config controller"
	I0401 19:17:51.562370       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0401 19:17:51.562402       1 config.go:97] "Starting endpoint slice config controller"
	I0401 19:17:51.562407       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0401 19:17:51.563736       1 config.go:315] "Starting node config controller"
	I0401 19:17:51.563779       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0401 19:17:51.664116       1 shared_informer.go:318] Caches are synced for node config
	I0401 19:17:51.664140       1 shared_informer.go:318] Caches are synced for service config
	I0401 19:17:51.664180       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ffbcd93cd4dc6dd7bdba8817fe9464043c1441e48a2f0a339d8e2f90465c23b2] <==
	I0401 19:17:23.918297       1 server_others.go:72] "Using iptables proxy"
	E0401 19:17:25.993969       1 server.go:1039] "Failed to retrieve node info" err="nodes \"pause-208693\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found]"
	I0401 19:17:27.084108       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.250"]
	I0401 19:17:27.233419       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0401 19:17:27.233560       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 19:17:27.233591       1 server_others.go:168] "Using iptables Proxier"
	I0401 19:17:27.247364       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 19:17:27.247627       1 server.go:865] "Version info" version="v1.29.3"
	I0401 19:17:27.247686       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 19:17:27.252289       1 config.go:188] "Starting service config controller"
	I0401 19:17:27.252403       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0401 19:17:27.253206       1 config.go:97] "Starting endpoint slice config controller"
	I0401 19:17:27.253260       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0401 19:17:27.253775       1 config.go:315] "Starting node config controller"
	I0401 19:17:27.253923       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0401 19:17:27.352579       1 shared_informer.go:318] Caches are synced for service config
	I0401 19:17:27.353961       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0401 19:17:27.354407       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [354ccfac9527c54ac400591dde36f20a21b0f39232cee1442492d045c16195b2] <==
	I0401 19:17:47.851110       1 serving.go:380] Generated self-signed cert in-memory
	W0401 19:17:49.977593       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0401 19:17:49.977710       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 19:17:49.977800       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0401 19:17:49.977834       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0401 19:17:50.012944       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0401 19:17:50.013094       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 19:17:50.016954       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0401 19:17:50.017059       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 19:17:50.021707       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0401 19:17:50.022459       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0401 19:17:50.118149       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [eb0be624c77f6b7779d07398d8ac81b11ff2d1e2491332385f9bc7abd08da4d1] <==
	I0401 19:17:24.176095       1 serving.go:380] Generated self-signed cert in-memory
	W0401 19:17:25.986695       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0401 19:17:25.986794       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 19:17:25.986810       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0401 19:17:25.986816       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0401 19:17:26.021044       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0401 19:17:26.021214       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 19:17:26.024953       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0401 19:17:26.025265       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0401 19:17:26.025314       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 19:17:26.025352       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0401 19:17:26.126450       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 19:17:33.797292       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0401 19:17:33.797396       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0401 19:17:33.797529       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0401 19:17:33.798240       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 01 19:17:46 pause-208693 kubelet[3050]: I0401 19:17:46.568331    3050 scope.go:117] "RemoveContainer" containerID="90827beb7d452b745ca6b9be1e1cbf187b22f2a83733fa1cf32f65dd51871a94"
	Apr 01 19:17:46 pause-208693 kubelet[3050]: I0401 19:17:46.669430    3050 kubelet_node_status.go:73] "Attempting to register node" node="pause-208693"
	Apr 01 19:17:46 pause-208693 kubelet[3050]: E0401 19:17:46.670438    3050 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.250:8443: connect: connection refused" node="pause-208693"
	Apr 01 19:17:46 pause-208693 kubelet[3050]: W0401 19:17:46.874223    3050 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-208693&limit=500&resourceVersion=0": dial tcp 192.168.39.250:8443: connect: connection refused
	Apr 01 19:17:46 pause-208693 kubelet[3050]: E0401 19:17:46.874314    3050 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-208693&limit=500&resourceVersion=0": dial tcp 192.168.39.250:8443: connect: connection refused
	Apr 01 19:17:46 pause-208693 kubelet[3050]: W0401 19:17:46.891122    3050 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.250:8443: connect: connection refused
	Apr 01 19:17:46 pause-208693 kubelet[3050]: E0401 19:17:46.891204    3050 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.250:8443: connect: connection refused
	Apr 01 19:17:47 pause-208693 kubelet[3050]: W0401 19:17:47.051783    3050 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.250:8443: connect: connection refused
	Apr 01 19:17:47 pause-208693 kubelet[3050]: E0401 19:17:47.051968    3050 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.250:8443: connect: connection refused
	Apr 01 19:17:47 pause-208693 kubelet[3050]: W0401 19:17:47.065777    3050 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.250:8443: connect: connection refused
	Apr 01 19:17:47 pause-208693 kubelet[3050]: E0401 19:17:47.065836    3050 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.250:8443: connect: connection refused
	Apr 01 19:17:47 pause-208693 kubelet[3050]: I0401 19:17:47.473358    3050 kubelet_node_status.go:73] "Attempting to register node" node="pause-208693"
	Apr 01 19:17:50 pause-208693 kubelet[3050]: I0401 19:17:50.084995    3050 kubelet_node_status.go:112] "Node was previously registered" node="pause-208693"
	Apr 01 19:17:50 pause-208693 kubelet[3050]: I0401 19:17:50.085124    3050 kubelet_node_status.go:76] "Successfully registered node" node="pause-208693"
	Apr 01 19:17:50 pause-208693 kubelet[3050]: I0401 19:17:50.087055    3050 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 01 19:17:50 pause-208693 kubelet[3050]: I0401 19:17:50.088308    3050 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 01 19:17:50 pause-208693 kubelet[3050]: I0401 19:17:50.920320    3050 apiserver.go:52] "Watching apiserver"
	Apr 01 19:17:50 pause-208693 kubelet[3050]: I0401 19:17:50.923173    3050 topology_manager.go:215] "Topology Admit Handler" podUID="acb3c498-4e8d-4a02-b9d7-8a368f9303d0" podNamespace="kube-system" podName="kube-proxy-df6ns"
	Apr 01 19:17:50 pause-208693 kubelet[3050]: I0401 19:17:50.923721    3050 topology_manager.go:215] "Topology Admit Handler" podUID="c9bf80ea-9ada-4a47-bab1-e78b9223d2a8" podNamespace="kube-system" podName="coredns-76f75df574-rldp9"
	Apr 01 19:17:50 pause-208693 kubelet[3050]: I0401 19:17:50.936131    3050 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Apr 01 19:17:50 pause-208693 kubelet[3050]: I0401 19:17:50.996957    3050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acb3c498-4e8d-4a02-b9d7-8a368f9303d0-lib-modules\") pod \"kube-proxy-df6ns\" (UID: \"acb3c498-4e8d-4a02-b9d7-8a368f9303d0\") " pod="kube-system/kube-proxy-df6ns"
	Apr 01 19:17:50 pause-208693 kubelet[3050]: I0401 19:17:50.997039    3050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acb3c498-4e8d-4a02-b9d7-8a368f9303d0-xtables-lock\") pod \"kube-proxy-df6ns\" (UID: \"acb3c498-4e8d-4a02-b9d7-8a368f9303d0\") " pod="kube-system/kube-proxy-df6ns"
	Apr 01 19:17:51 pause-208693 kubelet[3050]: I0401 19:17:51.224204    3050 scope.go:117] "RemoveContainer" containerID="ffbcd93cd4dc6dd7bdba8817fe9464043c1441e48a2f0a339d8e2f90465c23b2"
	Apr 01 19:17:51 pause-208693 kubelet[3050]: I0401 19:17:51.224649    3050 scope.go:117] "RemoveContainer" containerID="51f0eff30b7c7d78a434ac0cebb793087012ebc1a4e3af4377acb07b114c7b1b"
	Apr 01 19:17:56 pause-208693 kubelet[3050]: I0401 19:17:56.037293    3050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 19:18:07.648962   56352 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18233-10493/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
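The "bufio.Scanner: token too long" failure above is Go's bufio.Scanner hitting its default 64 KiB per-line limit while reading lastStart.txt, which contains very long single-line entries (for example the cluster config dumps later in these logs). A minimal, hypothetical Go sketch of reading such a file with an enlarged scanner buffer — not minikube's actual logs.go implementation — would look like:

	// longlines.go: read a file whose individual lines can exceed
	// bufio.Scanner's default 64 KiB token limit (the cause of the
	// "token too long" error above). Illustrative sketch only.
	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // path is illustrative
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Allow lines up to 10 MiB instead of the 64 KiB default.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(len(sc.Text()))
		}
		if err := sc.Err(); err != nil {
			// Without the Buffer call, an over-long line would surface
			// here as bufio.ErrTooLong ("token too long").
			log.Fatal(err)
		}
	}
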
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-208693 -n pause-208693
helpers_test.go:261: (dbg) Run:  kubectl --context pause-208693 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
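For context, the --format={{.APIServer}} and --format={{.Host}} arguments used by these status checks are Go text/template expressions rendered against minikube's status output. A small illustrative sketch, using a hypothetical Status struct rather than minikube's real type:

	// statusfmt.go: show how a --format={{.APIServer}}-style Go template
	// is rendered. The Status struct and its values are hypothetical.
	package main

	import (
		"log"
		"os"
		"text/template"
	)

	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running"}
		tmpl, err := template.New("status").Parse("{{.APIServer}}")
		if err != nil {
			log.Fatal(err)
		}
		// Prints "Running" for this hypothetical status value.
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			log.Fatal(err)
		}
	}
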
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-208693 -n pause-208693
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-208693 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-208693 logs -n 25: (1.641634266s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p cilium-408543 sudo crio            | cilium-408543             | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:12 UTC |                     |
	|         | config                                |                           |         |                |                     |                     |
	| delete  | -p cilium-408543                      | cilium-408543             | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:12 UTC | 01 Apr 24 19:12 UTC |
	| start   | -p running-upgrade-349166             | minikube                  | jenkins | v1.26.0        | 01 Apr 24 19:12 UTC | 01 Apr 24 19:14 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |                |                     |                     |
	|         |  --container-runtime=crio             |                           |         |                |                     |                     |
	| ssh     | -p NoKubernetes-249249 sudo           | NoKubernetes-249249       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:12 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |                |                     |                     |
	|         | service kubelet                       |                           |         |                |                     |                     |
	| stop    | -p NoKubernetes-249249                | NoKubernetes-249249       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:12 UTC | 01 Apr 24 19:13 UTC |
	| start   | -p NoKubernetes-249249                | NoKubernetes-249249       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:13 UTC | 01 Apr 24 19:13 UTC |
	|         | --driver=kvm2                         |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| ssh     | cert-options-444257 ssh               | cert-options-444257       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:13 UTC | 01 Apr 24 19:13 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |                |                     |                     |
	| ssh     | -p cert-options-444257 -- sudo        | cert-options-444257       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:13 UTC | 01 Apr 24 19:13 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |                |                     |                     |
	| delete  | -p cert-options-444257                | cert-options-444257       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:13 UTC | 01 Apr 24 19:13 UTC |
	| start   | -p kubernetes-upgrade-054413          | kubernetes-upgrade-054413 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:13 UTC |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |                |                     |                     |
	|         | --alsologtostderr                     |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| ssh     | -p NoKubernetes-249249 sudo           | NoKubernetes-249249       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:13 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |                |                     |                     |
	|         | service kubelet                       |                           |         |                |                     |                     |
	| delete  | -p NoKubernetes-249249                | NoKubernetes-249249       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:13 UTC | 01 Apr 24 19:13 UTC |
	| start   | -p stopped-upgrade-246129             | minikube                  | jenkins | v1.26.0        | 01 Apr 24 19:13 UTC | 01 Apr 24 19:15 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |                |                     |                     |
	|         |  --container-runtime=crio             |                           |         |                |                     |                     |
	| start   | -p cert-expiration-385547             | cert-expiration-385547    | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:13 UTC | 01 Apr 24 19:15 UTC |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |                |                     |                     |
	|         | --driver=kvm2                         |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| start   | -p running-upgrade-349166             | running-upgrade-349166    | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:14 UTC | 01 Apr 24 19:16 UTC |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --alsologtostderr                     |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| delete  | -p cert-expiration-385547             | cert-expiration-385547    | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:15 UTC | 01 Apr 24 19:15 UTC |
	| start   | -p pause-208693 --memory=2048         | pause-208693              | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:15 UTC | 01 Apr 24 19:17 UTC |
	|         | --install-addons=false                |                           |         |                |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| stop    | stopped-upgrade-246129 stop           | minikube                  | jenkins | v1.26.0        | 01 Apr 24 19:15 UTC | 01 Apr 24 19:15 UTC |
	| start   | -p stopped-upgrade-246129             | stopped-upgrade-246129    | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:15 UTC | 01 Apr 24 19:16 UTC |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --alsologtostderr                     |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| delete  | -p running-upgrade-349166             | running-upgrade-349166    | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:16 UTC | 01 Apr 24 19:16 UTC |
	| start   | -p auto-408543 --memory=3072          | auto-408543               | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:16 UTC | 01 Apr 24 19:17 UTC |
	|         | --alsologtostderr --wait=true         |                           |         |                |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |                |                     |                     |
	|         | --driver=kvm2                         |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| delete  | -p stopped-upgrade-246129             | stopped-upgrade-246129    | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:16 UTC | 01 Apr 24 19:16 UTC |
	| start   | -p kindnet-408543                     | kindnet-408543            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:16 UTC |                     |
	|         | --memory=3072                         |                           |         |                |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |                |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |                |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| start   | -p pause-208693                       | pause-208693              | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:17 UTC | 01 Apr 24 19:18 UTC |
	|         | --alsologtostderr                     |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |                |                     |                     |
	|         | --container-runtime=crio              |                           |         |                |                     |                     |
	| ssh     | -p auto-408543 pgrep -a               | auto-408543               | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:17 UTC | 01 Apr 24 19:17 UTC |
	|         | kubelet                               |                           |         |                |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 19:17:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 19:17:06.559176   55506 out.go:291] Setting OutFile to fd 1 ...
	I0401 19:17:06.559696   55506 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:17:06.559716   55506 out.go:304] Setting ErrFile to fd 2...
	I0401 19:17:06.559725   55506 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:17:06.560174   55506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 19:17:06.561139   55506 out.go:298] Setting JSON to false
	I0401 19:17:06.562213   55506 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7179,"bootTime":1711991848,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:17:06.562280   55506 start.go:139] virtualization: kvm guest
	I0401 19:17:06.564131   55506 out.go:177] * [pause-208693] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 19:17:06.565567   55506 notify.go:220] Checking for updates...
	I0401 19:17:06.565578   55506 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 19:17:06.567067   55506 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:17:06.568498   55506 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:17:06.569843   55506 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:17:06.571020   55506 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 19:17:06.572203   55506 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 19:17:06.573864   55506 config.go:182] Loaded profile config "pause-208693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:17:06.574465   55506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:17:06.574509   55506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:17:06.589665   55506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44483
	I0401 19:17:06.590127   55506 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:17:06.590641   55506 main.go:141] libmachine: Using API Version  1
	I0401 19:17:06.590661   55506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:17:06.591067   55506 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:17:06.591266   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:06.591497   55506 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 19:17:06.591772   55506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:17:06.591803   55506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:17:06.606712   55506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41555
	I0401 19:17:06.607106   55506 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:17:06.607724   55506 main.go:141] libmachine: Using API Version  1
	I0401 19:17:06.607751   55506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:17:06.608126   55506 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:17:06.608338   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:06.646606   55506 out.go:177] * Using the kvm2 driver based on existing profile
	I0401 19:17:06.647900   55506 start.go:297] selected driver: kvm2
	I0401 19:17:06.647913   55506 start.go:901] validating driver "kvm2" against &{Name:pause-208693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:pause-208693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:17:06.648030   55506 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 19:17:06.648418   55506 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:17:06.648520   55506 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 19:17:06.666623   55506 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 19:17:06.667314   55506 cni.go:84] Creating CNI manager for ""
	I0401 19:17:06.667339   55506 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:17:06.667397   55506 start.go:340] cluster config:
	{Name:pause-208693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:pause-208693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:17:06.667561   55506 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:17:06.669365   55506 out.go:177] * Starting "pause-208693" primary control-plane node in "pause-208693" cluster
	I0401 19:17:06.670568   55506 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 19:17:06.670599   55506 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0401 19:17:06.670633   55506 cache.go:56] Caching tarball of preloaded images
	I0401 19:17:06.670715   55506 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 19:17:06.670725   55506 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0401 19:17:06.670830   55506 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693/config.json ...
	I0401 19:17:06.671016   55506 start.go:360] acquireMachinesLock for pause-208693: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 19:17:07.750918   55506 start.go:364] duration metric: took 1.079863315s to acquireMachinesLock for "pause-208693"
	I0401 19:17:07.750971   55506 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:17:07.750985   55506 fix.go:54] fixHost starting: 
	I0401 19:17:07.751434   55506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:17:07.751475   55506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:17:07.768026   55506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41597
	I0401 19:17:07.768442   55506 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:17:07.768953   55506 main.go:141] libmachine: Using API Version  1
	I0401 19:17:07.768978   55506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:17:07.769299   55506 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:17:07.769508   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:07.769716   55506 main.go:141] libmachine: (pause-208693) Calling .GetState
	I0401 19:17:07.771318   55506 fix.go:112] recreateIfNeeded on pause-208693: state=Running err=<nil>
	W0401 19:17:07.771337   55506 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:17:07.773627   55506 out.go:177] * Updating the running kvm2 "pause-208693" VM ...
	I0401 19:17:04.788178   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:05.287583   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:05.787429   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:06.288310   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:06.788078   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:07.287554   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:07.787437   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:08.288358   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:08.788348   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:09.288213   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:06.485407   55206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:17:06.485426   55206 main.go:141] libmachine: Detecting the provisioner...
	I0401 19:17:06.485434   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHHostname
	I0401 19:17:06.488587   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:06.488950   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:06.488979   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:06.489188   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHPort
	I0401 19:17:06.489364   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:06.489550   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:06.489710   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHUsername
	I0401 19:17:06.489884   55206 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:06.490047   55206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.92 22 <nil> <nil>}
	I0401 19:17:06.490060   55206 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0401 19:17:06.607155   55206 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0401 19:17:06.607233   55206 main.go:141] libmachine: found compatible host: buildroot
	I0401 19:17:06.607246   55206 main.go:141] libmachine: Provisioning with buildroot...
	I0401 19:17:06.607256   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetMachineName
	I0401 19:17:06.607447   55206 buildroot.go:166] provisioning hostname "kindnet-408543"
	I0401 19:17:06.607468   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetMachineName
	I0401 19:17:06.607632   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHHostname
	I0401 19:17:06.610975   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:06.611366   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:06.611391   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:06.611577   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHPort
	I0401 19:17:06.611749   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:06.611915   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:06.612093   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHUsername
	I0401 19:17:06.612285   55206 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:06.612506   55206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.92 22 <nil> <nil>}
	I0401 19:17:06.612534   55206 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-408543 && echo "kindnet-408543" | sudo tee /etc/hostname
	I0401 19:17:06.742155   55206 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-408543
	
	I0401 19:17:06.742182   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHHostname
	I0401 19:17:06.745139   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:06.745460   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:06.745487   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:06.745698   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHPort
	I0401 19:17:06.745915   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:06.746065   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:06.746208   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHUsername
	I0401 19:17:06.746378   55206 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:06.746539   55206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.92 22 <nil> <nil>}
	I0401 19:17:06.746555   55206 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-408543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-408543/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-408543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:17:06.871320   55206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:17:06.871410   55206 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:17:06.871464   55206 buildroot.go:174] setting up certificates
	I0401 19:17:06.871479   55206 provision.go:84] configureAuth start
	I0401 19:17:06.871492   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetMachineName
	I0401 19:17:06.871796   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetIP
	I0401 19:17:06.874935   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:06.875382   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:06.875413   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:06.875607   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHHostname
	I0401 19:17:06.878191   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:06.878544   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:06.878592   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:06.878754   55206 provision.go:143] copyHostCerts
	I0401 19:17:06.878820   55206 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:17:06.878833   55206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:17:06.878900   55206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:17:06.879035   55206 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:17:06.879048   55206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:17:06.879089   55206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:17:06.879180   55206 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:17:06.879195   55206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:17:06.879227   55206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:17:06.879295   55206 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.kindnet-408543 san=[127.0.0.1 192.168.72.92 kindnet-408543 localhost minikube]
	I0401 19:17:07.029206   55206 provision.go:177] copyRemoteCerts
	I0401 19:17:07.029257   55206 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:17:07.029281   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHHostname
	I0401 19:17:07.031818   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.032147   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:07.032183   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.032376   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHPort
	I0401 19:17:07.032599   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:07.032742   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHUsername
	I0401 19:17:07.032872   55206 sshutil.go:53] new ssh client: &{IP:192.168.72.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/kindnet-408543/id_rsa Username:docker}
	I0401 19:17:07.121994   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 19:17:07.150612   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:17:07.177014   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0401 19:17:07.203202   55206 provision.go:87] duration metric: took 331.709585ms to configureAuth
	I0401 19:17:07.203233   55206 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:17:07.203429   55206 config.go:182] Loaded profile config "kindnet-408543": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:17:07.203503   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHHostname
	I0401 19:17:07.206180   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.206594   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:07.206632   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.206870   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHPort
	I0401 19:17:07.207072   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:07.207275   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:07.207456   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHUsername
	I0401 19:17:07.207663   55206 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:07.207820   55206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.92 22 <nil> <nil>}
	I0401 19:17:07.207835   55206 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:17:07.496135   55206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:17:07.496162   55206 main.go:141] libmachine: Checking connection to Docker...
	I0401 19:17:07.496170   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetURL
	I0401 19:17:07.497546   55206 main.go:141] libmachine: (kindnet-408543) DBG | Using libvirt version 6000000
	I0401 19:17:07.499988   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.500368   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:07.500387   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.500547   55206 main.go:141] libmachine: Docker is up and running!
	I0401 19:17:07.500565   55206 main.go:141] libmachine: Reticulating splines...
	I0401 19:17:07.500574   55206 client.go:171] duration metric: took 23.890274845s to LocalClient.Create
	I0401 19:17:07.500597   55206 start.go:167] duration metric: took 23.890367542s to libmachine.API.Create "kindnet-408543"
	I0401 19:17:07.500609   55206 start.go:293] postStartSetup for "kindnet-408543" (driver="kvm2")
	I0401 19:17:07.500622   55206 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:17:07.500639   55206 main.go:141] libmachine: (kindnet-408543) Calling .DriverName
	I0401 19:17:07.500877   55206 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:17:07.500902   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHHostname
	I0401 19:17:07.503198   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.503543   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:07.503572   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.503682   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHPort
	I0401 19:17:07.503868   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:07.504059   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHUsername
	I0401 19:17:07.504214   55206 sshutil.go:53] new ssh client: &{IP:192.168.72.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/kindnet-408543/id_rsa Username:docker}
	I0401 19:17:07.589123   55206 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:17:07.594165   55206 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:17:07.594186   55206 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:17:07.594242   55206 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:17:07.594326   55206 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:17:07.594421   55206 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:17:07.605127   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:17:07.632251   55206 start.go:296] duration metric: took 131.62714ms for postStartSetup
	I0401 19:17:07.632309   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetConfigRaw
	I0401 19:17:07.632997   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetIP
	I0401 19:17:07.635648   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.636018   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:07.636049   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.636252   55206 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/config.json ...
	I0401 19:17:07.636473   55206 start.go:128] duration metric: took 24.048696741s to createHost
	I0401 19:17:07.636497   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHHostname
	I0401 19:17:07.638784   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.639134   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:07.639173   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.639294   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHPort
	I0401 19:17:07.639493   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:07.639637   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:07.639786   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHUsername
	I0401 19:17:07.639933   55206 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:07.640087   55206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.92 22 <nil> <nil>}
	I0401 19:17:07.640096   55206 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 19:17:07.750728   55206 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999027.726162054
	
	I0401 19:17:07.750746   55206 fix.go:216] guest clock: 1711999027.726162054
	I0401 19:17:07.750753   55206 fix.go:229] Guest: 2024-04-01 19:17:07.726162054 +0000 UTC Remote: 2024-04-01 19:17:07.636486289 +0000 UTC m=+26.248393320 (delta=89.675765ms)
	I0401 19:17:07.750808   55206 fix.go:200] guest clock delta is within tolerance: 89.675765ms
	I0401 19:17:07.750818   55206 start.go:83] releasing machines lock for "kindnet-408543", held for 24.163227136s
	I0401 19:17:07.750845   55206 main.go:141] libmachine: (kindnet-408543) Calling .DriverName
	I0401 19:17:07.751146   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetIP
	I0401 19:17:07.753917   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.754252   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:07.754296   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.754444   55206 main.go:141] libmachine: (kindnet-408543) Calling .DriverName
	I0401 19:17:07.755059   55206 main.go:141] libmachine: (kindnet-408543) Calling .DriverName
	I0401 19:17:07.755228   55206 main.go:141] libmachine: (kindnet-408543) Calling .DriverName
	I0401 19:17:07.755279   55206 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:17:07.755316   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHHostname
	I0401 19:17:07.755431   55206 ssh_runner.go:195] Run: cat /version.json
	I0401 19:17:07.755454   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHHostname
	I0401 19:17:07.757994   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.758243   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.758373   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:07.758406   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.758510   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHPort
	I0401 19:17:07.758652   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:07.758671   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:07.758683   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:07.758872   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHPort
	I0401 19:17:07.758887   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHUsername
	I0401 19:17:07.759030   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHKeyPath
	I0401 19:17:07.759048   55206 sshutil.go:53] new ssh client: &{IP:192.168.72.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/kindnet-408543/id_rsa Username:docker}
	I0401 19:17:07.759161   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetSSHUsername
	I0401 19:17:07.759278   55206 sshutil.go:53] new ssh client: &{IP:192.168.72.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/kindnet-408543/id_rsa Username:docker}
	I0401 19:17:07.838972   55206 ssh_runner.go:195] Run: systemctl --version
	I0401 19:17:07.865172   55206 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:17:08.027681   55206 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:17:08.035280   55206 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:17:08.035353   55206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:17:08.054108   55206 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:17:08.054127   55206 start.go:494] detecting cgroup driver to use...
	I0401 19:17:08.054196   55206 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:17:08.073250   55206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:17:08.088640   55206 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:17:08.088712   55206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:17:08.105038   55206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:17:08.118983   55206 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:17:08.249133   55206 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:17:08.423752   55206 docker.go:233] disabling docker service ...
	I0401 19:17:08.423813   55206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:17:08.442559   55206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:17:08.457483   55206 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:17:08.618055   55206 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:17:08.743313   55206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:17:08.759409   55206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:17:08.780620   55206 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:17:08.780675   55206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:08.792894   55206 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:17:08.792973   55206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:08.804316   55206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:08.818260   55206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:08.829578   55206 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:17:08.841347   55206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:08.852604   55206 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:08.874245   55206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:08.886011   55206 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:17:08.896200   55206 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:17:08.896257   55206 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:17:08.911966   55206 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:17:08.927243   55206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:17:09.053370   55206 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:17:09.224608   55206 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:17:09.224688   55206 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:17:09.231151   55206 start.go:562] Will wait 60s for crictl version
	I0401 19:17:09.231208   55206 ssh_runner.go:195] Run: which crictl
	I0401 19:17:09.235439   55206 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:17:09.278477   55206 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:17:09.278580   55206 ssh_runner.go:195] Run: crio --version
	I0401 19:17:09.313173   55206 ssh_runner.go:195] Run: crio --version
	I0401 19:17:09.348807   55206 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 19:17:09.350131   55206 main.go:141] libmachine: (kindnet-408543) Calling .GetIP
	I0401 19:17:09.352723   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:09.353046   55206 main.go:141] libmachine: (kindnet-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:4e:5e", ip: ""} in network mk-kindnet-408543: {Iface:virbr3 ExpiryTime:2024-04-01 20:17:00 +0000 UTC Type:0 Mac:52:54:00:dc:4e:5e Iaid: IPaddr:192.168.72.92 Prefix:24 Hostname:kindnet-408543 Clientid:01:52:54:00:dc:4e:5e}
	I0401 19:17:09.353073   55206 main.go:141] libmachine: (kindnet-408543) DBG | domain kindnet-408543 has defined IP address 192.168.72.92 and MAC address 52:54:00:dc:4e:5e in network mk-kindnet-408543
	I0401 19:17:09.353311   55206 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0401 19:17:09.357952   55206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:17:09.371265   55206 kubeadm.go:877] updating cluster {Name:kindnet-408543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-408543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.92 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:17:09.371368   55206 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 19:17:09.371428   55206 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:17:09.411850   55206 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0401 19:17:09.411923   55206 ssh_runner.go:195] Run: which lz4
	I0401 19:17:09.416278   55206 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0401 19:17:09.420866   55206 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:17:09.420889   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0401 19:17:11.148399   55206 crio.go:462] duration metric: took 1.732145245s to copy over tarball
	I0401 19:17:11.148506   55206 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 19:17:07.775392   55506 machine.go:94] provisionDockerMachine start ...
	I0401 19:17:07.775417   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:07.775610   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:07.778194   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:07.778628   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:07.778664   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:07.778819   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:07.778996   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:07.779176   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:07.779307   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:07.779470   55506 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:07.779699   55506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0401 19:17:07.779718   55506 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:17:07.891921   55506 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-208693
	
	I0401 19:17:07.891946   55506 main.go:141] libmachine: (pause-208693) Calling .GetMachineName
	I0401 19:17:07.892212   55506 buildroot.go:166] provisioning hostname "pause-208693"
	I0401 19:17:07.892236   55506 main.go:141] libmachine: (pause-208693) Calling .GetMachineName
	I0401 19:17:07.892415   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:07.895162   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:07.895581   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:07.895604   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:07.895779   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:07.895965   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:07.896115   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:07.896268   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:07.896424   55506 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:07.896615   55506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0401 19:17:07.896632   55506 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-208693 && echo "pause-208693" | sudo tee /etc/hostname
	I0401 19:17:08.023301   55506 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-208693
	
	I0401 19:17:08.023334   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:08.026521   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.026994   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:08.027030   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.027171   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:08.027344   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:08.027519   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:08.027730   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:08.027938   55506 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:08.028129   55506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0401 19:17:08.028153   55506 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-208693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-208693/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-208693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:17:08.143107   55506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:17:08.143142   55506 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:17:08.143185   55506 buildroot.go:174] setting up certificates
	I0401 19:17:08.143222   55506 provision.go:84] configureAuth start
	I0401 19:17:08.143241   55506 main.go:141] libmachine: (pause-208693) Calling .GetMachineName
	I0401 19:17:08.143520   55506 main.go:141] libmachine: (pause-208693) Calling .GetIP
	I0401 19:17:08.146422   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.146745   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:08.146783   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.146884   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:08.149275   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.149679   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:08.149714   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.149799   55506 provision.go:143] copyHostCerts
	I0401 19:17:08.149849   55506 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:17:08.149858   55506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:17:08.149911   55506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:17:08.149990   55506 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:17:08.149999   55506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:17:08.150017   55506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:17:08.150061   55506 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:17:08.150069   55506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:17:08.150084   55506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:17:08.150125   55506 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.pause-208693 san=[127.0.0.1 192.168.39.250 localhost minikube pause-208693]
	I0401 19:17:08.216818   55506 provision.go:177] copyRemoteCerts
	I0401 19:17:08.216865   55506 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:17:08.216887   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:08.219822   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.220083   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:08.220111   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.220313   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:08.220515   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:08.220676   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:08.220787   55506 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/pause-208693/id_rsa Username:docker}
	I0401 19:17:08.312676   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:17:08.349102   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0401 19:17:08.382636   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 19:17:08.412753   55506 provision.go:87] duration metric: took 269.516315ms to configureAuth
	I0401 19:17:08.412778   55506 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:17:08.412973   55506 config.go:182] Loaded profile config "pause-208693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:17:08.413039   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:08.415885   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.416289   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:08.416325   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:08.416568   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:08.416749   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:08.416918   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:08.417071   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:08.417255   55506 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:08.417432   55506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0401 19:17:08.417455   55506 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:17:09.788039   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:10.287920   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:10.788095   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:11.288111   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:11.788069   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:12.287917   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:12.787487   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:13.288353   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:13.787577   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:14.288183   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:14.788291   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:15.287861   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:15.787559   54819 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:17:16.035332   54819 kubeadm.go:1107] duration metric: took 12.955081663s to wait for elevateKubeSystemPrivileges
	W0401 19:17:16.035382   54819 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 19:17:16.035393   54819 kubeadm.go:393] duration metric: took 24.17974423s to StartCluster
	I0401 19:17:16.035414   54819 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:17:16.035522   54819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:17:16.036911   54819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:17:16.037211   54819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 19:17:16.037210   54819 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.61.127 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:17:16.038824   54819 out.go:177] * Verifying Kubernetes components...
	I0401 19:17:16.037247   54819 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 19:17:16.037422   54819 config.go:182] Loaded profile config "auto-408543": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:17:16.040218   54819 addons.go:69] Setting storage-provisioner=true in profile "auto-408543"
	I0401 19:17:16.040233   54819 addons.go:69] Setting default-storageclass=true in profile "auto-408543"
	I0401 19:17:16.040252   54819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:17:16.040253   54819 addons.go:234] Setting addon storage-provisioner=true in "auto-408543"
	I0401 19:17:16.040381   54819 host.go:66] Checking if "auto-408543" exists ...
	I0401 19:17:16.040255   54819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-408543"
	I0401 19:17:16.040851   54819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:17:16.040895   54819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:17:16.040853   54819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:17:16.040959   54819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:17:16.059614   54819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46729
	I0401 19:17:16.060198   54819 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:17:16.060775   54819 main.go:141] libmachine: Using API Version  1
	I0401 19:17:16.060806   54819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:17:16.061394   54819 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:17:16.061593   54819 main.go:141] libmachine: (auto-408543) Calling .GetState
	I0401 19:17:16.062087   54819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40573
	I0401 19:17:16.062492   54819 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:17:16.062992   54819 main.go:141] libmachine: Using API Version  1
	I0401 19:17:16.063012   54819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:17:16.063480   54819 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:17:16.065284   54819 addons.go:234] Setting addon default-storageclass=true in "auto-408543"
	I0401 19:17:16.065324   54819 host.go:66] Checking if "auto-408543" exists ...
	I0401 19:17:16.065720   54819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:17:16.065756   54819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:17:16.066096   54819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:17:16.066140   54819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:17:16.086294   54819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39459
	I0401 19:17:16.086647   54819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41265
	I0401 19:17:16.086788   54819 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:17:16.087144   54819 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:17:16.087324   54819 main.go:141] libmachine: Using API Version  1
	I0401 19:17:16.087346   54819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:17:16.087756   54819 main.go:141] libmachine: Using API Version  1
	I0401 19:17:16.087773   54819 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:17:16.087778   54819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:17:16.087948   54819 main.go:141] libmachine: (auto-408543) Calling .GetState
	I0401 19:17:16.088190   54819 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:17:16.088645   54819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:17:16.088677   54819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:17:16.090037   54819 main.go:141] libmachine: (auto-408543) Calling .DriverName
	I0401 19:17:16.094249   54819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:17:13.885372   55206 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.736827952s)
	I0401 19:17:13.885404   55206 crio.go:469] duration metric: took 2.736972181s to extract the tarball
	I0401 19:17:13.885413   55206 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 19:17:13.942285   55206 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:17:14.003344   55206 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:17:14.003372   55206 cache_images.go:84] Images are preloaded, skipping loading
	I0401 19:17:14.003382   55206 kubeadm.go:928] updating node { 192.168.72.92 8443 v1.29.3 crio true true} ...
	I0401 19:17:14.003501   55206 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-408543 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.92
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:kindnet-408543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0401 19:17:14.003584   55206 ssh_runner.go:195] Run: crio config
	I0401 19:17:14.059192   55206 cni.go:84] Creating CNI manager for "kindnet"
	I0401 19:17:14.059216   55206 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:17:14.059236   55206 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.92 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-408543 NodeName:kindnet-408543 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.92"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.92 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:17:14.059414   55206 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.92
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-408543"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.92
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.92"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:17:14.059496   55206 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 19:17:14.071485   55206 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:17:14.071554   55206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:17:14.084240   55206 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0401 19:17:14.103763   55206 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:17:14.124626   55206 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0401 19:17:14.145947   55206 ssh_runner.go:195] Run: grep 192.168.72.92	control-plane.minikube.internal$ /etc/hosts
	I0401 19:17:14.151714   55206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.92	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:17:14.169166   55206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:17:14.321864   55206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:17:14.345428   55206 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543 for IP: 192.168.72.92
	I0401 19:17:14.345528   55206 certs.go:194] generating shared ca certs ...
	I0401 19:17:14.345562   55206 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:17:14.345817   55206 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:17:14.345934   55206 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:17:14.345964   55206 certs.go:256] generating profile certs ...
	I0401 19:17:14.346048   55206 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.key
	I0401 19:17:14.346078   55206 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt with IP's: []
	I0401 19:17:14.442359   55206 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt ...
	I0401 19:17:14.442391   55206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: {Name:mk52e00fc29ffd19c9a2b30834373c76d62ba370 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:17:14.442592   55206 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.key ...
	I0401 19:17:14.442609   55206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.key: {Name:mkfb32fe74a4738b41955f3158c5ab785c0ce205 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:17:14.442707   55206 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/apiserver.key.a2470e84
	I0401 19:17:14.442724   55206 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/apiserver.crt.a2470e84 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.92]
	I0401 19:17:14.827946   55206 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/apiserver.crt.a2470e84 ...
	I0401 19:17:14.827976   55206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/apiserver.crt.a2470e84: {Name:mk6812340c695a1e1f31ec3dea722cd5be0d8126 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:17:14.828266   55206 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/apiserver.key.a2470e84 ...
	I0401 19:17:14.828289   55206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/apiserver.key.a2470e84: {Name:mk80c61b212f0ab6460d076a4bade987a4094bd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:17:14.828408   55206 certs.go:381] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/apiserver.crt.a2470e84 -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/apiserver.crt
	I0401 19:17:14.828478   55206 certs.go:385] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/apiserver.key.a2470e84 -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/apiserver.key
	I0401 19:17:14.828528   55206 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/proxy-client.key
	I0401 19:17:14.828543   55206 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/proxy-client.crt with IP's: []
	I0401 19:17:14.892456   55206 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/proxy-client.crt ...
	I0401 19:17:14.892482   55206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/proxy-client.crt: {Name:mk63ee518ab918e14123406405ce835ba65124de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:17:14.899607   55206 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/proxy-client.key ...
	I0401 19:17:14.899639   55206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/proxy-client.key: {Name:mk8542b61cf9b1422dadef200f8ad7ad4236c22f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:17:14.899908   55206 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:17:14.899956   55206 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:17:14.899972   55206 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:17:14.900006   55206 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:17:14.900038   55206 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:17:14.900077   55206 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:17:14.900135   55206 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:17:14.900935   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:17:14.934460   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:17:14.963396   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:17:14.992990   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:17:15.025273   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0401 19:17:15.061058   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:17:15.088253   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:17:15.122145   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:17:15.159320   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:17:15.188143   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:17:15.217610   55206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:17:15.248558   55206 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:17:15.273634   55206 ssh_runner.go:195] Run: openssl version
	I0401 19:17:15.281865   55206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:17:15.298525   55206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:17:15.306217   55206 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:17:15.306288   55206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:17:15.322732   55206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:17:15.342000   55206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:17:15.358775   55206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:17:15.365948   55206 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:17:15.366046   55206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:17:15.374736   55206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:17:15.389455   55206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:17:15.405049   55206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:17:15.411197   55206 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:17:15.411279   55206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:17:15.420850   55206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:17:15.433969   55206 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:17:15.440124   55206 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 19:17:15.440187   55206 kubeadm.go:391] StartCluster: {Name:kindnet-408543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:kindnet-408543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.92 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:17:15.440280   55206 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:17:15.440340   55206 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:17:15.488289   55206 cri.go:89] found id: ""
	I0401 19:17:15.488361   55206 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 19:17:15.499888   55206 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:17:15.510289   55206 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:17:15.521028   55206 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:17:15.521049   55206 kubeadm.go:156] found existing configuration files:
	
	I0401 19:17:15.521096   55206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:17:15.531378   55206 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:17:15.531430   55206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:17:15.544065   55206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:17:15.555896   55206 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:17:15.555969   55206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:17:15.566613   55206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:17:15.576262   55206 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:17:15.576315   55206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:17:15.592591   55206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:17:15.603742   55206 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:17:15.603807   55206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:17:15.616380   55206 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:17:15.681004   55206 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 19:17:15.681097   55206 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:17:15.843965   55206 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:17:15.844124   55206 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:17:15.844288   55206 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:17:16.143175   55206 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:17:16.145842   55206 out.go:204]   - Generating certificates and keys ...
	I0401 19:17:16.145959   55206 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:17:16.146087   55206 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:17:16.208334   55206 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 19:17:16.401256   55206 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0401 19:17:14.011593   55506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:17:14.011620   55506 machine.go:97] duration metric: took 6.236210277s to provisionDockerMachine
	I0401 19:17:14.011635   55506 start.go:293] postStartSetup for "pause-208693" (driver="kvm2")
	I0401 19:17:14.011647   55506 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:17:14.011667   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:14.012056   55506 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:17:14.012091   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:14.015013   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.015398   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:14.015426   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.015761   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:14.015941   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:14.016116   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:14.016263   55506 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/pause-208693/id_rsa Username:docker}
	I0401 19:17:14.104053   55506 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:17:14.110496   55506 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:17:14.110522   55506 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:17:14.110577   55506 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:17:14.110670   55506 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:17:14.110781   55506 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:17:14.124676   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:17:14.152875   55506 start.go:296] duration metric: took 141.227889ms for postStartSetup
	I0401 19:17:14.152908   55506 fix.go:56] duration metric: took 6.401924265s for fixHost
	I0401 19:17:14.152932   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:14.156060   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.156464   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:14.156489   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.156750   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:14.156939   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:14.157124   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:14.157360   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:14.157553   55506 main.go:141] libmachine: Using SSH client type: native
	I0401 19:17:14.157770   55506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.250 22 <nil> <nil>}
	I0401 19:17:14.157786   55506 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 19:17:14.271277   55506 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999034.266271505
	
	I0401 19:17:14.271297   55506 fix.go:216] guest clock: 1711999034.266271505
	I0401 19:17:14.271306   55506 fix.go:229] Guest: 2024-04-01 19:17:14.266271505 +0000 UTC Remote: 2024-04-01 19:17:14.152913142 +0000 UTC m=+7.640372274 (delta=113.358363ms)
	I0401 19:17:14.271330   55506 fix.go:200] guest clock delta is within tolerance: 113.358363ms
	I0401 19:17:14.271336   55506 start.go:83] releasing machines lock for "pause-208693", held for 6.520387462s
	I0401 19:17:14.271352   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:14.271584   55506 main.go:141] libmachine: (pause-208693) Calling .GetIP
	I0401 19:17:14.274284   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.274692   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:14.274734   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.274873   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:14.275375   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:14.275542   55506 main.go:141] libmachine: (pause-208693) Calling .DriverName
	I0401 19:17:14.275629   55506 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:17:14.275665   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:14.275781   55506 ssh_runner.go:195] Run: cat /version.json
	I0401 19:17:14.275813   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHHostname
	I0401 19:17:14.278769   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.278948   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.279327   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:14.279360   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.279397   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:14.279413   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:14.279584   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:14.279788   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHPort
	I0401 19:17:14.279796   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:14.280018   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:14.280024   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHKeyPath
	I0401 19:17:14.280273   55506 main.go:141] libmachine: (pause-208693) Calling .GetSSHUsername
	I0401 19:17:14.280294   55506 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/pause-208693/id_rsa Username:docker}
	I0401 19:17:14.280375   55506 sshutil.go:53] new ssh client: &{IP:192.168.39.250 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/pause-208693/id_rsa Username:docker}
	I0401 19:17:14.398299   55506 ssh_runner.go:195] Run: systemctl --version
	I0401 19:17:14.407522   55506 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:17:14.589987   55506 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:17:14.597998   55506 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:17:14.598056   55506 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:17:14.610680   55506 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 19:17:14.610700   55506 start.go:494] detecting cgroup driver to use...
	I0401 19:17:14.610753   55506 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:17:14.633419   55506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:17:14.649012   55506 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:17:14.649069   55506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:17:14.665238   55506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:17:14.681036   55506 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:17:14.839573   55506 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:17:15.029768   55506 docker.go:233] disabling docker service ...
	I0401 19:17:15.029834   55506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:17:15.055505   55506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:17:15.072613   55506 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:17:15.249142   55506 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:17:15.437700   55506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:17:15.458179   55506 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:17:15.485128   55506 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:17:15.485204   55506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:15.499058   55506 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:17:15.499145   55506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:15.512574   55506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:15.525783   55506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:15.538961   55506 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:17:15.552336   55506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:15.565451   55506 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:15.578535   55506 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:17:15.595981   55506 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:17:15.610546   55506 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:17:15.623143   55506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:17:15.776688   55506 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:17:16.095725   54819 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:17:16.095744   54819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 19:17:16.095765   54819 main.go:141] libmachine: (auto-408543) Calling .GetSSHHostname
	I0401 19:17:16.099329   54819 main.go:141] libmachine: (auto-408543) DBG | domain auto-408543 has defined MAC address 52:54:00:f0:64:9b in network mk-auto-408543
	I0401 19:17:16.099898   54819 main.go:141] libmachine: (auto-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:64:9b", ip: ""} in network mk-auto-408543: {Iface:virbr4 ExpiryTime:2024-04-01 20:16:32 +0000 UTC Type:0 Mac:52:54:00:f0:64:9b Iaid: IPaddr:192.168.61.127 Prefix:24 Hostname:auto-408543 Clientid:01:52:54:00:f0:64:9b}
	I0401 19:17:16.099922   54819 main.go:141] libmachine: (auto-408543) DBG | domain auto-408543 has defined IP address 192.168.61.127 and MAC address 52:54:00:f0:64:9b in network mk-auto-408543
	I0401 19:17:16.100322   54819 main.go:141] libmachine: (auto-408543) Calling .GetSSHPort
	I0401 19:17:16.101936   54819 main.go:141] libmachine: (auto-408543) Calling .GetSSHKeyPath
	I0401 19:17:16.102104   54819 main.go:141] libmachine: (auto-408543) Calling .GetSSHUsername
	I0401 19:17:16.102364   54819 sshutil.go:53] new ssh client: &{IP:192.168.61.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/auto-408543/id_rsa Username:docker}
	I0401 19:17:16.107053   54819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42953
	I0401 19:17:16.107434   54819 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:17:16.107889   54819 main.go:141] libmachine: Using API Version  1
	I0401 19:17:16.107908   54819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:17:16.108372   54819 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:17:16.108577   54819 main.go:141] libmachine: (auto-408543) Calling .GetState
	I0401 19:17:16.110608   54819 main.go:141] libmachine: (auto-408543) Calling .DriverName
	I0401 19:17:16.110861   54819 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 19:17:16.110876   54819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 19:17:16.110895   54819 main.go:141] libmachine: (auto-408543) Calling .GetSSHHostname
	I0401 19:17:16.114068   54819 main.go:141] libmachine: (auto-408543) DBG | domain auto-408543 has defined MAC address 52:54:00:f0:64:9b in network mk-auto-408543
	I0401 19:17:16.114737   54819 main.go:141] libmachine: (auto-408543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:64:9b", ip: ""} in network mk-auto-408543: {Iface:virbr4 ExpiryTime:2024-04-01 20:16:32 +0000 UTC Type:0 Mac:52:54:00:f0:64:9b Iaid: IPaddr:192.168.61.127 Prefix:24 Hostname:auto-408543 Clientid:01:52:54:00:f0:64:9b}
	I0401 19:17:16.114759   54819 main.go:141] libmachine: (auto-408543) DBG | domain auto-408543 has defined IP address 192.168.61.127 and MAC address 52:54:00:f0:64:9b in network mk-auto-408543
	I0401 19:17:16.114792   54819 main.go:141] libmachine: (auto-408543) Calling .GetSSHPort
	I0401 19:17:16.114980   54819 main.go:141] libmachine: (auto-408543) Calling .GetSSHKeyPath
	I0401 19:17:16.115134   54819 main.go:141] libmachine: (auto-408543) Calling .GetSSHUsername
	I0401 19:17:16.115251   54819 sshutil.go:53] new ssh client: &{IP:192.168.61.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/auto-408543/id_rsa Username:docker}
	I0401 19:17:16.397636   54819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:17:16.450592   54819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 19:17:16.688176   54819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:17:16.688265   54819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 19:17:17.070607   54819 main.go:141] libmachine: Making call to close driver server
	I0401 19:17:17.070634   54819 main.go:141] libmachine: (auto-408543) Calling .Close
	I0401 19:17:17.070660   54819 start.go:946] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0401 19:17:17.070617   54819 main.go:141] libmachine: Making call to close driver server
	I0401 19:17:17.070722   54819 main.go:141] libmachine: (auto-408543) Calling .Close
	I0401 19:17:17.070910   54819 main.go:141] libmachine: (auto-408543) DBG | Closing plugin on server side
	I0401 19:17:17.070943   54819 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:17:17.070951   54819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:17:17.070960   54819 main.go:141] libmachine: Making call to close driver server
	I0401 19:17:17.070968   54819 main.go:141] libmachine: (auto-408543) Calling .Close
	I0401 19:17:17.071025   54819 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:17:17.071041   54819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:17:17.071050   54819 main.go:141] libmachine: Making call to close driver server
	I0401 19:17:17.071062   54819 main.go:141] libmachine: (auto-408543) Calling .Close
	I0401 19:17:17.071871   54819 node_ready.go:35] waiting up to 15m0s for node "auto-408543" to be "Ready" ...
	I0401 19:17:17.072012   54819 main.go:141] libmachine: (auto-408543) DBG | Closing plugin on server side
	I0401 19:17:17.072051   54819 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:17:17.072051   54819 main.go:141] libmachine: (auto-408543) DBG | Closing plugin on server side
	I0401 19:17:17.072060   54819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:17:17.072069   54819 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:17:17.072086   54819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:17:17.097225   54819 node_ready.go:49] node "auto-408543" has status "Ready":"True"
	I0401 19:17:17.097251   54819 node_ready.go:38] duration metric: took 25.350762ms for node "auto-408543" to be "Ready" ...
	I0401 19:17:17.097261   54819 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:17:17.108022   54819 main.go:141] libmachine: Making call to close driver server
	I0401 19:17:17.108043   54819 main.go:141] libmachine: (auto-408543) Calling .Close
	I0401 19:17:17.108399   54819 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:17:17.108417   54819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:17:17.110345   54819 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 19:17:17.003923   55206 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0401 19:17:17.244916   55206 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0401 19:17:17.476146   55206 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0401 19:17:17.476460   55206 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kindnet-408543 localhost] and IPs [192.168.72.92 127.0.0.1 ::1]
	I0401 19:17:17.654961   55206 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0401 19:17:17.655319   55206 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kindnet-408543 localhost] and IPs [192.168.72.92 127.0.0.1 ::1]
	I0401 19:17:17.726884   55206 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 19:17:17.872355   55206 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 19:17:17.956635   55206 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0401 19:17:17.956965   55206 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:17:18.232213   55206 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:17:18.452754   55206 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 19:17:18.620605   55206 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:17:18.965570   55206 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:17:19.062913   55206 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:17:19.063563   55206 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:17:19.069009   55206 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:17:17.111755   54819 addons.go:505] duration metric: took 1.074510474s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 19:17:17.110373   54819 pod_ready.go:78] waiting up to 15m0s for pod "coredns-76f75df574-2dqzc" in "kube-system" namespace to be "Ready" ...
	I0401 19:17:17.575128   54819 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-408543" context rescaled to 1 replicas
	I0401 19:17:19.119721   54819 pod_ready.go:102] pod "coredns-76f75df574-2dqzc" in "kube-system" namespace has status "Ready":"False"
	I0401 19:17:21.084965   55506 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.308237386s)
	I0401 19:17:21.085002   55506 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:17:21.085067   55506 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:17:21.093408   55506 start.go:562] Will wait 60s for crictl version
	I0401 19:17:21.093481   55506 ssh_runner.go:195] Run: which crictl
	I0401 19:17:21.098328   55506 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:17:21.143587   55506 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:17:21.143673   55506 ssh_runner.go:195] Run: crio --version
	I0401 19:17:21.178725   55506 ssh_runner.go:195] Run: crio --version
	I0401 19:17:21.314941   55506 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 19:17:19.070621   55206 out.go:204]   - Booting up control plane ...
	I0401 19:17:19.070728   55206 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:17:19.070859   55206 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:17:19.070964   55206 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:17:19.088627   55206 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:17:19.089523   55206 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:17:19.089567   55206 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:17:19.235705   55206 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:17:21.316408   55506 main.go:141] libmachine: (pause-208693) Calling .GetIP
	I0401 19:17:21.319848   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:21.320268   55506 main.go:141] libmachine: (pause-208693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:70:95", ip: ""} in network mk-pause-208693: {Iface:virbr1 ExpiryTime:2024-04-01 20:15:45 +0000 UTC Type:0 Mac:52:54:00:21:70:95 Iaid: IPaddr:192.168.39.250 Prefix:24 Hostname:pause-208693 Clientid:01:52:54:00:21:70:95}
	I0401 19:17:21.320288   55506 main.go:141] libmachine: (pause-208693) DBG | domain pause-208693 has defined IP address 192.168.39.250 and MAC address 52:54:00:21:70:95 in network mk-pause-208693
	I0401 19:17:21.320664   55506 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 19:17:21.327439   55506 kubeadm.go:877] updating cluster {Name:pause-208693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:pause-208693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:17:21.327563   55506 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 19:17:21.327623   55506 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:17:21.391343   55506 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:17:21.391373   55506 crio.go:433] Images already preloaded, skipping extraction
	I0401 19:17:21.391425   55506 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:17:21.430944   55506 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:17:21.430969   55506 cache_images.go:84] Images are preloaded, skipping loading
	I0401 19:17:21.430978   55506 kubeadm.go:928] updating node { 192.168.39.250 8443 v1.29.3 crio true true} ...
	I0401 19:17:21.431097   55506 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-208693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:pause-208693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
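The [Unit]/[Service]/[Install] text above is the kubelet systemd drop-in that gets written to the node a few lines later (the 312-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). The Go sketch below shows one way such a drop-in could be rendered with text/template; the template text and field names are illustrative, not minikube's real template.

// Sketch only: render a kubelet drop-in from the values logged above.
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the pause-208693 log above.
	err := t.Execute(os.Stdout, map[string]string{
		"Runtime":           "crio",
		"KubernetesVersion": "v1.29.3",
		"NodeName":          "pause-208693",
		"NodeIP":            "192.168.39.250",
	})
	if err != nil {
		panic(err)
	}
}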
	I0401 19:17:21.431183   55506 ssh_runner.go:195] Run: crio config
	I0401 19:17:21.490699   55506 cni.go:84] Creating CNI manager for ""
	I0401 19:17:21.490720   55506 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:17:21.490733   55506 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:17:21.490752   55506 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.250 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-208693 NodeName:pause-208693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:17:21.490874   55506 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.250
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-208693"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
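The generated kubeadm config above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new further down (the 2156-byte scp). One detail worth noting is the KubeletConfiguration fragment: imageGCHighThresholdPercent is set to 100 and the evictionHard thresholds to "0%", which disables disk-pressure eviction inside the VM, as the "disable disk resource management by default" comment says. A small Go sketch that parses just that fragment follows; it uses gopkg.in/yaml.v3 and declares only the handful of fields needed for the illustration.

// Sketch only: decode the eviction-related kubelet settings shown above.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

const kubeletCfg = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
`

type kubeletConfig struct {
	ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
	EvictionHard                map[string]string `yaml:"evictionHard"`
	FailSwapOn                  bool              `yaml:"failSwapOn"`
}

func main() {
	var cfg kubeletConfig
	if err := yaml.Unmarshal([]byte(kubeletCfg), &cfg); err != nil {
		panic(err)
	}
	fmt.Println("imageGC high threshold:", cfg.ImageGCHighThresholdPercent)
	fmt.Println("evictionHard thresholds:", cfg.EvictionHard)
	fmt.Println("failSwapOn:", cfg.FailSwapOn)
}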
	I0401 19:17:21.490940   55506 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 19:17:21.505292   55506 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:17:21.505353   55506 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:17:21.518774   55506 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0401 19:17:21.539416   55506 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:17:19.615260   54819 pod_ready.go:97] error getting pod "coredns-76f75df574-2dqzc" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-2dqzc" not found
	I0401 19:17:19.615286   54819 pod_ready.go:81] duration metric: took 2.503509036s for pod "coredns-76f75df574-2dqzc" in "kube-system" namespace to be "Ready" ...
	E0401 19:17:19.615295   54819 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-2dqzc" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-2dqzc" not found
	I0401 19:17:19.615301   54819 pod_ready.go:78] waiting up to 15m0s for pod "coredns-76f75df574-gvklv" in "kube-system" namespace to be "Ready" ...
	I0401 19:17:21.624808   54819 pod_ready.go:102] pod "coredns-76f75df574-gvklv" in "kube-system" namespace has status "Ready":"False"
	I0401 19:17:23.626993   54819 pod_ready.go:102] pod "coredns-76f75df574-gvklv" in "kube-system" namespace has status "Ready":"False"
	I0401 19:17:25.237712   55206 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002736 seconds
	I0401 19:17:25.260027   55206 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 19:17:25.281769   55206 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 19:17:25.826211   55206 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 19:17:25.826461   55206 kubeadm.go:309] [mark-control-plane] Marking the node kindnet-408543 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 19:17:26.344378   55206 kubeadm.go:309] [bootstrap-token] Using token: rwpz73.tx8m08htc5fha9yu
	I0401 19:17:26.346060   55206 out.go:204]   - Configuring RBAC rules ...
	I0401 19:17:26.346203   55206 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 19:17:26.358311   55206 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 19:17:26.374042   55206 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 19:17:26.378908   55206 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 19:17:26.387350   55206 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 19:17:26.391275   55206 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 19:17:26.412368   55206 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 19:17:21.559611   55506 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0401 19:17:21.583677   55506 ssh_runner.go:195] Run: grep 192.168.39.250	control-plane.minikube.internal$ /etc/hosts
	I0401 19:17:21.589587   55506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:17:21.742565   55506 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:17:21.760339   55506 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693 for IP: 192.168.39.250
	I0401 19:17:21.760372   55506 certs.go:194] generating shared ca certs ...
	I0401 19:17:21.760393   55506 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:17:21.760561   55506 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:17:21.760614   55506 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:17:21.760629   55506 certs.go:256] generating profile certs ...
	I0401 19:17:21.760753   55506 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693/client.key
	I0401 19:17:21.760853   55506 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693/apiserver.key.640a01f7
	I0401 19:17:21.760894   55506 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693/proxy-client.key
	I0401 19:17:21.760997   55506 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:17:21.761029   55506 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:17:21.761038   55506 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:17:21.761067   55506 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:17:21.761094   55506 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:17:21.761118   55506 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:17:21.761152   55506 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:17:21.762220   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:17:21.789295   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:17:21.816554   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:17:22.006815   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:17:22.163516   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0401 19:17:22.366533   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 19:17:22.619189   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:17:22.751245   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/pause-208693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 19:17:23.002508   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:17:23.074573   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:17:23.110680   55506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:17:23.149436   55506 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:17:23.177136   55506 ssh_runner.go:195] Run: openssl version
	I0401 19:17:23.188108   55506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:17:23.205226   55506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:17:23.213298   55506 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:17:23.213366   55506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:17:23.224544   55506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:17:23.240162   55506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:17:23.257743   55506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:17:23.263476   55506 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:17:23.263542   55506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:17:23.272318   55506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:17:23.286967   55506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:17:23.299084   55506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:17:23.304168   55506 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:17:23.304227   55506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:17:23.310638   55506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:17:23.321219   55506 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:17:23.326672   55506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:17:23.336598   55506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:17:23.343670   55506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:17:23.354373   55506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:17:23.363233   55506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:17:23.376742   55506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
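The six "openssl x509 -noout -in ... -checkend 86400" runs above verify that each control-plane certificate remains valid for at least 24 hours before the cluster is restarted. The same test expressed in Go (crypto/x509) looks roughly like the sketch below; the certificate path is just one of the files from the log.

// Sketch only: report whether a PEM certificate expires within 24 hours,
// mirroring "openssl x509 -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Fail if the certificate is already expired or will expire within 86400s.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h, notAfter =", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past 24h, notAfter =", cert.NotAfter)
}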
	I0401 19:17:23.390188   55506 kubeadm.go:391] StartCluster: {Name:pause-208693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:pause-208693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.250 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:17:23.390370   55506 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:17:23.390470   55506 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:17:23.488359   55506 cri.go:89] found id: "51f0eff30b7c7d78a434ac0cebb793087012ebc1a4e3af4377acb07b114c7b1b"
	I0401 19:17:23.488385   55506 cri.go:89] found id: "ffbcd93cd4dc6dd7bdba8817fe9464043c1441e48a2f0a339d8e2f90465c23b2"
	I0401 19:17:23.488390   55506 cri.go:89] found id: "2d9857c1d11f7699cdda344ccc35292880e1e966398923e7c7c8a221bb17fbb4"
	I0401 19:17:23.488395   55506 cri.go:89] found id: "eb0be624c77f6b7779d07398d8ac81b11ff2d1e2491332385f9bc7abd08da4d1"
	I0401 19:17:23.488399   55506 cri.go:89] found id: "90827beb7d452b745ca6b9be1e1cbf187b22f2a83733fa1cf32f65dd51871a94"
	I0401 19:17:23.488403   55506 cri.go:89] found id: "a191a299d42de032a4e1b058d778aeb8a768699852f90479ed27525750c39dcb"
	I0401 19:17:23.488407   55506 cri.go:89] found id: "13440e4b058772c288c91430cc8b93d4ee93f6c2dc002c58c42364841c37537c"
	I0401 19:17:23.488411   55506 cri.go:89] found id: "f4e035677728cfc3e8fdacccbe9c2074622432687c5ffea26e9297dab2bc7e5f"
	I0401 19:17:23.488415   55506 cri.go:89] found id: "4c96330c5da0385157221a32550935b344f8d450869645cdb302bf6d7d24d50a"
	I0401 19:17:23.488422   55506 cri.go:89] found id: "60bec38260e22141e8ef66a6e954a86d22216f47a8023678c8c9ec31a28ed3cd"
	I0401 19:17:23.488426   55506 cri.go:89] found id: "9ae44e1e9ca77a159598d47d87a284b50262d7feed6af8939a521854ddf86ff4"
	I0401 19:17:23.488448   55506 cri.go:89] found id: "44a8c17316feb68f3c977baa3f7431c716167f78518eb63e20017a200ca17ad4"
	I0401 19:17:23.488457   55506 cri.go:89] found id: ""
	I0401 19:17:23.488513   55506 ssh_runner.go:195] Run: sudo runc list -f json
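StartCluster begins by listing every kube-system container known to CRI-O ("crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"), which produced the twelve container IDs above, and then cross-checks with "runc list". A hedged Go sketch of that listing step follows; it assumes only that crictl is present and that --quiet prints one container ID per line.

// Sketch only: collect kube-system container IDs the same way cri.go does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println(id)
	}
}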
	
	
	==> CRI-O <==
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.272548467Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9fec0655-e747-48e6-9fcb-52a0099945ac name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.272936155Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d51e592bf39a8faf7132b084687b1accbdb18416ca3406e14ab22c4cc914398f,PodSandboxId:f8530e9e195d4d7e3093c22d1ed84743b6d1388e4e2682b416e6c41595984fc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711999071294496603,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rldp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9bf80ea-9ada-4a47-bab1-e78b9223d2a8,},Annotations:map[string]string{io.kubernetes.container.hash: dae92949,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee316adbd86a389d2cee2a243bb53d9623bdc19f7f4ada9f6d1dca071d0882d0,PodSandboxId:6c9433488dd88aea169cba5d687c2b18ffb7bacf191081aad083c7e1b83bb2eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711999071239675723,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df6ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: acb3c498-4e8d-4a02-b9d7-8a368f9303d0,},Annotations:map[string]string{io.kubernetes.container.hash: d19b4992,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f63a9c00dde6c19df299f9c1f4a733df97d6952398eceec187693dc9f073374,PodSandboxId:048b641526bb6ff733a7c91666f55b7716aab4d9aeea375fde25db0b088b73ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711999066608613618,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ba31f6ed98991b7270f0eb1bd6de561,},Annot
ations:map[string]string{io.kubernetes.container.hash: e5a2f95e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a040f7057b6a33daba995297514b7b885fc9fe29532005c42b5e51d13105fdb9,PodSandboxId:cf7a984a2a9d307920f4f8475a386958d0e0b0ef51bdaa8e8792df7e58a19df4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711999066645334372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a372662f6344b45a1f4085d401140c1a,},Annotations:map[string]
string{io.kubernetes.container.hash: d6729e28,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:354ccfac9527c54ac400591dde36f20a21b0f39232cee1442492d045c16195b2,PodSandboxId:073f93c45437f090972f867049697da94b76179da01cae458f4787eed49f9346,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711999066641810226,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa6509d897132c9a939f9cc49fd2164,},Annotations:map[string]string{io.kubernet
es.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d177654682bcbda844c640fa087f542ecebee05bce474ac4d3d8194d5ef6b06,PodSandboxId:7220ef36a671b8f828ba91ff79b645337994d459c971539c2e70edde07b7577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711999066590692066,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c1e1b9ed1eae1e76338c581a974e1b2,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f0eff30b7c7d78a434ac0cebb793087012ebc1a4e3af4377acb07b114c7b1b,PodSandboxId:f8530e9e195d4d7e3093c22d1ed84743b6d1388e4e2682b416e6c41595984fc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711999042746299873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rldp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9bf80ea-9ada-4a47-bab1-e78b9223d2a8,},Annotations:map[string]string{io.kubernetes.container.hash: dae9
2949,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffbcd93cd4dc6dd7bdba8817fe9464043c1441e48a2f0a339d8e2f90465c23b2,PodSandboxId:6c9433488dd88aea169cba5d687c2b18ffb7bacf191081aad083c7e1b83bb2eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711999042518565519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-df6ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acb3c498-4e8d-4a02-b9d7-8a368f9303d0,},Annotations:map[string]string{io.kubernetes.container.hash: d19b4992,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0be624c77f6b7779d07398d8ac81b11ff2d1e2491332385f9bc7abd08da4d1,PodSandboxId:073f93c45437f090972f867049697da94b76179da01cae458f4787eed49f9346,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711999042384301994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa6509d897132c9a939f9cc49fd2164,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d9857c1d11f7699cdda344ccc35292880e1e966398923e7c7c8a221bb17fbb4,PodSandboxId:048b641526bb6ff733a7c91666f55b7716aab4d9aeea375fde25db0b088b73ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711999042408662170,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-208693,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3ba31f6ed98991b7270f0eb1bd6de561,},Annotations:map[string]string{io.kubernetes.container.hash: e5a2f95e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90827beb7d452b745ca6b9be1e1cbf187b22f2a83733fa1cf32f65dd51871a94,PodSandboxId:cf7a984a2a9d307920f4f8475a386958d0e0b0ef51bdaa8e8792df7e58a19df4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711999042354498532,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: a372662f6344b45a1f4085d401140c1a,},Annotations:map[string]string{io.kubernetes.container.hash: d6729e28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a191a299d42de032a4e1b058d778aeb8a768699852f90479ed27525750c39dcb,PodSandboxId:7220ef36a671b8f828ba91ff79b645337994d459c971539c2e70edde07b7577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711999042250503194,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9c1e1b9ed1eae1e76338c581a974e1b2,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9fec0655-e747-48e6-9fcb-52a0099945ac name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.333486851Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f5e0e6a3-72d2-40eb-afd9-834b3b58b237 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.333561377Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f5e0e6a3-72d2-40eb-afd9-834b3b58b237 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.335101972Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68aca9bb-db28-4a14-9bbb-271153c613eb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.335447219Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711999090335425559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68aca9bb-db28-4a14-9bbb-271153c613eb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.336277704Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8fab0ac7-dff5-4a42-86e6-80da255c08f7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.336361024Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8fab0ac7-dff5-4a42-86e6-80da255c08f7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.336603974Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d51e592bf39a8faf7132b084687b1accbdb18416ca3406e14ab22c4cc914398f,PodSandboxId:f8530e9e195d4d7e3093c22d1ed84743b6d1388e4e2682b416e6c41595984fc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711999071294496603,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rldp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9bf80ea-9ada-4a47-bab1-e78b9223d2a8,},Annotations:map[string]string{io.kubernetes.container.hash: dae92949,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee316adbd86a389d2cee2a243bb53d9623bdc19f7f4ada9f6d1dca071d0882d0,PodSandboxId:6c9433488dd88aea169cba5d687c2b18ffb7bacf191081aad083c7e1b83bb2eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711999071239675723,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df6ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: acb3c498-4e8d-4a02-b9d7-8a368f9303d0,},Annotations:map[string]string{io.kubernetes.container.hash: d19b4992,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f63a9c00dde6c19df299f9c1f4a733df97d6952398eceec187693dc9f073374,PodSandboxId:048b641526bb6ff733a7c91666f55b7716aab4d9aeea375fde25db0b088b73ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711999066608613618,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ba31f6ed98991b7270f0eb1bd6de561,},Annot
ations:map[string]string{io.kubernetes.container.hash: e5a2f95e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a040f7057b6a33daba995297514b7b885fc9fe29532005c42b5e51d13105fdb9,PodSandboxId:cf7a984a2a9d307920f4f8475a386958d0e0b0ef51bdaa8e8792df7e58a19df4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711999066645334372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a372662f6344b45a1f4085d401140c1a,},Annotations:map[string]
string{io.kubernetes.container.hash: d6729e28,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:354ccfac9527c54ac400591dde36f20a21b0f39232cee1442492d045c16195b2,PodSandboxId:073f93c45437f090972f867049697da94b76179da01cae458f4787eed49f9346,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711999066641810226,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa6509d897132c9a939f9cc49fd2164,},Annotations:map[string]string{io.kubernet
es.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d177654682bcbda844c640fa087f542ecebee05bce474ac4d3d8194d5ef6b06,PodSandboxId:7220ef36a671b8f828ba91ff79b645337994d459c971539c2e70edde07b7577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711999066590692066,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c1e1b9ed1eae1e76338c581a974e1b2,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f0eff30b7c7d78a434ac0cebb793087012ebc1a4e3af4377acb07b114c7b1b,PodSandboxId:f8530e9e195d4d7e3093c22d1ed84743b6d1388e4e2682b416e6c41595984fc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711999042746299873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rldp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9bf80ea-9ada-4a47-bab1-e78b9223d2a8,},Annotations:map[string]string{io.kubernetes.container.hash: dae9
2949,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffbcd93cd4dc6dd7bdba8817fe9464043c1441e48a2f0a339d8e2f90465c23b2,PodSandboxId:6c9433488dd88aea169cba5d687c2b18ffb7bacf191081aad083c7e1b83bb2eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711999042518565519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-df6ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acb3c498-4e8d-4a02-b9d7-8a368f9303d0,},Annotations:map[string]string{io.kubernetes.container.hash: d19b4992,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0be624c77f6b7779d07398d8ac81b11ff2d1e2491332385f9bc7abd08da4d1,PodSandboxId:073f93c45437f090972f867049697da94b76179da01cae458f4787eed49f9346,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711999042384301994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa6509d897132c9a939f9cc49fd2164,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d9857c1d11f7699cdda344ccc35292880e1e966398923e7c7c8a221bb17fbb4,PodSandboxId:048b641526bb6ff733a7c91666f55b7716aab4d9aeea375fde25db0b088b73ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711999042408662170,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-208693,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3ba31f6ed98991b7270f0eb1bd6de561,},Annotations:map[string]string{io.kubernetes.container.hash: e5a2f95e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90827beb7d452b745ca6b9be1e1cbf187b22f2a83733fa1cf32f65dd51871a94,PodSandboxId:cf7a984a2a9d307920f4f8475a386958d0e0b0ef51bdaa8e8792df7e58a19df4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711999042354498532,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: a372662f6344b45a1f4085d401140c1a,},Annotations:map[string]string{io.kubernetes.container.hash: d6729e28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a191a299d42de032a4e1b058d778aeb8a768699852f90479ed27525750c39dcb,PodSandboxId:7220ef36a671b8f828ba91ff79b645337994d459c971539c2e70edde07b7577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711999042250503194,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9c1e1b9ed1eae1e76338c581a974e1b2,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8fab0ac7-dff5-4a42-86e6-80da255c08f7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.384316002Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=028ec849-c799-4971-8fb7-2618326564d8 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.384386731Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=028ec849-c799-4971-8fb7-2618326564d8 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.386371197Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5454d1e5-f676-4dff-9aa6-629f7bffb163 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.386718449Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711999090386696589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5454d1e5-f676-4dff-9aa6-629f7bffb163 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.387348065Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eaea8c4c-4dc0-4746-924b-f32fb3be37b3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.387400417Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eaea8c4c-4dc0-4746-924b-f32fb3be37b3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.388239024Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d51e592bf39a8faf7132b084687b1accbdb18416ca3406e14ab22c4cc914398f,PodSandboxId:f8530e9e195d4d7e3093c22d1ed84743b6d1388e4e2682b416e6c41595984fc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711999071294496603,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rldp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9bf80ea-9ada-4a47-bab1-e78b9223d2a8,},Annotations:map[string]string{io.kubernetes.container.hash: dae92949,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee316adbd86a389d2cee2a243bb53d9623bdc19f7f4ada9f6d1dca071d0882d0,PodSandboxId:6c9433488dd88aea169cba5d687c2b18ffb7bacf191081aad083c7e1b83bb2eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711999071239675723,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df6ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: acb3c498-4e8d-4a02-b9d7-8a368f9303d0,},Annotations:map[string]string{io.kubernetes.container.hash: d19b4992,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f63a9c00dde6c19df299f9c1f4a733df97d6952398eceec187693dc9f073374,PodSandboxId:048b641526bb6ff733a7c91666f55b7716aab4d9aeea375fde25db0b088b73ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711999066608613618,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ba31f6ed98991b7270f0eb1bd6de561,},Annot
ations:map[string]string{io.kubernetes.container.hash: e5a2f95e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a040f7057b6a33daba995297514b7b885fc9fe29532005c42b5e51d13105fdb9,PodSandboxId:cf7a984a2a9d307920f4f8475a386958d0e0b0ef51bdaa8e8792df7e58a19df4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711999066645334372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a372662f6344b45a1f4085d401140c1a,},Annotations:map[string]
string{io.kubernetes.container.hash: d6729e28,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:354ccfac9527c54ac400591dde36f20a21b0f39232cee1442492d045c16195b2,PodSandboxId:073f93c45437f090972f867049697da94b76179da01cae458f4787eed49f9346,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711999066641810226,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa6509d897132c9a939f9cc49fd2164,},Annotations:map[string]string{io.kubernet
es.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d177654682bcbda844c640fa087f542ecebee05bce474ac4d3d8194d5ef6b06,PodSandboxId:7220ef36a671b8f828ba91ff79b645337994d459c971539c2e70edde07b7577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711999066590692066,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c1e1b9ed1eae1e76338c581a974e1b2,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f0eff30b7c7d78a434ac0cebb793087012ebc1a4e3af4377acb07b114c7b1b,PodSandboxId:f8530e9e195d4d7e3093c22d1ed84743b6d1388e4e2682b416e6c41595984fc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711999042746299873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rldp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9bf80ea-9ada-4a47-bab1-e78b9223d2a8,},Annotations:map[string]string{io.kubernetes.container.hash: dae9
2949,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffbcd93cd4dc6dd7bdba8817fe9464043c1441e48a2f0a339d8e2f90465c23b2,PodSandboxId:6c9433488dd88aea169cba5d687c2b18ffb7bacf191081aad083c7e1b83bb2eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711999042518565519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-df6ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acb3c498-4e8d-4a02-b9d7-8a368f9303d0,},Annotations:map[string]string{io.kubernetes.container.hash: d19b4992,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0be624c77f6b7779d07398d8ac81b11ff2d1e2491332385f9bc7abd08da4d1,PodSandboxId:073f93c45437f090972f867049697da94b76179da01cae458f4787eed49f9346,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711999042384301994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa6509d897132c9a939f9cc49fd2164,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d9857c1d11f7699cdda344ccc35292880e1e966398923e7c7c8a221bb17fbb4,PodSandboxId:048b641526bb6ff733a7c91666f55b7716aab4d9aeea375fde25db0b088b73ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711999042408662170,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-208693,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3ba31f6ed98991b7270f0eb1bd6de561,},Annotations:map[string]string{io.kubernetes.container.hash: e5a2f95e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90827beb7d452b745ca6b9be1e1cbf187b22f2a83733fa1cf32f65dd51871a94,PodSandboxId:cf7a984a2a9d307920f4f8475a386958d0e0b0ef51bdaa8e8792df7e58a19df4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711999042354498532,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: a372662f6344b45a1f4085d401140c1a,},Annotations:map[string]string{io.kubernetes.container.hash: d6729e28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a191a299d42de032a4e1b058d778aeb8a768699852f90479ed27525750c39dcb,PodSandboxId:7220ef36a671b8f828ba91ff79b645337994d459c971539c2e70edde07b7577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711999042250503194,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9c1e1b9ed1eae1e76338c581a974e1b2,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eaea8c4c-4dc0-4746-924b-f32fb3be37b3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.429636923Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=d67a6a3b-e85c-4edf-a2ea-1646adc5459e name=/runtime.v1.RuntimeService/Version
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.429711736Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d67a6a3b-e85c-4edf-a2ea-1646adc5459e name=/runtime.v1.RuntimeService/Version
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.445226349Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9f931739-611b-41bc-88e4-00b5eaa264f7 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.445293328Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9f931739-611b-41bc-88e4-00b5eaa264f7 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.446506442Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e56c912c-b105-4a3a-abbb-70b3840a03a7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.446936787Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1711999090446915600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121209,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e56c912c-b105-4a3a-abbb-70b3840a03a7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.447512987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abd31e42-edbc-4a28-8aba-00df88097aec name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.447618560Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abd31e42-edbc-4a28-8aba-00df88097aec name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:18:10 pause-208693 crio[2141]: time="2024-04-01 19:18:10.447846939Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d51e592bf39a8faf7132b084687b1accbdb18416ca3406e14ab22c4cc914398f,PodSandboxId:f8530e9e195d4d7e3093c22d1ed84743b6d1388e4e2682b416e6c41595984fc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1711999071294496603,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rldp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9bf80ea-9ada-4a47-bab1-e78b9223d2a8,},Annotations:map[string]string{io.kubernetes.container.hash: dae92949,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee316adbd86a389d2cee2a243bb53d9623bdc19f7f4ada9f6d1dca071d0882d0,PodSandboxId:6c9433488dd88aea169cba5d687c2b18ffb7bacf191081aad083c7e1b83bb2eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1711999071239675723,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-df6ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: acb3c498-4e8d-4a02-b9d7-8a368f9303d0,},Annotations:map[string]string{io.kubernetes.container.hash: d19b4992,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f63a9c00dde6c19df299f9c1f4a733df97d6952398eceec187693dc9f073374,PodSandboxId:048b641526bb6ff733a7c91666f55b7716aab4d9aeea375fde25db0b088b73ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1711999066608613618,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ba31f6ed98991b7270f0eb1bd6de561,},Annot
ations:map[string]string{io.kubernetes.container.hash: e5a2f95e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a040f7057b6a33daba995297514b7b885fc9fe29532005c42b5e51d13105fdb9,PodSandboxId:cf7a984a2a9d307920f4f8475a386958d0e0b0ef51bdaa8e8792df7e58a19df4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1711999066645334372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a372662f6344b45a1f4085d401140c1a,},Annotations:map[string]
string{io.kubernetes.container.hash: d6729e28,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:354ccfac9527c54ac400591dde36f20a21b0f39232cee1442492d045c16195b2,PodSandboxId:073f93c45437f090972f867049697da94b76179da01cae458f4787eed49f9346,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1711999066641810226,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa6509d897132c9a939f9cc49fd2164,},Annotations:map[string]string{io.kubernet
es.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d177654682bcbda844c640fa087f542ecebee05bce474ac4d3d8194d5ef6b06,PodSandboxId:7220ef36a671b8f828ba91ff79b645337994d459c971539c2e70edde07b7577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1711999066590692066,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c1e1b9ed1eae1e76338c581a974e1b2,},Annotations:map[string]string{io
.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f0eff30b7c7d78a434ac0cebb793087012ebc1a4e3af4377acb07b114c7b1b,PodSandboxId:f8530e9e195d4d7e3093c22d1ed84743b6d1388e4e2682b416e6c41595984fc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1711999042746299873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rldp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9bf80ea-9ada-4a47-bab1-e78b9223d2a8,},Annotations:map[string]string{io.kubernetes.container.hash: dae9
2949,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffbcd93cd4dc6dd7bdba8817fe9464043c1441e48a2f0a339d8e2f90465c23b2,PodSandboxId:6c9433488dd88aea169cba5d687c2b18ffb7bacf191081aad083c7e1b83bb2eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1711999042518565519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-df6ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acb3c498-4e8d-4a02-b9d7-8a368f9303d0,},Annotations:map[string]string{io.kubernetes.container.hash: d19b4992,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb0be624c77f6b7779d07398d8ac81b11ff2d1e2491332385f9bc7abd08da4d1,PodSandboxId:073f93c45437f090972f867049697da94b76179da01cae458f4787eed49f9346,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1711999042384301994,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa6509d897132c9a939f9cc49fd2164,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d9857c1d11f7699cdda344ccc35292880e1e966398923e7c7c8a221bb17fbb4,PodSandboxId:048b641526bb6ff733a7c91666f55b7716aab4d9aeea375fde25db0b088b73ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1711999042408662170,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-208693,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3ba31f6ed98991b7270f0eb1bd6de561,},Annotations:map[string]string{io.kubernetes.container.hash: e5a2f95e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90827beb7d452b745ca6b9be1e1cbf187b22f2a83733fa1cf32f65dd51871a94,PodSandboxId:cf7a984a2a9d307920f4f8475a386958d0e0b0ef51bdaa8e8792df7e58a19df4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711999042354498532,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: a372662f6344b45a1f4085d401140c1a,},Annotations:map[string]string{io.kubernetes.container.hash: d6729e28,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a191a299d42de032a4e1b058d778aeb8a768699852f90479ed27525750c39dcb,PodSandboxId:7220ef36a671b8f828ba91ff79b645337994d459c971539c2e70edde07b7577b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1711999042250503194,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-208693,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 9c1e1b9ed1eae1e76338c581a974e1b2,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=abd31e42-edbc-4a28-8aba-00df88097aec name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d51e592bf39a8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago      Running             coredns                   2                   f8530e9e195d4       coredns-76f75df574-rldp9
	ee316adbd86a3       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   19 seconds ago      Running             kube-proxy                2                   6c9433488dd88       kube-proxy-df6ns
	a040f7057b6a3       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   23 seconds ago      Running             kube-apiserver            2                   cf7a984a2a9d3       kube-apiserver-pause-208693
	354ccfac9527c       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   23 seconds ago      Running             kube-scheduler            2                   073f93c45437f       kube-scheduler-pause-208693
	2f63a9c00dde6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   23 seconds ago      Running             etcd                      2                   048b641526bb6       etcd-pause-208693
	4d177654682bc       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   23 seconds ago      Running             kube-controller-manager   2                   7220ef36a671b       kube-controller-manager-pause-208693
	51f0eff30b7c7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   47 seconds ago      Exited              coredns                   1                   f8530e9e195d4       coredns-76f75df574-rldp9
	ffbcd93cd4dc6       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   48 seconds ago      Exited              kube-proxy                1                   6c9433488dd88       kube-proxy-df6ns
	2d9857c1d11f7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   48 seconds ago      Exited              etcd                      1                   048b641526bb6       etcd-pause-208693
	eb0be624c77f6       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   48 seconds ago      Exited              kube-scheduler            1                   073f93c45437f       kube-scheduler-pause-208693
	90827beb7d452       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   48 seconds ago      Exited              kube-apiserver            1                   cf7a984a2a9d3       kube-apiserver-pause-208693
	a191a299d42de       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   48 seconds ago      Exited              kube-controller-manager   1                   7220ef36a671b       kube-controller-manager-pause-208693
	
	
	==> coredns [51f0eff30b7c7d78a434ac0cebb793087012ebc1a4e3af4377acb07b114c7b1b] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:48134 - 19721 "HINFO IN 6185462386848793682.7824587969644291913. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009841976s
	
	
	==> coredns [d51e592bf39a8faf7132b084687b1accbdb18416ca3406e14ab22c4cc914398f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52870 - 27388 "HINFO IN 7710611610440155977.7233152509346432534. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009741877s
	
	
	==> describe nodes <==
	Name:               pause-208693
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-208693
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=pause-208693
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_01T19_16_10_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 19:16:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-208693
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 19:18:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 19:17:50 +0000   Mon, 01 Apr 2024 19:16:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 19:17:50 +0000   Mon, 01 Apr 2024 19:16:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 19:17:50 +0000   Mon, 01 Apr 2024 19:16:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 19:17:50 +0000   Mon, 01 Apr 2024 19:16:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.250
	  Hostname:    pause-208693
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 708746cc343f4c35bfb83cd045a64864
	  System UUID:                708746cc-343f-4c35-bfb8-3cd045a64864
	  Boot ID:                    33c40c98-8ea2-46b4-a76e-553079d53cc1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-rldp9                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     107s
	  kube-system                 etcd-pause-208693                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m
	  kube-system                 kube-apiserver-pause-208693             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-pause-208693    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-df6ns                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-pause-208693             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 105s                 kube-proxy       
	  Normal   Starting                 19s                  kube-proxy       
	  Normal   Starting                 43s                  kube-proxy       
	  Normal   NodeHasSufficientPID     2m7s (x7 over 2m7s)  kubelet          Node pause-208693 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m7s (x8 over 2m7s)  kubelet          Node pause-208693 status is now: NodeHasSufficientMemory
	  Normal   Starting                 2m7s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node pause-208693 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m                   kubelet          Node pause-208693 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m                   kubelet          Node pause-208693 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m                   kubelet          Node pause-208693 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                119s                 kubelet          Node pause-208693 status is now: NodeReady
	  Normal   RegisteredNode           108s                 node-controller  Node pause-208693 event: Registered Node pause-208693 in Controller
	  Warning  ContainerGCFailed        60s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 25s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  24s (x8 over 24s)    kubelet          Node pause-208693 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    24s (x8 over 24s)    kubelet          Node pause-208693 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     24s (x7 over 24s)    kubelet          Node pause-208693 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           8s                   node-controller  Node pause-208693 event: Registered Node pause-208693 in Controller
	
	
	==> dmesg <==
	[  +0.073673] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.218396] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.148137] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.315544] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +5.124377] systemd-fstab-generator[770]: Ignoring "noauto" option for root device
	[  +0.068461] kauditd_printk_skb: 130 callbacks suppressed
	[Apr 1 19:16] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +1.104446] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.212560] systemd-fstab-generator[1280]: Ignoring "noauto" option for root device
	[  +0.089272] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.031692] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.268388] systemd-fstab-generator[1488]: Ignoring "noauto" option for root device
	[Apr 1 19:17] kauditd_printk_skb: 63 callbacks suppressed
	[  +9.705573] systemd-fstab-generator[2061]: Ignoring "noauto" option for root device
	[  +0.159308] systemd-fstab-generator[2073]: Ignoring "noauto" option for root device
	[  +0.236138] systemd-fstab-generator[2087]: Ignoring "noauto" option for root device
	[  +0.162323] systemd-fstab-generator[2099]: Ignoring "noauto" option for root device
	[  +0.373734] systemd-fstab-generator[2127]: Ignoring "noauto" option for root device
	[  +5.970510] systemd-fstab-generator[2219]: Ignoring "noauto" option for root device
	[  +0.082510] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.689758] kauditd_printk_skb: 81 callbacks suppressed
	[ +18.211221] systemd-fstab-generator[3043]: Ignoring "noauto" option for root device
	[  +5.666109] kauditd_printk_skb: 40 callbacks suppressed
	[Apr 1 19:18] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.405568] systemd-fstab-generator[3483]: Ignoring "noauto" option for root device
	
	
	==> etcd [2d9857c1d11f7699cdda344ccc35292880e1e966398923e7c7c8a221bb17fbb4] <==
	{"level":"info","ts":"2024-04-01T19:17:23.21841Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-01T19:17:24.152944Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-01T19:17:24.153043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-01T19:17:24.153085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde received MsgPreVoteResp from a69e859ffe38fcde at term 2"}
	{"level":"info","ts":"2024-04-01T19:17:24.153117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde became candidate at term 3"}
	{"level":"info","ts":"2024-04-01T19:17:24.153148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde received MsgVoteResp from a69e859ffe38fcde at term 3"}
	{"level":"info","ts":"2024-04-01T19:17:24.153182Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde became leader at term 3"}
	{"level":"info","ts":"2024-04-01T19:17:24.153215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a69e859ffe38fcde elected leader a69e859ffe38fcde at term 3"}
	{"level":"info","ts":"2024-04-01T19:17:24.15812Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a69e859ffe38fcde","local-member-attributes":"{Name:pause-208693 ClientURLs:[https://192.168.39.250:2379]}","request-path":"/0/members/a69e859ffe38fcde/attributes","cluster-id":"f7a04275a0bf31","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-01T19:17:24.15899Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T19:17:24.159089Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T19:17:24.162062Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-01T19:17:24.162134Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-01T19:17:24.163619Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-01T19:17:24.204144Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.250:2379"}
	{"level":"info","ts":"2024-04-01T19:17:33.648515Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-01T19:17:33.648574Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-208693","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.250:2380"],"advertise-client-urls":["https://192.168.39.250:2379"]}
	{"level":"warn","ts":"2024-04-01T19:17:33.648678Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-01T19:17:33.648789Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-01T19:17:33.66658Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.250:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-01T19:17:33.666716Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.250:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-01T19:17:33.666819Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"a69e859ffe38fcde","current-leader-member-id":"a69e859ffe38fcde"}
	{"level":"info","ts":"2024-04-01T19:17:33.670779Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.250:2380"}
	{"level":"info","ts":"2024-04-01T19:17:33.67105Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.250:2380"}
	{"level":"info","ts":"2024-04-01T19:17:33.67109Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-208693","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.250:2380"],"advertise-client-urls":["https://192.168.39.250:2379"]}
	
	
	==> etcd [2f63a9c00dde6c19df299f9c1f4a733df97d6952398eceec187693dc9f073374] <==
	{"level":"info","ts":"2024-04-01T19:17:47.396926Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-01T19:17:47.396976Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-01T19:17:47.397207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde switched to configuration voters=(12006180578827762910)"}
	{"level":"info","ts":"2024-04-01T19:17:47.39796Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f7a04275a0bf31","local-member-id":"a69e859ffe38fcde","added-peer-id":"a69e859ffe38fcde","added-peer-peer-urls":["https://192.168.39.250:2380"]}
	{"level":"info","ts":"2024-04-01T19:17:47.398097Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f7a04275a0bf31","local-member-id":"a69e859ffe38fcde","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:17:47.399959Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:17:47.422065Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.250:2380"}
	{"level":"info","ts":"2024-04-01T19:17:47.422108Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.250:2380"}
	{"level":"info","ts":"2024-04-01T19:17:47.417836Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-01T19:17:47.423823Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a69e859ffe38fcde","initial-advertise-peer-urls":["https://192.168.39.250:2380"],"listen-peer-urls":["https://192.168.39.250:2380"],"advertise-client-urls":["https://192.168.39.250:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.250:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-01T19:17:47.428939Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-01T19:17:48.514901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde is starting a new election at term 3"}
	{"level":"info","ts":"2024-04-01T19:17:48.515016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde became pre-candidate at term 3"}
	{"level":"info","ts":"2024-04-01T19:17:48.515058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde received MsgPreVoteResp from a69e859ffe38fcde at term 3"}
	{"level":"info","ts":"2024-04-01T19:17:48.515089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde became candidate at term 4"}
	{"level":"info","ts":"2024-04-01T19:17:48.515113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde received MsgVoteResp from a69e859ffe38fcde at term 4"}
	{"level":"info","ts":"2024-04-01T19:17:48.51514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a69e859ffe38fcde became leader at term 4"}
	{"level":"info","ts":"2024-04-01T19:17:48.515172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a69e859ffe38fcde elected leader a69e859ffe38fcde at term 4"}
	{"level":"info","ts":"2024-04-01T19:17:48.521243Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a69e859ffe38fcde","local-member-attributes":"{Name:pause-208693 ClientURLs:[https://192.168.39.250:2379]}","request-path":"/0/members/a69e859ffe38fcde/attributes","cluster-id":"f7a04275a0bf31","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-01T19:17:48.52132Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T19:17:48.521743Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T19:17:48.523811Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-01T19:17:48.524028Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.250:2379"}
	{"level":"info","ts":"2024-04-01T19:17:48.524195Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-01T19:17:48.524234Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:18:10 up 2 min,  0 users,  load average: 0.98, 0.44, 0.17
	Linux pause-208693 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [90827beb7d452b745ca6b9be1e1cbf187b22f2a83733fa1cf32f65dd51871a94] <==
	W0401 19:17:42.975776       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:42.993135       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.031665       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.114071       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.147042       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.151607       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.163833       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.171771       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.181549       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.232142       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.270973       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.293088       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.315953       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.373722       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.413141       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.435343       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.437845       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.443545       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.495226       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.505338       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.533202       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.568291       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.569695       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.618256       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:17:43.942305       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [a040f7057b6a33daba995297514b7b885fc9fe29532005c42b5e51d13105fdb9] <==
	I0401 19:17:49.972800       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0401 19:17:49.974947       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0401 19:17:49.975065       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0401 19:17:50.013986       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0401 19:17:50.014075       1 shared_informer.go:318] Caches are synced for configmaps
	I0401 19:17:50.013996       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0401 19:17:50.014148       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0401 19:17:50.014232       1 aggregator.go:165] initial CRD sync complete...
	I0401 19:17:50.014256       1 autoregister_controller.go:141] Starting autoregister controller
	I0401 19:17:50.014277       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0401 19:17:50.014300       1 cache.go:39] Caches are synced for autoregister controller
	I0401 19:17:50.014009       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0401 19:17:50.019329       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0401 19:17:50.020298       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0401 19:17:50.027511       1 shared_informer.go:318] Caches are synced for node_authorizer
	E0401 19:17:50.038613       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0401 19:17:50.050727       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 19:17:50.915767       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 19:17:51.724531       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0401 19:17:51.751001       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0401 19:17:51.809560       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0401 19:17:51.843107       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 19:17:51.852162       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 19:18:02.558408       1 controller.go:624] quota admission added evaluator for: endpoints
	I0401 19:18:02.579084       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4d177654682bcbda844c640fa087f542ecebee05bce474ac4d3d8194d5ef6b06] <==
	I0401 19:18:02.508734       1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner
	I0401 19:18:02.510569       1 shared_informer.go:318] Caches are synced for attach detach
	I0401 19:18:02.520736       1 shared_informer.go:318] Caches are synced for taint
	I0401 19:18:02.520984       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0401 19:18:02.521131       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-208693"
	I0401 19:18:02.521172       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0401 19:18:02.521363       1 shared_informer.go:318] Caches are synced for resource quota
	I0401 19:18:02.521826       1 event.go:376] "Event occurred" object="pause-208693" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-208693 event: Registered Node pause-208693 in Controller"
	I0401 19:18:02.527005       1 shared_informer.go:318] Caches are synced for job
	I0401 19:18:02.527272       1 shared_informer.go:318] Caches are synced for persistent volume
	I0401 19:18:02.533505       1 shared_informer.go:318] Caches are synced for ephemeral
	I0401 19:18:02.533521       1 shared_informer.go:318] Caches are synced for daemon sets
	I0401 19:18:02.540423       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0401 19:18:02.545970       1 shared_informer.go:318] Caches are synced for disruption
	I0401 19:18:02.549474       1 shared_informer.go:318] Caches are synced for endpoint
	I0401 19:18:02.552991       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0401 19:18:02.553197       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="110.265µs"
	I0401 19:18:02.559080       1 shared_informer.go:318] Caches are synced for taint-eviction-controller
	I0401 19:18:02.568059       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0401 19:18:02.569735       1 shared_informer.go:318] Caches are synced for stateful set
	I0401 19:18:02.569917       1 shared_informer.go:318] Caches are synced for resource quota
	I0401 19:18:02.612284       1 shared_informer.go:318] Caches are synced for HPA
	I0401 19:18:02.961171       1 shared_informer.go:318] Caches are synced for garbage collector
	I0401 19:18:03.017168       1 shared_informer.go:318] Caches are synced for garbage collector
	I0401 19:18:03.017330       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [a191a299d42de032a4e1b058d778aeb8a768699852f90479ed27525750c39dcb] <==
	I0401 19:17:24.547397       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0401 19:17:24.547446       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 19:17:24.549735       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0401 19:17:24.550108       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0401 19:17:24.551144       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0401 19:17:24.551268       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0401 19:17:27.970811       1 controllermanager.go:735] "Started controller" controller="serviceaccount-token-controller"
	I0401 19:17:27.970910       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0401 19:17:27.977125       1 controllermanager.go:735] "Started controller" controller="persistentvolume-protection-controller"
	I0401 19:17:27.977268       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0401 19:17:27.977302       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0401 19:17:27.980038       1 controllermanager.go:735] "Started controller" controller="endpointslice-mirroring-controller"
	I0401 19:17:27.980361       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0401 19:17:27.980397       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0401 19:17:27.983084       1 controllermanager.go:735] "Started controller" controller="replicationcontroller-controller"
	I0401 19:17:27.983193       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0401 19:17:27.983440       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0401 19:17:27.992120       1 controllermanager.go:735] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0401 19:17:27.992321       1 horizontal.go:200] "Starting HPA controller"
	I0401 19:17:27.992364       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0401 19:17:27.994619       1 controllermanager.go:735] "Started controller" controller="token-cleaner-controller"
	I0401 19:17:27.994937       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0401 19:17:27.994975       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0401 19:17:27.994982       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0401 19:17:28.072162       1 shared_informer.go:318] Caches are synced for tokens
	
	
	==> kube-proxy [ee316adbd86a389d2cee2a243bb53d9623bdc19f7f4ada9f6d1dca071d0882d0] <==
	I0401 19:17:51.440496       1 server_others.go:72] "Using iptables proxy"
	I0401 19:17:51.460403       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.250"]
	I0401 19:17:51.544498       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0401 19:17:51.544541       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 19:17:51.544560       1 server_others.go:168] "Using iptables Proxier"
	I0401 19:17:51.557985       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 19:17:51.558271       1 server.go:865] "Version info" version="v1.29.3"
	I0401 19:17:51.558316       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 19:17:51.562322       1 config.go:188] "Starting service config controller"
	I0401 19:17:51.562370       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0401 19:17:51.562402       1 config.go:97] "Starting endpoint slice config controller"
	I0401 19:17:51.562407       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0401 19:17:51.563736       1 config.go:315] "Starting node config controller"
	I0401 19:17:51.563779       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0401 19:17:51.664116       1 shared_informer.go:318] Caches are synced for node config
	I0401 19:17:51.664140       1 shared_informer.go:318] Caches are synced for service config
	I0401 19:17:51.664180       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ffbcd93cd4dc6dd7bdba8817fe9464043c1441e48a2f0a339d8e2f90465c23b2] <==
	I0401 19:17:23.918297       1 server_others.go:72] "Using iptables proxy"
	E0401 19:17:25.993969       1 server.go:1039] "Failed to retrieve node info" err="nodes \"pause-208693\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found]"
	I0401 19:17:27.084108       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.250"]
	I0401 19:17:27.233419       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0401 19:17:27.233560       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 19:17:27.233591       1 server_others.go:168] "Using iptables Proxier"
	I0401 19:17:27.247364       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 19:17:27.247627       1 server.go:865] "Version info" version="v1.29.3"
	I0401 19:17:27.247686       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 19:17:27.252289       1 config.go:188] "Starting service config controller"
	I0401 19:17:27.252403       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0401 19:17:27.253206       1 config.go:97] "Starting endpoint slice config controller"
	I0401 19:17:27.253260       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0401 19:17:27.253775       1 config.go:315] "Starting node config controller"
	I0401 19:17:27.253923       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0401 19:17:27.352579       1 shared_informer.go:318] Caches are synced for service config
	I0401 19:17:27.353961       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0401 19:17:27.354407       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [354ccfac9527c54ac400591dde36f20a21b0f39232cee1442492d045c16195b2] <==
	I0401 19:17:47.851110       1 serving.go:380] Generated self-signed cert in-memory
	W0401 19:17:49.977593       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0401 19:17:49.977710       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 19:17:49.977800       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0401 19:17:49.977834       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0401 19:17:50.012944       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0401 19:17:50.013094       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 19:17:50.016954       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0401 19:17:50.017059       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 19:17:50.021707       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0401 19:17:50.022459       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0401 19:17:50.118149       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [eb0be624c77f6b7779d07398d8ac81b11ff2d1e2491332385f9bc7abd08da4d1] <==
	I0401 19:17:24.176095       1 serving.go:380] Generated self-signed cert in-memory
	W0401 19:17:25.986695       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0401 19:17:25.986794       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 19:17:25.986810       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0401 19:17:25.986816       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0401 19:17:26.021044       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0401 19:17:26.021214       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 19:17:26.024953       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0401 19:17:26.025265       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0401 19:17:26.025314       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 19:17:26.025352       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0401 19:17:26.126450       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 19:17:33.797292       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0401 19:17:33.797396       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0401 19:17:33.797529       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0401 19:17:33.798240       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 01 19:17:46 pause-208693 kubelet[3050]: I0401 19:17:46.568331    3050 scope.go:117] "RemoveContainer" containerID="90827beb7d452b745ca6b9be1e1cbf187b22f2a83733fa1cf32f65dd51871a94"
	Apr 01 19:17:46 pause-208693 kubelet[3050]: I0401 19:17:46.669430    3050 kubelet_node_status.go:73] "Attempting to register node" node="pause-208693"
	Apr 01 19:17:46 pause-208693 kubelet[3050]: E0401 19:17:46.670438    3050 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.250:8443: connect: connection refused" node="pause-208693"
	Apr 01 19:17:46 pause-208693 kubelet[3050]: W0401 19:17:46.874223    3050 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-208693&limit=500&resourceVersion=0": dial tcp 192.168.39.250:8443: connect: connection refused
	Apr 01 19:17:46 pause-208693 kubelet[3050]: E0401 19:17:46.874314    3050 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-208693&limit=500&resourceVersion=0": dial tcp 192.168.39.250:8443: connect: connection refused
	Apr 01 19:17:46 pause-208693 kubelet[3050]: W0401 19:17:46.891122    3050 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.250:8443: connect: connection refused
	Apr 01 19:17:46 pause-208693 kubelet[3050]: E0401 19:17:46.891204    3050 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.250:8443: connect: connection refused
	Apr 01 19:17:47 pause-208693 kubelet[3050]: W0401 19:17:47.051783    3050 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.250:8443: connect: connection refused
	Apr 01 19:17:47 pause-208693 kubelet[3050]: E0401 19:17:47.051968    3050 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.250:8443: connect: connection refused
	Apr 01 19:17:47 pause-208693 kubelet[3050]: W0401 19:17:47.065777    3050 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.250:8443: connect: connection refused
	Apr 01 19:17:47 pause-208693 kubelet[3050]: E0401 19:17:47.065836    3050 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.250:8443: connect: connection refused
	Apr 01 19:17:47 pause-208693 kubelet[3050]: I0401 19:17:47.473358    3050 kubelet_node_status.go:73] "Attempting to register node" node="pause-208693"
	Apr 01 19:17:50 pause-208693 kubelet[3050]: I0401 19:17:50.084995    3050 kubelet_node_status.go:112] "Node was previously registered" node="pause-208693"
	Apr 01 19:17:50 pause-208693 kubelet[3050]: I0401 19:17:50.085124    3050 kubelet_node_status.go:76] "Successfully registered node" node="pause-208693"
	Apr 01 19:17:50 pause-208693 kubelet[3050]: I0401 19:17:50.087055    3050 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 01 19:17:50 pause-208693 kubelet[3050]: I0401 19:17:50.088308    3050 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 01 19:17:50 pause-208693 kubelet[3050]: I0401 19:17:50.920320    3050 apiserver.go:52] "Watching apiserver"
	Apr 01 19:17:50 pause-208693 kubelet[3050]: I0401 19:17:50.923173    3050 topology_manager.go:215] "Topology Admit Handler" podUID="acb3c498-4e8d-4a02-b9d7-8a368f9303d0" podNamespace="kube-system" podName="kube-proxy-df6ns"
	Apr 01 19:17:50 pause-208693 kubelet[3050]: I0401 19:17:50.923721    3050 topology_manager.go:215] "Topology Admit Handler" podUID="c9bf80ea-9ada-4a47-bab1-e78b9223d2a8" podNamespace="kube-system" podName="coredns-76f75df574-rldp9"
	Apr 01 19:17:50 pause-208693 kubelet[3050]: I0401 19:17:50.936131    3050 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Apr 01 19:17:50 pause-208693 kubelet[3050]: I0401 19:17:50.996957    3050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acb3c498-4e8d-4a02-b9d7-8a368f9303d0-lib-modules\") pod \"kube-proxy-df6ns\" (UID: \"acb3c498-4e8d-4a02-b9d7-8a368f9303d0\") " pod="kube-system/kube-proxy-df6ns"
	Apr 01 19:17:50 pause-208693 kubelet[3050]: I0401 19:17:50.997039    3050 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acb3c498-4e8d-4a02-b9d7-8a368f9303d0-xtables-lock\") pod \"kube-proxy-df6ns\" (UID: \"acb3c498-4e8d-4a02-b9d7-8a368f9303d0\") " pod="kube-system/kube-proxy-df6ns"
	Apr 01 19:17:51 pause-208693 kubelet[3050]: I0401 19:17:51.224204    3050 scope.go:117] "RemoveContainer" containerID="ffbcd93cd4dc6dd7bdba8817fe9464043c1441e48a2f0a339d8e2f90465c23b2"
	Apr 01 19:17:51 pause-208693 kubelet[3050]: I0401 19:17:51.224649    3050 scope.go:117] "RemoveContainer" containerID="51f0eff30b7c7d78a434ac0cebb793087012ebc1a4e3af4377acb07b114c7b1b"
	Apr 01 19:17:56 pause-208693 kubelet[3050]: I0401 19:17:56.037293    3050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 19:18:09.893621   56502 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18233-10493/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-208693 -n pause-208693
helpers_test.go:261: (dbg) Run:  kubectl --context pause-208693 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (65.30s)
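
A note on the stderr above: "bufio.Scanner: token too long" is the error Go's bufio.Scanner reports (bufio.ErrTooLong) when a single line exceeds its per-token limit, which defaults to bufio.MaxScanTokenSize (64 KiB); the very long single-line cluster-config entries later in this report show how easily a line in lastStart.txt can exceed that. The sketch below is only an illustration of the standard-library API, not minikube's own log-reading code, and the file path is hypothetical.

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Hypothetical path; stands in for a log file with very long lines.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default per-line limit is bufio.MaxScanTokenSize (64 KiB);
		// raise it to 10 MiB so long log lines scan without ErrTooLong.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(len(sc.Text())) // print each line's length
		}
		if err := sc.Err(); err != nil {
			// Without the larger buffer this is bufio.ErrTooLong ("token too long").
			log.Fatal(err)
		}
	}
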

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (284.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-163608 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-163608 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m44.626899815s)

                                                
                                                
-- stdout --
	* [old-k8s-version-163608] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-163608" primary control-plane node in "old-k8s-version-163608" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 19:21:18.097323   64332 out.go:291] Setting OutFile to fd 1 ...
	I0401 19:21:18.097428   64332 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:21:18.097438   64332 out.go:304] Setting ErrFile to fd 2...
	I0401 19:21:18.097442   64332 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:21:18.097629   64332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 19:21:18.098230   64332 out.go:298] Setting JSON to false
	I0401 19:21:18.099170   64332 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7430,"bootTime":1711991848,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:21:18.099234   64332 start.go:139] virtualization: kvm guest
	I0401 19:21:18.101790   64332 out.go:177] * [old-k8s-version-163608] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 19:21:18.103354   64332 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 19:21:18.103238   64332 notify.go:220] Checking for updates...
	I0401 19:21:18.104826   64332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:21:18.106060   64332 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:21:18.107378   64332 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:21:18.108705   64332 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 19:21:18.110037   64332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 19:21:18.111809   64332 config.go:182] Loaded profile config "bridge-408543": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:21:18.111918   64332 config.go:182] Loaded profile config "enable-default-cni-408543": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:21:18.112028   64332 config.go:182] Loaded profile config "flannel-408543": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:21:18.112136   64332 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 19:21:18.152385   64332 out.go:177] * Using the kvm2 driver based on user configuration
	I0401 19:21:18.153685   64332 start.go:297] selected driver: kvm2
	I0401 19:21:18.153705   64332 start.go:901] validating driver "kvm2" against <nil>
	I0401 19:21:18.153720   64332 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 19:21:18.154705   64332 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:21:18.154789   64332 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 19:21:18.169882   64332 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 19:21:18.169935   64332 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 19:21:18.170177   64332 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:21:18.170265   64332 cni.go:84] Creating CNI manager for ""
	I0401 19:21:18.170284   64332 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:21:18.170298   64332 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0401 19:21:18.170376   64332 start.go:340] cluster config:
	{Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:21:18.170516   64332 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:21:18.172640   64332 out.go:177] * Starting "old-k8s-version-163608" primary control-plane node in "old-k8s-version-163608" cluster
	I0401 19:21:18.174222   64332 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 19:21:18.174273   64332 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0401 19:21:18.174288   64332 cache.go:56] Caching tarball of preloaded images
	I0401 19:21:18.174382   64332 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 19:21:18.174397   64332 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0401 19:21:18.174541   64332 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/config.json ...
	I0401 19:21:18.174571   64332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/config.json: {Name:mk912c2271b664999a1c806f839f7ea083f35664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:21:18.174797   64332 start.go:360] acquireMachinesLock for old-k8s-version-163608: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 19:21:28.743550   64332 start.go:364] duration metric: took 10.568720337s to acquireMachinesLock for "old-k8s-version-163608"
	I0401 19:21:28.743629   64332 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:21:28.743749   64332 start.go:125] createHost starting for "" (driver="kvm2")
	I0401 19:21:28.745874   64332 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0401 19:21:28.746097   64332 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:21:28.746155   64332 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:21:28.765720   64332 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44709
	I0401 19:21:28.766115   64332 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:21:28.766957   64332 main.go:141] libmachine: Using API Version  1
	I0401 19:21:28.766985   64332 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:21:28.767381   64332 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:21:28.768800   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:21:28.768991   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:21:28.769142   64332 start.go:159] libmachine.API.Create for "old-k8s-version-163608" (driver="kvm2")
	I0401 19:21:28.769166   64332 client.go:168] LocalClient.Create starting
	I0401 19:21:28.769212   64332 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem
	I0401 19:21:28.769263   64332 main.go:141] libmachine: Decoding PEM data...
	I0401 19:21:28.769297   64332 main.go:141] libmachine: Parsing certificate...
	I0401 19:21:28.769386   64332 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem
	I0401 19:21:28.769416   64332 main.go:141] libmachine: Decoding PEM data...
	I0401 19:21:28.769435   64332 main.go:141] libmachine: Parsing certificate...
	I0401 19:21:28.769460   64332 main.go:141] libmachine: Running pre-create checks...
	I0401 19:21:28.769478   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .PreCreateCheck
	I0401 19:21:28.769878   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetConfigRaw
	I0401 19:21:28.770271   64332 main.go:141] libmachine: Creating machine...
	I0401 19:21:28.770287   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .Create
	I0401 19:21:28.770439   64332 main.go:141] libmachine: (old-k8s-version-163608) Creating KVM machine...
	I0401 19:21:28.771733   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | found existing default KVM network
	I0401 19:21:28.772795   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:21:28.772618   65494 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:66:e9:7e} reservation:<nil>}
	I0401 19:21:28.773961   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:21:28.773873   65494 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002923e0}
	I0401 19:21:28.774001   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | created network xml: 
	I0401 19:21:28.774015   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | <network>
	I0401 19:21:28.774026   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG |   <name>mk-old-k8s-version-163608</name>
	I0401 19:21:28.774034   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG |   <dns enable='no'/>
	I0401 19:21:28.774042   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG |   
	I0401 19:21:28.774051   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0401 19:21:28.774059   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG |     <dhcp>
	I0401 19:21:28.774072   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0401 19:21:28.774081   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG |     </dhcp>
	I0401 19:21:28.774089   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG |   </ip>
	I0401 19:21:28.774098   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG |   
	I0401 19:21:28.774109   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | </network>
	I0401 19:21:28.774118   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | 
	I0401 19:21:28.780203   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | trying to create private KVM network mk-old-k8s-version-163608 192.168.50.0/24...
	I0401 19:21:28.860018   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | private KVM network mk-old-k8s-version-163608 192.168.50.0/24 created
	I0401 19:21:28.860056   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:21:28.859991   65494 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:21:28.860075   64332 main.go:141] libmachine: (old-k8s-version-163608) Setting up store path in /home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608 ...
	I0401 19:21:28.860095   64332 main.go:141] libmachine: (old-k8s-version-163608) Building disk image from file:///home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso
	I0401 19:21:28.860153   64332 main.go:141] libmachine: (old-k8s-version-163608) Downloading /home/jenkins/minikube-integration/18233-10493/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0401 19:21:29.113090   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:21:29.112958   65494 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa...
	I0401 19:21:29.246223   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:21:29.246107   65494 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/old-k8s-version-163608.rawdisk...
	I0401 19:21:29.246258   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | Writing magic tar header
	I0401 19:21:29.246275   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | Writing SSH key tar header
	I0401 19:21:29.246288   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:21:29.246221   65494 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608 ...
	I0401 19:21:29.246327   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608
	I0401 19:21:29.246351   64332 main.go:141] libmachine: (old-k8s-version-163608) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608 (perms=drwx------)
	I0401 19:21:29.246410   64332 main.go:141] libmachine: (old-k8s-version-163608) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube/machines (perms=drwxr-xr-x)
	I0401 19:21:29.246435   64332 main.go:141] libmachine: (old-k8s-version-163608) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube (perms=drwxr-xr-x)
	I0401 19:21:29.246447   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube/machines
	I0401 19:21:29.246465   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:21:29.246494   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493
	I0401 19:21:29.246515   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0401 19:21:29.246530   64332 main.go:141] libmachine: (old-k8s-version-163608) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493 (perms=drwxrwxr-x)
	I0401 19:21:29.246548   64332 main.go:141] libmachine: (old-k8s-version-163608) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0401 19:21:29.246561   64332 main.go:141] libmachine: (old-k8s-version-163608) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0401 19:21:29.246576   64332 main.go:141] libmachine: (old-k8s-version-163608) Creating domain...
	I0401 19:21:29.246586   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | Checking permissions on dir: /home/jenkins
	I0401 19:21:29.246602   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | Checking permissions on dir: /home
	I0401 19:21:29.246616   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | Skipping /home - not owner
	I0401 19:21:29.247609   64332 main.go:141] libmachine: (old-k8s-version-163608) define libvirt domain using xml: 
	I0401 19:21:29.247633   64332 main.go:141] libmachine: (old-k8s-version-163608) <domain type='kvm'>
	I0401 19:21:29.247640   64332 main.go:141] libmachine: (old-k8s-version-163608)   <name>old-k8s-version-163608</name>
	I0401 19:21:29.247645   64332 main.go:141] libmachine: (old-k8s-version-163608)   <memory unit='MiB'>2200</memory>
	I0401 19:21:29.247651   64332 main.go:141] libmachine: (old-k8s-version-163608)   <vcpu>2</vcpu>
	I0401 19:21:29.247655   64332 main.go:141] libmachine: (old-k8s-version-163608)   <features>
	I0401 19:21:29.247664   64332 main.go:141] libmachine: (old-k8s-version-163608)     <acpi/>
	I0401 19:21:29.247670   64332 main.go:141] libmachine: (old-k8s-version-163608)     <apic/>
	I0401 19:21:29.247691   64332 main.go:141] libmachine: (old-k8s-version-163608)     <pae/>
	I0401 19:21:29.247702   64332 main.go:141] libmachine: (old-k8s-version-163608)     
	I0401 19:21:29.247711   64332 main.go:141] libmachine: (old-k8s-version-163608)   </features>
	I0401 19:21:29.247720   64332 main.go:141] libmachine: (old-k8s-version-163608)   <cpu mode='host-passthrough'>
	I0401 19:21:29.247726   64332 main.go:141] libmachine: (old-k8s-version-163608)   
	I0401 19:21:29.247734   64332 main.go:141] libmachine: (old-k8s-version-163608)   </cpu>
	I0401 19:21:29.247765   64332 main.go:141] libmachine: (old-k8s-version-163608)   <os>
	I0401 19:21:29.247792   64332 main.go:141] libmachine: (old-k8s-version-163608)     <type>hvm</type>
	I0401 19:21:29.247803   64332 main.go:141] libmachine: (old-k8s-version-163608)     <boot dev='cdrom'/>
	I0401 19:21:29.247814   64332 main.go:141] libmachine: (old-k8s-version-163608)     <boot dev='hd'/>
	I0401 19:21:29.247825   64332 main.go:141] libmachine: (old-k8s-version-163608)     <bootmenu enable='no'/>
	I0401 19:21:29.247832   64332 main.go:141] libmachine: (old-k8s-version-163608)   </os>
	I0401 19:21:29.247838   64332 main.go:141] libmachine: (old-k8s-version-163608)   <devices>
	I0401 19:21:29.247845   64332 main.go:141] libmachine: (old-k8s-version-163608)     <disk type='file' device='cdrom'>
	I0401 19:21:29.247855   64332 main.go:141] libmachine: (old-k8s-version-163608)       <source file='/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/boot2docker.iso'/>
	I0401 19:21:29.247872   64332 main.go:141] libmachine: (old-k8s-version-163608)       <target dev='hdc' bus='scsi'/>
	I0401 19:21:29.247888   64332 main.go:141] libmachine: (old-k8s-version-163608)       <readonly/>
	I0401 19:21:29.247896   64332 main.go:141] libmachine: (old-k8s-version-163608)     </disk>
	I0401 19:21:29.247906   64332 main.go:141] libmachine: (old-k8s-version-163608)     <disk type='file' device='disk'>
	I0401 19:21:29.247913   64332 main.go:141] libmachine: (old-k8s-version-163608)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0401 19:21:29.247927   64332 main.go:141] libmachine: (old-k8s-version-163608)       <source file='/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/old-k8s-version-163608.rawdisk'/>
	I0401 19:21:29.247935   64332 main.go:141] libmachine: (old-k8s-version-163608)       <target dev='hda' bus='virtio'/>
	I0401 19:21:29.247999   64332 main.go:141] libmachine: (old-k8s-version-163608)     </disk>
	I0401 19:21:29.248025   64332 main.go:141] libmachine: (old-k8s-version-163608)     <interface type='network'>
	I0401 19:21:29.248039   64332 main.go:141] libmachine: (old-k8s-version-163608)       <source network='mk-old-k8s-version-163608'/>
	I0401 19:21:29.248052   64332 main.go:141] libmachine: (old-k8s-version-163608)       <model type='virtio'/>
	I0401 19:21:29.248063   64332 main.go:141] libmachine: (old-k8s-version-163608)     </interface>
	I0401 19:21:29.248075   64332 main.go:141] libmachine: (old-k8s-version-163608)     <interface type='network'>
	I0401 19:21:29.248089   64332 main.go:141] libmachine: (old-k8s-version-163608)       <source network='default'/>
	I0401 19:21:29.248106   64332 main.go:141] libmachine: (old-k8s-version-163608)       <model type='virtio'/>
	I0401 19:21:29.248121   64332 main.go:141] libmachine: (old-k8s-version-163608)     </interface>
	I0401 19:21:29.248133   64332 main.go:141] libmachine: (old-k8s-version-163608)     <serial type='pty'>
	I0401 19:21:29.248144   64332 main.go:141] libmachine: (old-k8s-version-163608)       <target port='0'/>
	I0401 19:21:29.248155   64332 main.go:141] libmachine: (old-k8s-version-163608)     </serial>
	I0401 19:21:29.248182   64332 main.go:141] libmachine: (old-k8s-version-163608)     <console type='pty'>
	I0401 19:21:29.248206   64332 main.go:141] libmachine: (old-k8s-version-163608)       <target type='serial' port='0'/>
	I0401 19:21:29.248219   64332 main.go:141] libmachine: (old-k8s-version-163608)     </console>
	I0401 19:21:29.248232   64332 main.go:141] libmachine: (old-k8s-version-163608)     <rng model='virtio'>
	I0401 19:21:29.248251   64332 main.go:141] libmachine: (old-k8s-version-163608)       <backend model='random'>/dev/random</backend>
	I0401 19:21:29.248269   64332 main.go:141] libmachine: (old-k8s-version-163608)     </rng>
	I0401 19:21:29.248281   64332 main.go:141] libmachine: (old-k8s-version-163608)     
	I0401 19:21:29.248290   64332 main.go:141] libmachine: (old-k8s-version-163608)     
	I0401 19:21:29.248303   64332 main.go:141] libmachine: (old-k8s-version-163608)   </devices>
	I0401 19:21:29.248313   64332 main.go:141] libmachine: (old-k8s-version-163608) </domain>
	I0401 19:21:29.248325   64332 main.go:141] libmachine: (old-k8s-version-163608) 
	I0401 19:21:29.253287   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:9b:d7:b6 in network default
	I0401 19:21:29.254071   64332 main.go:141] libmachine: (old-k8s-version-163608) Ensuring networks are active...
	I0401 19:21:29.254094   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:29.254786   64332 main.go:141] libmachine: (old-k8s-version-163608) Ensuring network default is active
	I0401 19:21:29.255117   64332 main.go:141] libmachine: (old-k8s-version-163608) Ensuring network mk-old-k8s-version-163608 is active
	I0401 19:21:29.255676   64332 main.go:141] libmachine: (old-k8s-version-163608) Getting domain xml...
	I0401 19:21:29.256391   64332 main.go:141] libmachine: (old-k8s-version-163608) Creating domain...
	I0401 19:21:30.619148   64332 main.go:141] libmachine: (old-k8s-version-163608) Waiting to get IP...
	I0401 19:21:30.619902   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:30.620299   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:21:30.620335   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:21:30.620270   65494 retry.go:31] will retry after 285.351579ms: waiting for machine to come up
	I0401 19:21:30.906952   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:30.907581   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:21:30.907610   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:21:30.907528   65494 retry.go:31] will retry after 237.382752ms: waiting for machine to come up
	I0401 19:21:31.147079   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:31.147654   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:21:31.147683   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:21:31.147621   65494 retry.go:31] will retry after 331.306185ms: waiting for machine to come up
	I0401 19:21:31.480094   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:31.480698   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:21:31.480719   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:21:31.480668   65494 retry.go:31] will retry after 422.653134ms: waiting for machine to come up
	I0401 19:21:31.905053   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:31.905612   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:21:31.905634   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:21:31.905530   65494 retry.go:31] will retry after 643.549401ms: waiting for machine to come up
	I0401 19:21:32.550548   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:32.551109   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:21:32.551136   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:21:32.551074   65494 retry.go:31] will retry after 584.677993ms: waiting for machine to come up
	I0401 19:21:33.136924   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:33.145888   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:21:33.145913   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:21:33.137329   65494 retry.go:31] will retry after 744.241781ms: waiting for machine to come up
	I0401 19:21:33.883663   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:33.884109   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:21:33.884135   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:21:33.884069   65494 retry.go:31] will retry after 1.210213814s: waiting for machine to come up
	I0401 19:21:35.095432   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:35.095902   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:21:35.095951   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:21:35.095853   65494 retry.go:31] will retry after 1.746129707s: waiting for machine to come up
	I0401 19:21:36.843074   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:36.843612   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:21:36.843641   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:21:36.843562   65494 retry.go:31] will retry after 2.101668612s: waiting for machine to come up
	I0401 19:21:38.946528   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:38.947173   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:21:38.947202   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:21:38.947114   65494 retry.go:31] will retry after 2.331383647s: waiting for machine to come up
	I0401 19:21:41.281590   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:41.282035   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:21:41.282062   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:21:41.282000   65494 retry.go:31] will retry after 2.687098646s: waiting for machine to come up
	I0401 19:21:43.971003   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:43.971576   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:21:43.971607   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:21:43.971514   65494 retry.go:31] will retry after 3.521386795s: waiting for machine to come up
	I0401 19:21:47.495169   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:47.495585   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:21:47.495607   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:21:47.495544   65494 retry.go:31] will retry after 4.398566984s: waiting for machine to come up
	I0401 19:21:51.895355   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:51.895900   64332 main.go:141] libmachine: (old-k8s-version-163608) Found IP for machine: 192.168.50.106
	I0401 19:21:51.895932   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has current primary IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:51.895942   64332 main.go:141] libmachine: (old-k8s-version-163608) Reserving static IP address...
	I0401 19:21:51.896313   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-163608", mac: "52:54:00:fe:1b:e7", ip: "192.168.50.106"} in network mk-old-k8s-version-163608
	I0401 19:21:51.977441   64332 main.go:141] libmachine: (old-k8s-version-163608) Reserved static IP address: 192.168.50.106
	I0401 19:21:51.977466   64332 main.go:141] libmachine: (old-k8s-version-163608) Waiting for SSH to be available...
	I0401 19:21:51.977486   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | Getting to WaitForSSH function...
	I0401 19:21:51.980802   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:51.981304   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:21:46 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:21:51.981333   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:51.981606   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | Using SSH client type: external
	I0401 19:21:51.981633   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa (-rw-------)
	I0401 19:21:51.981695   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:21:51.981710   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | About to run SSH command:
	I0401 19:21:51.981725   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | exit 0
	I0401 19:21:52.113998   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | SSH cmd err, output: <nil>: 
	I0401 19:21:52.114232   64332 main.go:141] libmachine: (old-k8s-version-163608) KVM machine creation complete!
	I0401 19:21:52.114587   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetConfigRaw
	I0401 19:21:52.115174   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:21:52.115409   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:21:52.115614   64332 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0401 19:21:52.115634   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetState
	I0401 19:21:52.117046   64332 main.go:141] libmachine: Detecting operating system of created instance...
	I0401 19:21:52.117063   64332 main.go:141] libmachine: Waiting for SSH to be available...
	I0401 19:21:52.117070   64332 main.go:141] libmachine: Getting to WaitForSSH function...
	I0401 19:21:52.117080   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:21:52.119664   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:52.120048   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:21:46 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:21:52.120080   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:52.120224   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:21:52.120411   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:21:52.120573   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:21:52.120707   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:21:52.120891   64332 main.go:141] libmachine: Using SSH client type: native
	I0401 19:21:52.121091   64332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:21:52.121106   64332 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0401 19:21:52.237611   64332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:21:52.237653   64332 main.go:141] libmachine: Detecting the provisioner...
	I0401 19:21:52.237664   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:21:52.240726   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:52.241161   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:21:46 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:21:52.241187   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:52.241384   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:21:52.241568   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:21:52.241712   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:21:52.241848   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:21:52.241992   64332 main.go:141] libmachine: Using SSH client type: native
	I0401 19:21:52.242215   64332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:21:52.242229   64332 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0401 19:21:52.362929   64332 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0401 19:21:52.362998   64332 main.go:141] libmachine: found compatible host: buildroot
	I0401 19:21:52.363009   64332 main.go:141] libmachine: Provisioning with buildroot...
	I0401 19:21:52.363024   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:21:52.363292   64332 buildroot.go:166] provisioning hostname "old-k8s-version-163608"
	I0401 19:21:52.363312   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:21:52.363518   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:21:52.366696   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:52.367149   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:21:46 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:21:52.367184   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:52.367319   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:21:52.367510   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:21:52.367712   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:21:52.367883   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:21:52.368063   64332 main.go:141] libmachine: Using SSH client type: native
	I0401 19:21:52.368264   64332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:21:52.368279   64332 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-163608 && echo "old-k8s-version-163608" | sudo tee /etc/hostname
	I0401 19:21:52.499285   64332 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-163608
	
	I0401 19:21:52.499321   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:21:52.502632   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:52.503075   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:21:46 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:21:52.503108   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:52.503400   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:21:52.503640   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:21:52.503883   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:21:52.504042   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:21:52.504223   64332 main.go:141] libmachine: Using SSH client type: native
	I0401 19:21:52.504463   64332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:21:52.504490   64332 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-163608' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-163608/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-163608' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:21:52.630994   64332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:21:52.631029   64332 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:21:52.631075   64332 buildroot.go:174] setting up certificates
	I0401 19:21:52.631091   64332 provision.go:84] configureAuth start
	I0401 19:21:52.631109   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:21:52.631397   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:21:52.634450   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:52.634913   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:21:46 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:21:52.634951   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:52.635127   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:21:52.637721   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:52.638115   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:21:46 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:21:52.638142   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:52.638327   64332 provision.go:143] copyHostCerts
	I0401 19:21:52.638396   64332 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:21:52.638406   64332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:21:52.638450   64332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:21:52.638552   64332 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:21:52.638561   64332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:21:52.638582   64332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:21:52.638638   64332 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:21:52.638645   64332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:21:52.638660   64332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:21:52.638706   64332 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-163608 san=[127.0.0.1 192.168.50.106 localhost minikube old-k8s-version-163608]
	I0401 19:21:52.735198   64332 provision.go:177] copyRemoteCerts
	I0401 19:21:52.735267   64332 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:21:52.735295   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:21:52.738411   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:52.738808   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:21:46 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:21:52.738839   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:52.739118   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:21:52.739324   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:21:52.739512   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:21:52.739775   64332 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:21:52.824942   64332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0401 19:21:52.854661   64332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 19:21:52.885575   64332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:21:52.914073   64332 provision.go:87] duration metric: took 282.965833ms to configureAuth
	I0401 19:21:52.914103   64332 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:21:52.914336   64332 config.go:182] Loaded profile config "old-k8s-version-163608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 19:21:52.914429   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:21:52.917661   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:52.918105   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:21:46 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:21:52.918145   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:52.918517   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:21:52.918757   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:21:52.918945   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:21:52.919108   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:21:52.919302   64332 main.go:141] libmachine: Using SSH client type: native
	I0401 19:21:52.919509   64332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:21:52.919534   64332 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:21:53.228265   64332 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:21:53.228298   64332 main.go:141] libmachine: Checking connection to Docker...
	I0401 19:21:53.228318   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetURL
	I0401 19:21:53.229669   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | Using libvirt version 6000000
	I0401 19:21:53.232329   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:53.232743   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:21:46 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:21:53.232807   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:53.233055   64332 main.go:141] libmachine: Docker is up and running!
	I0401 19:21:53.233070   64332 main.go:141] libmachine: Reticulating splines...
	I0401 19:21:53.233076   64332 client.go:171] duration metric: took 24.463898617s to LocalClient.Create
	I0401 19:21:53.233102   64332 start.go:167] duration metric: took 24.463961547s to libmachine.API.Create "old-k8s-version-163608"
	I0401 19:21:53.233142   64332 start.go:293] postStartSetup for "old-k8s-version-163608" (driver="kvm2")
	I0401 19:21:53.233160   64332 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:21:53.233181   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:21:53.233468   64332 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:21:53.233497   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:21:53.235914   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:53.236261   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:21:46 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:21:53.236288   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:53.236555   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:21:53.236742   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:21:53.236924   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:21:53.237089   64332 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:21:53.325518   64332 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:21:53.330941   64332 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:21:53.330973   64332 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:21:53.331045   64332 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:21:53.331160   64332 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:21:53.331279   64332 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:21:53.341946   64332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:21:53.375869   64332 start.go:296] duration metric: took 142.707887ms for postStartSetup
	I0401 19:21:53.375930   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetConfigRaw
	I0401 19:21:53.376552   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:21:53.379576   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:53.379962   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:21:46 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:21:53.379993   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:53.380264   64332 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/config.json ...
	I0401 19:21:53.380502   64332 start.go:128] duration metric: took 24.636740376s to createHost
	I0401 19:21:53.380532   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:21:53.382693   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:53.383081   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:21:46 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:21:53.383097   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:53.383268   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:21:53.383456   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:21:53.383640   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:21:53.383765   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:21:53.383921   64332 main.go:141] libmachine: Using SSH client type: native
	I0401 19:21:53.384119   64332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:21:53.384130   64332 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 19:21:53.495396   64332 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999313.462534481
	
	I0401 19:21:53.495424   64332 fix.go:216] guest clock: 1711999313.462534481
	I0401 19:21:53.495434   64332 fix.go:229] Guest: 2024-04-01 19:21:53.462534481 +0000 UTC Remote: 2024-04-01 19:21:53.38051794 +0000 UTC m=+35.331578637 (delta=82.016541ms)
	I0401 19:21:53.495458   64332 fix.go:200] guest clock delta is within tolerance: 82.016541ms
	I0401 19:21:53.495465   64332 start.go:83] releasing machines lock for "old-k8s-version-163608", held for 24.751868928s
	I0401 19:21:53.495504   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:21:53.495808   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:21:53.499034   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:53.499619   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:21:46 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:21:53.499641   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:53.499925   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:21:53.500664   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:21:53.500864   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:21:53.500956   64332 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:21:53.501033   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:21:53.501098   64332 ssh_runner.go:195] Run: cat /version.json
	I0401 19:21:53.501127   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:21:53.504605   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:53.505347   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:53.505547   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:21:46 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:21:53.505596   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:53.505876   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:21:53.505925   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:21:46 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:21:53.505957   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:53.506090   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:21:53.506203   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:21:53.506275   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:21:53.506320   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:21:53.506360   64332 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:21:53.506437   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:21:53.506780   64332 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:21:53.597012   64332 ssh_runner.go:195] Run: systemctl --version
	I0401 19:21:53.620032   64332 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:21:53.799671   64332 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:21:53.810551   64332 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:21:53.810634   64332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:21:53.840707   64332 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:21:53.840730   64332 start.go:494] detecting cgroup driver to use...
	I0401 19:21:53.840787   64332 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:21:53.867102   64332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:21:53.885720   64332 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:21:53.885788   64332 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:21:53.907613   64332 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:21:53.926800   64332 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:21:54.081934   64332 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:21:54.251483   64332 docker.go:233] disabling docker service ...
	I0401 19:21:54.251550   64332 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:21:54.282336   64332 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:21:54.301773   64332 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:21:54.482827   64332 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:21:54.653763   64332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:21:54.670468   64332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:21:54.693806   64332 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0401 19:21:54.693860   64332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:21:54.708911   64332 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:21:54.708972   64332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:21:54.724165   64332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:21:54.741290   64332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:21:54.758557   64332 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:21:54.775536   64332 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:21:54.789294   64332 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:21:54.789401   64332 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:21:54.810121   64332 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:21:54.822445   64332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:21:55.012578   64332 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:21:55.201790   64332 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:21:55.201869   64332 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:21:55.207759   64332 start.go:562] Will wait 60s for crictl version
	I0401 19:21:55.207822   64332 ssh_runner.go:195] Run: which crictl
	I0401 19:21:55.212415   64332 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:21:55.258126   64332 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:21:55.258205   64332 ssh_runner.go:195] Run: crio --version
	I0401 19:21:55.296320   64332 ssh_runner.go:195] Run: crio --version
	I0401 19:21:55.335150   64332 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0401 19:21:55.336602   64332 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:21:55.339557   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:55.339956   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:21:46 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:21:55.339994   64332 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:21:55.340149   64332 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0401 19:21:55.345088   64332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:21:55.360734   64332 kubeadm.go:877] updating cluster {Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:21:55.360866   64332 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 19:21:55.360926   64332 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:21:55.403881   64332 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 19:21:55.403966   64332 ssh_runner.go:195] Run: which lz4
	I0401 19:21:55.409107   64332 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 19:21:55.414459   64332 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:21:55.414490   64332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0401 19:21:57.527589   64332 crio.go:462] duration metric: took 2.118509582s to copy over tarball
	I0401 19:21:57.527669   64332 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 19:22:00.694117   64332 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.166415986s)
	I0401 19:22:00.694143   64332 crio.go:469] duration metric: took 3.16652444s to extract the tarball
	I0401 19:22:00.694153   64332 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 19:22:00.774370   64332 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:22:00.920462   64332 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 19:22:00.920487   64332 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 19:22:00.920561   64332 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:22:00.920816   64332 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:22:00.920947   64332 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:22:00.921061   64332 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:22:00.921165   64332 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:22:00.921290   64332 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0401 19:22:00.921423   64332 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:22:00.921533   64332 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0401 19:22:00.923385   64332 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:22:00.923394   64332 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:22:00.923389   64332 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:22:00.923490   64332 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0401 19:22:00.924520   64332 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:22:00.924532   64332 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:22:00.924615   64332 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:22:00.930046   64332 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0401 19:22:01.084552   64332 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0401 19:22:01.098443   64332 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0401 19:22:01.099175   64332 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:22:01.102961   64332 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:22:01.124509   64332 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:22:01.127536   64332 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0401 19:22:01.141359   64332 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:22:01.225993   64332 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:22:01.294969   64332 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0401 19:22:01.295023   64332 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0401 19:22:01.295069   64332 ssh_runner.go:195] Run: which crictl
	I0401 19:22:01.305867   64332 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0401 19:22:01.305909   64332 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:22:01.305950   64332 ssh_runner.go:195] Run: which crictl
	I0401 19:22:01.431081   64332 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0401 19:22:01.431127   64332 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:22:01.431179   64332 ssh_runner.go:195] Run: which crictl
	I0401 19:22:01.466551   64332 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0401 19:22:01.466593   64332 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:22:01.466658   64332 ssh_runner.go:195] Run: which crictl
	I0401 19:22:01.508993   64332 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0401 19:22:01.509043   64332 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:22:01.509098   64332 ssh_runner.go:195] Run: which crictl
	I0401 19:22:01.509167   64332 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0401 19:22:01.509202   64332 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0401 19:22:01.509238   64332 ssh_runner.go:195] Run: which crictl
	I0401 19:22:01.509257   64332 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0401 19:22:01.509280   64332 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:22:01.509318   64332 ssh_runner.go:195] Run: which crictl
	I0401 19:22:01.537905   64332 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 19:22:01.537966   64332 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 19:22:01.538020   64332 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:22:01.538053   64332 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:22:01.538085   64332 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 19:22:01.538117   64332 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:22:01.538146   64332 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:22:01.765325   64332 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0401 19:22:01.765394   64332 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0401 19:22:01.765438   64332 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0401 19:22:01.765480   64332 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0401 19:22:01.767582   64332 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0401 19:22:01.784415   64332 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0401 19:22:01.784509   64332 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0401 19:22:01.784541   64332 cache_images.go:92] duration metric: took 864.039664ms to LoadCachedImages
	W0401 19:22:01.784622   64332 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0401 19:22:01.784638   64332 kubeadm.go:928] updating node { 192.168.50.106 8443 v1.20.0 crio true true} ...
	I0401 19:22:01.784760   64332 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-163608 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:22:01.784842   64332 ssh_runner.go:195] Run: crio config
	I0401 19:22:01.863346   64332 cni.go:84] Creating CNI manager for ""
	I0401 19:22:01.863376   64332 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:22:01.863392   64332 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:22:01.863417   64332 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.106 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-163608 NodeName:old-k8s-version-163608 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0401 19:22:01.863619   64332 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-163608"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
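For context, the block above is the multi-document kubeadm.yaml that minikube renders for this profile (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) before copying it to /var/tmp/minikube/kubeadm.yaml.new. As a minimal sketch, assuming only the file path shown in the log, the rendered documents can be enumerated with the Go standard library to confirm all four kinds made it into the file:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Path taken from the log above; adjust as needed.
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Split the multi-document YAML on the "---" separators and report each kind.
		for i, doc := range strings.Split(string(data), "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
					fmt.Printf("document %d: %s\n", i, strings.TrimSpace(line))
				}
			}
		}
	}

A kubeadm dry run against the same file (kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run) is another way to sanity-check the rendered config, although it would not have caught the kubelet failure that follows.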
	
	I0401 19:22:01.863698   64332 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0401 19:22:01.875442   64332 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:22:01.875518   64332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:22:01.886256   64332 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0401 19:22:01.905921   64332 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:22:01.925862   64332 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0401 19:22:01.946731   64332 ssh_runner.go:195] Run: grep 192.168.50.106	control-plane.minikube.internal$ /etc/hosts
	I0401 19:22:01.951149   64332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
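The bash one-liner above keeps the control-plane hosts entry idempotent: it drops any existing line for control-plane.minikube.internal, appends the fresh 192.168.50.106 mapping, and copies the result back over /etc/hosts. A rough, hypothetical Go equivalent (IP and hostname hard-coded from this log, error handling reduced to panics) would be:

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.50.106\tcontrol-plane.minikube.internal"

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		// Keep every line except an existing control-plane.minikube.internal mapping.
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		// Writing /etc/hosts needs root, matching the sudo cp in the log.
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}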
	I0401 19:22:01.966200   64332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:22:02.097488   64332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:22:02.122996   64332 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608 for IP: 192.168.50.106
	I0401 19:22:02.123023   64332 certs.go:194] generating shared ca certs ...
	I0401 19:22:02.123045   64332 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:22:02.123216   64332 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:22:02.123292   64332 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:22:02.123306   64332 certs.go:256] generating profile certs ...
	I0401 19:22:02.123398   64332 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/client.key
	I0401 19:22:02.123428   64332 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/client.crt with IP's: []
	I0401 19:22:02.234642   64332 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/client.crt ...
	I0401 19:22:02.234675   64332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/client.crt: {Name:mka621f26e362f7e72c5aeb7f1f6d1bcef6dc2e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:22:02.234863   64332 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/client.key ...
	I0401 19:22:02.234879   64332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/client.key: {Name:mk72a529f205a754a229d3aea4173d8542dfd367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:22:02.234991   64332 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.key.f2de0982
	I0401 19:22:02.235023   64332 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.crt.f2de0982 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.106]
	I0401 19:22:02.663927   64332 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.crt.f2de0982 ...
	I0401 19:22:02.663954   64332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.crt.f2de0982: {Name:mk5481f5b5333d2ff5a3004e1647272c72d2a4d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:22:02.664108   64332 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.key.f2de0982 ...
	I0401 19:22:02.664122   64332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.key.f2de0982: {Name:mk16de14624d3aa251c14d0ffb327c3bd0af9b0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:22:02.664192   64332 certs.go:381] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.crt.f2de0982 -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.crt
	I0401 19:22:02.664277   64332 certs.go:385] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.key.f2de0982 -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.key
	I0401 19:22:02.664332   64332 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.key
	I0401 19:22:02.664346   64332 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.crt with IP's: []
	I0401 19:22:03.007277   64332 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.crt ...
	I0401 19:22:03.007318   64332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.crt: {Name:mk148d51671583c6677a879c334f6355218be012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:22:03.007533   64332 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.key ...
	I0401 19:22:03.007555   64332 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.key: {Name:mk9cf3e2f3c16e8e24fd65a7c23d78aa6fcdbd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
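The client.crt, apiserver.crt and proxy-client.crt steps above are standard x509 signing: per-profile certificates are issued against the already-existing minikubeCA key pair. A minimal, hypothetical sketch of the same shape in Go (subject names and lifetimes chosen here for illustration, not taken from minikube's crypto.go; errors ignored for brevity):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)

	func main() {
		// Self-signed CA, standing in for the pre-existing minikubeCA key pair.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(3, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Client certificate signed by that CA, analogous to the profile's client.crt.
		clientKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		clientTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
		}
		clientDER, _ := x509.CreateCertificate(rand.Reader, clientTmpl, caCert, &clientKey.PublicKey, caKey)

		// PEM-encode to stdout; the real flow writes client.crt and client.key under the profile dir.
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: clientDER})
		pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(clientKey)})
	}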
	I0401 19:22:03.007798   64332 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:22:03.007855   64332 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:22:03.007870   64332 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:22:03.007899   64332 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:22:03.007938   64332 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:22:03.007981   64332 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:22:03.008024   64332 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:22:03.008623   64332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:22:03.047948   64332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:22:03.087078   64332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:22:03.134400   64332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:22:03.170892   64332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 19:22:03.205256   64332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:22:03.237357   64332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:22:03.268181   64332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 19:22:03.302042   64332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:22:03.332226   64332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:22:03.365430   64332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:22:03.400661   64332 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:22:03.420641   64332 ssh_runner.go:195] Run: openssl version
	I0401 19:22:03.427188   64332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:22:03.440444   64332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:22:03.446548   64332 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:22:03.446625   64332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:22:03.453549   64332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:22:03.466430   64332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:22:03.483639   64332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:22:03.489322   64332 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:22:03.489395   64332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:22:03.496215   64332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:22:03.512893   64332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:22:03.527273   64332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:22:03.534104   64332 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:22:03.534167   64332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:22:03.542965   64332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:22:03.557332   64332 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:22:03.562927   64332 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 19:22:03.562988   64332 kubeadm.go:391] StartCluster: {Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:22:03.563071   64332 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:22:03.563119   64332 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:22:03.611607   64332 cri.go:89] found id: ""
	I0401 19:22:03.611699   64332 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 19:22:03.626566   64332 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:22:03.641690   64332 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:22:03.656026   64332 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:22:03.656045   64332 kubeadm.go:156] found existing configuration files:
	
	I0401 19:22:03.656096   64332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:22:03.669288   64332 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:22:03.669355   64332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:22:03.682071   64332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:22:03.695146   64332 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:22:03.695212   64332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:22:03.709345   64332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:22:03.722425   64332 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:22:03.722519   64332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:22:03.733827   64332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:22:03.747838   64332 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:22:03.747911   64332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:22:03.758927   64332 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:22:03.902597   64332 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0401 19:22:03.902692   64332 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:22:04.136018   64332 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:22:04.136160   64332 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:22:04.136289   64332 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:22:04.386099   64332 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:22:04.389242   64332 out.go:204]   - Generating certificates and keys ...
	I0401 19:22:04.389360   64332 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:22:04.389459   64332 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:22:04.583291   64332 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 19:22:04.789416   64332 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0401 19:22:04.932598   64332 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0401 19:22:05.057207   64332 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0401 19:22:05.330488   64332 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0401 19:22:05.331252   64332 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-163608] and IPs [192.168.50.106 127.0.0.1 ::1]
	I0401 19:22:05.408381   64332 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0401 19:22:05.408632   64332 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-163608] and IPs [192.168.50.106 127.0.0.1 ::1]
	I0401 19:22:06.038170   64332 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 19:22:06.167702   64332 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 19:22:06.296475   64332 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0401 19:22:06.296804   64332 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:22:06.504815   64332 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:22:06.631663   64332 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:22:06.955081   64332 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:22:07.319315   64332 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:22:07.339260   64332 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:22:07.342628   64332 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:22:07.342766   64332 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:22:07.518487   64332 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:22:07.520352   64332 out.go:204]   - Booting up control plane ...
	I0401 19:22:07.520488   64332 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:22:07.525725   64332 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:22:07.528051   64332 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:22:07.532779   64332 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:22:07.539918   64332 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:22:47.531699   64332 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0401 19:22:47.532299   64332 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:22:47.532553   64332 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:22:52.533020   64332 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:22:52.533288   64332 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:23:02.532613   64332 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:23:02.532850   64332 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:23:22.532572   64332 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:23:22.532842   64332 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:24:02.534097   64332 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:24:02.534386   64332 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:24:02.534404   64332 kubeadm.go:309] 
	I0401 19:24:02.534442   64332 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0401 19:24:02.534517   64332 kubeadm.go:309] 		timed out waiting for the condition
	I0401 19:24:02.534538   64332 kubeadm.go:309] 
	I0401 19:24:02.534586   64332 kubeadm.go:309] 	This error is likely caused by:
	I0401 19:24:02.534641   64332 kubeadm.go:309] 		- The kubelet is not running
	I0401 19:24:02.534834   64332 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 19:24:02.534851   64332 kubeadm.go:309] 
	I0401 19:24:02.535003   64332 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 19:24:02.535052   64332 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0401 19:24:02.535099   64332 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0401 19:24:02.535110   64332 kubeadm.go:309] 
	I0401 19:24:02.535235   64332 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 19:24:02.535339   64332 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 19:24:02.535352   64332 kubeadm.go:309] 
	I0401 19:24:02.535488   64332 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 19:24:02.535594   64332 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 19:24:02.535692   64332 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0401 19:24:02.535812   64332 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 19:24:02.535827   64332 kubeadm.go:309] 
	I0401 19:24:02.536493   64332 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:24:02.536607   64332 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 19:24:02.536696   64332 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0401 19:24:02.536820   64332 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-163608] and IPs [192.168.50.106 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-163608] and IPs [192.168.50.106 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-163608] and IPs [192.168.50.106 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-163608] and IPs [192.168.50.106 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
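The repeated [kubelet-check] lines in the output above are kubeadm polling the kubelet's local health endpoint on port 10248 until the 4m0s wait-control-plane deadline expires; every probe is refused, which means the kubelet process never came up at all (consistent with the later container listing finding no kube-system containers). A minimal probe of the same shape, written here as an illustrative Go sketch, would be:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Same endpoint kubeadm's [kubelet-check] hits; retry every 5s for up to 4 minutes.
		deadline := time.Now().Add(4 * time.Minute)
		client := &http.Client{Timeout: 2 * time.Second}
		for time.Now().Before(deadline) {
			resp, err := client.Get("http://localhost:10248/healthz")
			if err == nil {
				resp.Body.Close()
				fmt.Println("kubelet healthz:", resp.Status)
				return
			}
			fmt.Println("kubelet not responding:", err)
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for kubelet; check 'journalctl -xeu kubelet'")
	}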
	
	I0401 19:24:02.536872   64332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:24:05.554319   64332 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.017412567s)
	I0401 19:24:05.554403   64332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:24:05.569309   64332 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:24:05.579276   64332 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:24:05.579300   64332 kubeadm.go:156] found existing configuration files:
	
	I0401 19:24:05.579359   64332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:24:05.589088   64332 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:24:05.589146   64332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:24:05.598928   64332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:24:05.609193   64332 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:24:05.609250   64332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:24:05.619298   64332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:24:05.629198   64332 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:24:05.629255   64332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:24:05.638933   64332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:24:05.648754   64332 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:24:05.648798   64332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:24:05.659034   64332 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:24:05.727215   64332 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0401 19:24:05.727316   64332 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:24:05.899028   64332 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:24:05.899177   64332 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:24:05.899296   64332 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:24:06.115337   64332 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:24:06.117349   64332 out.go:204]   - Generating certificates and keys ...
	I0401 19:24:06.117468   64332 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:24:06.117581   64332 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:24:06.117703   64332 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:24:06.117772   64332 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:24:06.117866   64332 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:24:06.117943   64332 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:24:06.118028   64332 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:24:06.118458   64332 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:24:06.119407   64332 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:24:06.120346   64332 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:24:06.120794   64332 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:24:06.120847   64332 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:24:06.309987   64332 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:24:06.566399   64332 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:24:06.671724   64332 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:24:06.759378   64332 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:24:06.776186   64332 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:24:06.777416   64332 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:24:06.777493   64332 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:24:06.967580   64332 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:24:06.969085   64332 out.go:204]   - Booting up control plane ...
	I0401 19:24:06.969223   64332 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:24:06.995661   64332 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:24:06.995741   64332 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:24:06.995866   64332 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:24:07.002154   64332 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:24:47.005429   64332 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0401 19:24:47.005568   64332 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:24:47.006190   64332 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:24:52.006910   64332 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:24:52.007171   64332 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:25:02.008145   64332 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:25:02.008408   64332 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:25:22.007178   64332 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:25:22.007351   64332 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:26:02.007007   64332 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:26:02.007263   64332 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:26:02.007276   64332 kubeadm.go:309] 
	I0401 19:26:02.007324   64332 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0401 19:26:02.007394   64332 kubeadm.go:309] 		timed out waiting for the condition
	I0401 19:26:02.007418   64332 kubeadm.go:309] 
	I0401 19:26:02.007466   64332 kubeadm.go:309] 	This error is likely caused by:
	I0401 19:26:02.007515   64332 kubeadm.go:309] 		- The kubelet is not running
	I0401 19:26:02.007665   64332 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 19:26:02.007677   64332 kubeadm.go:309] 
	I0401 19:26:02.007780   64332 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 19:26:02.007810   64332 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0401 19:26:02.007838   64332 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0401 19:26:02.007850   64332 kubeadm.go:309] 
	I0401 19:26:02.007937   64332 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 19:26:02.008055   64332 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 19:26:02.008073   64332 kubeadm.go:309] 
	I0401 19:26:02.008210   64332 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 19:26:02.008337   64332 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 19:26:02.008437   64332 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0401 19:26:02.008533   64332 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 19:26:02.008545   64332 kubeadm.go:309] 
	I0401 19:26:02.009823   64332 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:26:02.009946   64332 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 19:26:02.010066   64332 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0401 19:26:02.010130   64332 kubeadm.go:393] duration metric: took 3m58.447146496s to StartCluster
	I0401 19:26:02.010169   64332 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:26:02.010215   64332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:26:02.067444   64332 cri.go:89] found id: ""
	I0401 19:26:02.067466   64332 logs.go:276] 0 containers: []
	W0401 19:26:02.067476   64332 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:26:02.067481   64332 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:26:02.067521   64332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:26:02.114766   64332 cri.go:89] found id: ""
	I0401 19:26:02.114789   64332 logs.go:276] 0 containers: []
	W0401 19:26:02.114796   64332 logs.go:278] No container was found matching "etcd"
	I0401 19:26:02.114802   64332 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:26:02.114854   64332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:26:02.164005   64332 cri.go:89] found id: ""
	I0401 19:26:02.164039   64332 logs.go:276] 0 containers: []
	W0401 19:26:02.164048   64332 logs.go:278] No container was found matching "coredns"
	I0401 19:26:02.164054   64332 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:26:02.164107   64332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:26:02.205343   64332 cri.go:89] found id: ""
	I0401 19:26:02.205365   64332 logs.go:276] 0 containers: []
	W0401 19:26:02.205373   64332 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:26:02.205378   64332 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:26:02.205440   64332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:26:02.247235   64332 cri.go:89] found id: ""
	I0401 19:26:02.247261   64332 logs.go:276] 0 containers: []
	W0401 19:26:02.247270   64332 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:26:02.247276   64332 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:26:02.247331   64332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:26:02.290858   64332 cri.go:89] found id: ""
	I0401 19:26:02.290884   64332 logs.go:276] 0 containers: []
	W0401 19:26:02.290892   64332 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:26:02.290897   64332 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:26:02.290958   64332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:26:02.328562   64332 cri.go:89] found id: ""
	I0401 19:26:02.328596   64332 logs.go:276] 0 containers: []
	W0401 19:26:02.328607   64332 logs.go:278] No container was found matching "kindnet"
	I0401 19:26:02.328618   64332 logs.go:123] Gathering logs for kubelet ...
	I0401 19:26:02.328630   64332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:26:02.383310   64332 logs.go:123] Gathering logs for dmesg ...
	I0401 19:26:02.383335   64332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:26:02.397922   64332 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:26:02.397952   64332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:26:02.514395   64332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:26:02.514418   64332 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:26:02.514433   64332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:26:02.605441   64332 logs.go:123] Gathering logs for container status ...
	I0401 19:26:02.605485   64332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0401 19:26:02.658736   64332 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0401 19:26:02.658778   64332 out.go:239] * 
	* 
	W0401 19:26:02.658825   64332 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 19:26:02.658845   64332 out.go:239] * 
	* 
	W0401 19:26:02.659626   64332 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 19:26:02.663045   64332 out.go:177] 
	W0401 19:26:02.664406   64332 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 19:26:02.664446   64332 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0401 19:26:02.664465   64332 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0401 19:26:02.665995   64332 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-163608 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163608 -n old-k8s-version-163608
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163608 -n old-k8s-version-163608: exit status 6 (233.573636ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 19:26:02.938635   70327 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-163608" does not appear in /home/jenkins/minikube-integration/18233-10493/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-163608" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (284.91s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-472858 --alsologtostderr -v=3
E0401 19:23:35.233764   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: no such file or directory
E0401 19:23:40.285102   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/auto-408543/client.crt: no such file or directory
E0401 19:23:52.854488   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 19:23:55.714374   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-472858 --alsologtostderr -v=3: exit status 82 (2m0.520701512s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-472858"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 19:23:30.408809   69518 out.go:291] Setting OutFile to fd 1 ...
	I0401 19:23:30.409054   69518 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:23:30.409066   69518 out.go:304] Setting ErrFile to fd 2...
	I0401 19:23:30.409070   69518 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:23:30.409277   69518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 19:23:30.409562   69518 out.go:298] Setting JSON to false
	I0401 19:23:30.409640   69518 mustload.go:65] Loading cluster: no-preload-472858
	I0401 19:23:30.409986   69518 config.go:182] Loaded profile config "no-preload-472858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0401 19:23:30.410058   69518 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/config.json ...
	I0401 19:23:30.410214   69518 mustload.go:65] Loading cluster: no-preload-472858
	I0401 19:23:30.410307   69518 config.go:182] Loaded profile config "no-preload-472858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0401 19:23:30.410333   69518 stop.go:39] StopHost: no-preload-472858
	I0401 19:23:30.410759   69518 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:23:30.410812   69518 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:23:30.425132   69518 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35217
	I0401 19:23:30.425671   69518 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:23:30.426212   69518 main.go:141] libmachine: Using API Version  1
	I0401 19:23:30.426236   69518 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:23:30.426556   69518 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:23:30.429259   69518 out.go:177] * Stopping node "no-preload-472858"  ...
	I0401 19:23:30.430909   69518 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0401 19:23:30.430941   69518 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:23:30.431140   69518 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0401 19:23:30.431163   69518 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:23:30.434185   69518 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:23:30.434621   69518 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:22:11 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:23:30.434655   69518 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:23:30.434856   69518 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:23:30.435053   69518 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:23:30.435219   69518 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:23:30.435410   69518 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:23:30.542303   69518 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0401 19:23:30.604219   69518 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0401 19:23:30.667560   69518 main.go:141] libmachine: Stopping "no-preload-472858"...
	I0401 19:23:30.667612   69518 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:23:30.668956   69518 main.go:141] libmachine: (no-preload-472858) Calling .Stop
	I0401 19:23:30.672816   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 0/120
	I0401 19:23:31.674227   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 1/120
	I0401 19:23:32.675603   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 2/120
	I0401 19:23:33.677272   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 3/120
	I0401 19:23:34.679665   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 4/120
	I0401 19:23:35.681255   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 5/120
	I0401 19:23:36.683660   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 6/120
	I0401 19:23:37.684901   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 7/120
	I0401 19:23:38.687208   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 8/120
	I0401 19:23:39.688492   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 9/120
	I0401 19:23:40.690943   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 10/120
	I0401 19:23:41.692902   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 11/120
	I0401 19:23:42.694324   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 12/120
	I0401 19:23:43.696094   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 13/120
	I0401 19:23:44.697697   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 14/120
	I0401 19:23:45.699897   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 15/120
	I0401 19:23:46.701376   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 16/120
	I0401 19:23:47.702863   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 17/120
	I0401 19:23:48.704644   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 18/120
	I0401 19:23:49.706000   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 19/120
	I0401 19:23:50.708141   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 20/120
	I0401 19:23:51.709672   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 21/120
	I0401 19:23:52.711012   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 22/120
	I0401 19:23:53.712459   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 23/120
	I0401 19:23:54.713999   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 24/120
	I0401 19:23:55.716126   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 25/120
	I0401 19:23:56.717596   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 26/120
	I0401 19:23:57.718966   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 27/120
	I0401 19:23:58.720351   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 28/120
	I0401 19:23:59.722204   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 29/120
	I0401 19:24:00.724053   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 30/120
	I0401 19:24:01.725566   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 31/120
	I0401 19:24:02.727048   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 32/120
	I0401 19:24:03.729037   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 33/120
	I0401 19:24:04.730480   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 34/120
	I0401 19:24:05.733065   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 35/120
	I0401 19:24:06.734851   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 36/120
	I0401 19:24:07.736142   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 37/120
	I0401 19:24:08.737471   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 38/120
	I0401 19:24:09.738818   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 39/120
	I0401 19:24:10.741007   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 40/120
	I0401 19:24:11.742363   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 41/120
	I0401 19:24:12.744322   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 42/120
	I0401 19:24:13.745890   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 43/120
	I0401 19:24:14.748557   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 44/120
	I0401 19:24:15.750250   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 45/120
	I0401 19:24:16.751455   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 46/120
	I0401 19:24:17.752751   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 47/120
	I0401 19:24:18.754104   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 48/120
	I0401 19:24:19.756131   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 49/120
	I0401 19:24:20.758291   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 50/120
	I0401 19:24:21.759712   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 51/120
	I0401 19:24:22.761006   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 52/120
	I0401 19:24:23.762464   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 53/120
	I0401 19:24:24.764024   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 54/120
	I0401 19:24:25.766141   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 55/120
	I0401 19:24:26.767380   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 56/120
	I0401 19:24:27.768744   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 57/120
	I0401 19:24:28.769933   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 58/120
	I0401 19:24:29.772020   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 59/120
	I0401 19:24:30.774116   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 60/120
	I0401 19:24:31.775440   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 61/120
	I0401 19:24:32.776831   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 62/120
	I0401 19:24:33.778156   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 63/120
	I0401 19:24:34.780064   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 64/120
	I0401 19:24:35.781926   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 65/120
	I0401 19:24:36.784104   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 66/120
	I0401 19:24:37.786523   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 67/120
	I0401 19:24:38.787892   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 68/120
	I0401 19:24:39.789359   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 69/120
	I0401 19:24:40.791209   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 70/120
	I0401 19:24:41.792545   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 71/120
	I0401 19:24:42.793915   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 72/120
	I0401 19:24:43.795143   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 73/120
	I0401 19:24:44.796567   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 74/120
	I0401 19:24:45.798672   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 75/120
	I0401 19:24:46.800206   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 76/120
	I0401 19:24:47.801478   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 77/120
	I0401 19:24:48.802830   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 78/120
	I0401 19:24:49.804074   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 79/120
	I0401 19:24:50.805988   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 80/120
	I0401 19:24:51.808257   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 81/120
	I0401 19:24:52.809786   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 82/120
	I0401 19:24:53.811069   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 83/120
	I0401 19:24:54.812371   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 84/120
	I0401 19:24:55.814034   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 85/120
	I0401 19:24:56.816397   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 86/120
	I0401 19:24:57.817719   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 87/120
	I0401 19:24:58.819088   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 88/120
	I0401 19:24:59.820472   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 89/120
	I0401 19:25:00.822242   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 90/120
	I0401 19:25:01.824147   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 91/120
	I0401 19:25:02.825525   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 92/120
	I0401 19:25:03.826851   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 93/120
	I0401 19:25:04.828197   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 94/120
	I0401 19:25:05.829658   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 95/120
	I0401 19:25:06.831612   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 96/120
	I0401 19:25:07.832794   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 97/120
	I0401 19:25:08.834151   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 98/120
	I0401 19:25:09.835685   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 99/120
	I0401 19:25:10.837742   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 100/120
	I0401 19:25:11.838915   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 101/120
	I0401 19:25:12.840147   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 102/120
	I0401 19:25:13.841362   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 103/120
	I0401 19:25:14.842648   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 104/120
	I0401 19:25:15.844541   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 105/120
	I0401 19:25:16.845786   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 106/120
	I0401 19:25:17.847222   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 107/120
	I0401 19:25:18.848389   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 108/120
	I0401 19:25:19.849792   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 109/120
	I0401 19:25:20.851789   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 110/120
	I0401 19:25:21.853173   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 111/120
	I0401 19:25:22.854526   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 112/120
	I0401 19:25:23.856144   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 113/120
	I0401 19:25:24.857612   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 114/120
	I0401 19:25:25.859519   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 115/120
	I0401 19:25:26.860707   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 116/120
	I0401 19:25:27.862374   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 117/120
	I0401 19:25:28.863652   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 118/120
	I0401 19:25:29.864902   69518 main.go:141] libmachine: (no-preload-472858) Waiting for machine to stop 119/120
	I0401 19:25:30.865914   69518 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0401 19:25:30.865959   69518 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0401 19:25:30.867780   69518 out.go:177] 
	W0401 19:25:30.869135   69518 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0401 19:25:30.869148   69518 out.go:239] * 
	* 
	W0401 19:25:30.872668   69518 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 19:25:30.873914   69518 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-472858 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-472858 -n no-preload-472858
E0401 19:25:43.166283   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/auto-408543/client.crt: no such file or directory
E0401 19:25:47.137567   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/custom-flannel-408543/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-472858 -n no-preload-472858: exit status 3 (18.626908574s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 19:25:49.501975   70107 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.119:22: connect: no route to host
	E0401 19:25:49.501993   70107 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.119:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-472858" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.15s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-882095 --alsologtostderr -v=3
E0401 19:24:16.856365   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
E0401 19:24:21.246050   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/auto-408543/client.crt: no such file or directory
E0401 19:24:36.675469   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: no such file or directory
E0401 19:24:45.496598   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/calico-408543/client.crt: no such file or directory
E0401 19:24:45.501857   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/calico-408543/client.crt: no such file or directory
E0401 19:24:45.512099   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/calico-408543/client.crt: no such file or directory
E0401 19:24:45.532377   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/calico-408543/client.crt: no such file or directory
E0401 19:24:45.572624   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/calico-408543/client.crt: no such file or directory
E0401 19:24:45.652922   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/calico-408543/client.crt: no such file or directory
E0401 19:24:45.813932   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/calico-408543/client.crt: no such file or directory
E0401 19:24:46.134713   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/calico-408543/client.crt: no such file or directory
E0401 19:24:46.775077   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/calico-408543/client.crt: no such file or directory
E0401 19:24:48.055975   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/calico-408543/client.crt: no such file or directory
E0401 19:24:50.617169   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/calico-408543/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-882095 --alsologtostderr -v=3: exit status 82 (2m0.554535196s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-882095"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 19:24:14.974260   69779 out.go:291] Setting OutFile to fd 1 ...
	I0401 19:24:14.974393   69779 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:24:14.974405   69779 out.go:304] Setting ErrFile to fd 2...
	I0401 19:24:14.974412   69779 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:24:14.974617   69779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 19:24:14.974903   69779 out.go:298] Setting JSON to false
	I0401 19:24:14.975006   69779 mustload.go:65] Loading cluster: embed-certs-882095
	I0401 19:24:14.975377   69779 config.go:182] Loaded profile config "embed-certs-882095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:24:14.975453   69779 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/config.json ...
	I0401 19:24:14.975638   69779 mustload.go:65] Loading cluster: embed-certs-882095
	I0401 19:24:14.975760   69779 config.go:182] Loaded profile config "embed-certs-882095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:24:14.975792   69779 stop.go:39] StopHost: embed-certs-882095
	I0401 19:24:14.976231   69779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:24:14.976297   69779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:24:14.991750   69779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38495
	I0401 19:24:14.992313   69779 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:24:14.993051   69779 main.go:141] libmachine: Using API Version  1
	I0401 19:24:14.993083   69779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:24:14.993491   69779 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:24:14.996161   69779 out.go:177] * Stopping node "embed-certs-882095"  ...
	I0401 19:24:14.997371   69779 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0401 19:24:14.997402   69779 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:24:14.997692   69779 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0401 19:24:14.997757   69779 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:24:15.000775   69779 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:24:15.001140   69779 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:22:40 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:24:15.001174   69779 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:24:15.001405   69779 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:24:15.001578   69779 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:24:15.001726   69779 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:24:15.001894   69779 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:24:15.151305   69779 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0401 19:24:15.224739   69779 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0401 19:24:15.274849   69779 main.go:141] libmachine: Stopping "embed-certs-882095"...
	I0401 19:24:15.274880   69779 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:24:15.276647   69779 main.go:141] libmachine: (embed-certs-882095) Calling .Stop
	I0401 19:24:15.281141   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 0/120
	I0401 19:24:16.282516   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 1/120
	I0401 19:24:17.283976   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 2/120
	I0401 19:24:18.285670   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 3/120
	I0401 19:24:19.287265   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 4/120
	I0401 19:24:20.289344   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 5/120
	I0401 19:24:21.290673   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 6/120
	I0401 19:24:22.291996   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 7/120
	I0401 19:24:23.293375   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 8/120
	I0401 19:24:24.294647   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 9/120
	I0401 19:24:25.295984   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 10/120
	I0401 19:24:26.297428   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 11/120
	I0401 19:24:27.298783   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 12/120
	I0401 19:24:28.300132   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 13/120
	I0401 19:24:29.301473   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 14/120
	I0401 19:24:30.303319   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 15/120
	I0401 19:24:31.304724   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 16/120
	I0401 19:24:32.305949   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 17/120
	I0401 19:24:33.308119   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 18/120
	I0401 19:24:34.309369   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 19/120
	I0401 19:24:35.311395   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 20/120
	I0401 19:24:36.312821   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 21/120
	I0401 19:24:37.314110   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 22/120
	I0401 19:24:38.315812   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 23/120
	I0401 19:24:39.316904   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 24/120
	I0401 19:24:40.318217   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 25/120
	I0401 19:24:41.319629   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 26/120
	I0401 19:24:42.320898   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 27/120
	I0401 19:24:43.322425   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 28/120
	I0401 19:24:44.323974   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 29/120
	I0401 19:24:45.325976   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 30/120
	I0401 19:24:46.327419   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 31/120
	I0401 19:24:47.328648   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 32/120
	I0401 19:24:48.330348   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 33/120
	I0401 19:24:49.331441   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 34/120
	I0401 19:24:50.333390   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 35/120
	I0401 19:24:51.334608   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 36/120
	I0401 19:24:52.335816   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 37/120
	I0401 19:24:53.337041   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 38/120
	I0401 19:24:54.338511   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 39/120
	I0401 19:24:55.340369   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 40/120
	I0401 19:24:56.342498   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 41/120
	I0401 19:24:57.344178   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 42/120
	I0401 19:24:58.345386   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 43/120
	I0401 19:24:59.346672   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 44/120
	I0401 19:25:00.348530   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 45/120
	I0401 19:25:01.350102   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 46/120
	I0401 19:25:02.352094   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 47/120
	I0401 19:25:03.354197   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 48/120
	I0401 19:25:04.355572   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 49/120
	I0401 19:25:05.357350   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 50/120
	I0401 19:25:06.359828   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 51/120
	I0401 19:25:07.361188   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 52/120
	I0401 19:25:08.362393   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 53/120
	I0401 19:25:09.363707   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 54/120
	I0401 19:25:10.365316   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 55/120
	I0401 19:25:11.366551   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 56/120
	I0401 19:25:12.367962   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 57/120
	I0401 19:25:13.369089   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 58/120
	I0401 19:25:14.370515   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 59/120
	I0401 19:25:15.372538   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 60/120
	I0401 19:25:16.373833   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 61/120
	I0401 19:25:17.375079   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 62/120
	I0401 19:25:18.376350   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 63/120
	I0401 19:25:19.377497   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 64/120
	I0401 19:25:20.379261   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 65/120
	I0401 19:25:21.380621   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 66/120
	I0401 19:25:22.382001   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 67/120
	I0401 19:25:23.384124   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 68/120
	I0401 19:25:24.385264   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 69/120
	I0401 19:25:25.387118   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 70/120
	I0401 19:25:26.388546   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 71/120
	I0401 19:25:27.389661   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 72/120
	I0401 19:25:28.390820   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 73/120
	I0401 19:25:29.391951   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 74/120
	I0401 19:25:30.393613   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 75/120
	I0401 19:25:31.394795   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 76/120
	I0401 19:25:32.396100   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 77/120
	I0401 19:25:33.397294   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 78/120
	I0401 19:25:34.398478   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 79/120
	I0401 19:25:35.400387   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 80/120
	I0401 19:25:36.401508   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 81/120
	I0401 19:25:37.402599   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 82/120
	I0401 19:25:38.403869   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 83/120
	I0401 19:25:39.404994   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 84/120
	I0401 19:25:40.406597   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 85/120
	I0401 19:25:41.407968   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 86/120
	I0401 19:25:42.409313   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 87/120
	I0401 19:25:43.410685   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 88/120
	I0401 19:25:44.411915   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 89/120
	I0401 19:25:45.414104   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 90/120
	I0401 19:25:46.415550   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 91/120
	I0401 19:25:47.416679   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 92/120
	I0401 19:25:48.418003   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 93/120
	I0401 19:25:49.419184   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 94/120
	I0401 19:25:50.420920   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 95/120
	I0401 19:25:51.422311   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 96/120
	I0401 19:25:52.423539   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 97/120
	I0401 19:25:53.424760   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 98/120
	I0401 19:25:54.426059   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 99/120
	I0401 19:25:55.428056   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 100/120
	I0401 19:25:56.429171   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 101/120
	I0401 19:25:57.430520   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 102/120
	I0401 19:25:58.431742   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 103/120
	I0401 19:25:59.433101   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 104/120
	I0401 19:26:00.435080   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 105/120
	I0401 19:26:01.436297   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 106/120
	I0401 19:26:02.437661   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 107/120
	I0401 19:26:03.438982   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 108/120
	I0401 19:26:04.440029   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 109/120
	I0401 19:26:05.442084   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 110/120
	I0401 19:26:06.443351   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 111/120
	I0401 19:26:07.444706   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 112/120
	I0401 19:26:08.445944   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 113/120
	I0401 19:26:09.447129   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 114/120
	I0401 19:26:10.448927   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 115/120
	I0401 19:26:11.450126   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 116/120
	I0401 19:26:12.451338   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 117/120
	I0401 19:26:13.452558   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 118/120
	I0401 19:26:14.453890   69779 main.go:141] libmachine: (embed-certs-882095) Waiting for machine to stop 119/120
	I0401 19:26:15.454591   69779 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0401 19:26:15.454652   69779 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0401 19:26:15.456416   69779 out.go:177] 
	W0401 19:26:15.457618   69779 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0401 19:26:15.457632   69779 out.go:239] * 
	* 
	W0401 19:26:15.461058   69779 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 19:26:15.462503   69779 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-882095 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-882095 -n embed-certs-882095
E0401 19:26:19.225908   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/enable-default-cni-408543/client.crt: no such file or directory
E0401 19:26:28.098620   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/custom-flannel-408543/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-882095 -n embed-certs-882095: exit status 3 (18.581044815s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 19:26:34.045952   70492 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.190:22: connect: no route to host
	E0401 19:26:34.045975   70492 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.190:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-882095" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.14s)
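After each failed stop, the post-mortem status probe cannot open an SSH session to the guest ("dial tcp <ip>:22: connect: no route to host"), so the host state is reported as "Error" with exit status 3 and the helper skips log retrieval. Below is a minimal sketch of such a reachability probe, assuming a plain TCP dial to port 22 stands in for the real SSH session; the hostState helper and the state strings are illustrative only, not minikube's actual status code.

    // Minimal sketch: map an unreachable <ip>:22 to the "Error" host state seen above.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // hostState returns "Running" if a TCP connection to ip:22 can be opened,
    // and "Error" otherwise (e.g. "dial tcp <ip>:22: connect: no route to host").
    func hostState(ip string) string {
        conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), 5*time.Second)
        if err != nil {
            fmt.Println("status error:", err)
            return "Error"
        }
        conn.Close()
        return "Running"
    }

    func main() {
        // 192.168.39.190 is the embed-certs-882095 guest address from the log above.
        state := hostState("192.168.39.190")
        if state != "Running" {
            fmt.Printf("host is not running, skipping log retrieval (state=%q)\n", state)
        }
    }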

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-734648 --alsologtostderr -v=3
E0401 19:25:05.978122   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/calico-408543/client.crt: no such file or directory
E0401 19:25:06.173311   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/custom-flannel-408543/client.crt: no such file or directory
E0401 19:25:06.178597   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/custom-flannel-408543/client.crt: no such file or directory
E0401 19:25:06.189284   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/custom-flannel-408543/client.crt: no such file or directory
E0401 19:25:06.209562   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/custom-flannel-408543/client.crt: no such file or directory
E0401 19:25:06.249867   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/custom-flannel-408543/client.crt: no such file or directory
E0401 19:25:06.330731   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/custom-flannel-408543/client.crt: no such file or directory
E0401 19:25:06.491769   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/custom-flannel-408543/client.crt: no such file or directory
E0401 19:25:06.812369   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/custom-flannel-408543/client.crt: no such file or directory
E0401 19:25:07.453532   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/custom-flannel-408543/client.crt: no such file or directory
E0401 19:25:08.733965   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/custom-flannel-408543/client.crt: no such file or directory
E0401 19:25:11.295138   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/custom-flannel-408543/client.crt: no such file or directory
E0401 19:25:15.902890   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 19:25:16.416124   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/custom-flannel-408543/client.crt: no such file or directory
E0401 19:25:26.458911   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/calico-408543/client.crt: no such file or directory
E0401 19:25:26.657267   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/custom-flannel-408543/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-734648 --alsologtostderr -v=3: exit status 82 (2m0.498467196s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-734648"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 19:25:05.995698   70031 out.go:291] Setting OutFile to fd 1 ...
	I0401 19:25:05.995835   70031 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:25:05.995847   70031 out.go:304] Setting ErrFile to fd 2...
	I0401 19:25:05.995853   70031 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:25:05.996135   70031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 19:25:05.996447   70031 out.go:298] Setting JSON to false
	I0401 19:25:05.996549   70031 mustload.go:65] Loading cluster: default-k8s-diff-port-734648
	I0401 19:25:05.996998   70031 config.go:182] Loaded profile config "default-k8s-diff-port-734648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:25:05.997094   70031 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/config.json ...
	I0401 19:25:05.997322   70031 mustload.go:65] Loading cluster: default-k8s-diff-port-734648
	I0401 19:25:05.997481   70031 config.go:182] Loaded profile config "default-k8s-diff-port-734648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:25:05.997522   70031 stop.go:39] StopHost: default-k8s-diff-port-734648
	I0401 19:25:05.998147   70031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:25:05.998207   70031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:25:06.012565   70031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39079
	I0401 19:25:06.013053   70031 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:25:06.013610   70031 main.go:141] libmachine: Using API Version  1
	I0401 19:25:06.013631   70031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:25:06.014021   70031 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:25:06.016547   70031 out.go:177] * Stopping node "default-k8s-diff-port-734648"  ...
	I0401 19:25:06.017837   70031 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0401 19:25:06.017880   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:25:06.018125   70031 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0401 19:25:06.018156   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:25:06.020841   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:25:06.021364   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:25:06.021399   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:25:06.021497   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:25:06.021704   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:25:06.021844   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:25:06.021991   70031 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:25:06.127918   70031 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0401 19:25:06.181989   70031 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0401 19:25:06.252960   70031 main.go:141] libmachine: Stopping "default-k8s-diff-port-734648"...
	I0401 19:25:06.252987   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:25:06.254442   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Stop
	I0401 19:25:06.257468   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 0/120
	I0401 19:25:07.258817   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 1/120
	I0401 19:25:08.260178   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 2/120
	I0401 19:25:09.261453   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 3/120
	I0401 19:25:10.262883   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 4/120
	I0401 19:25:11.264857   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 5/120
	I0401 19:25:12.266136   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 6/120
	I0401 19:25:13.267624   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 7/120
	I0401 19:25:14.268858   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 8/120
	I0401 19:25:15.270155   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 9/120
	I0401 19:25:16.272080   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 10/120
	I0401 19:25:17.273518   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 11/120
	I0401 19:25:18.274832   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 12/120
	I0401 19:25:19.276192   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 13/120
	I0401 19:25:20.277699   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 14/120
	I0401 19:25:21.279408   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 15/120
	I0401 19:25:22.280726   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 16/120
	I0401 19:25:23.282029   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 17/120
	I0401 19:25:24.283338   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 18/120
	I0401 19:25:25.284567   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 19/120
	I0401 19:25:26.286423   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 20/120
	I0401 19:25:27.287724   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 21/120
	I0401 19:25:28.289205   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 22/120
	I0401 19:25:29.290718   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 23/120
	I0401 19:25:30.292020   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 24/120
	I0401 19:25:31.293485   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 25/120
	I0401 19:25:32.294762   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 26/120
	I0401 19:25:33.296266   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 27/120
	I0401 19:25:34.297557   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 28/120
	I0401 19:25:35.298961   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 29/120
	I0401 19:25:36.300812   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 30/120
	I0401 19:25:37.302036   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 31/120
	I0401 19:25:38.303420   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 32/120
	I0401 19:25:39.304826   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 33/120
	I0401 19:25:40.306110   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 34/120
	I0401 19:25:41.307861   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 35/120
	I0401 19:25:42.309090   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 36/120
	I0401 19:25:43.310361   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 37/120
	I0401 19:25:44.311769   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 38/120
	I0401 19:25:45.312994   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 39/120
	I0401 19:25:46.315025   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 40/120
	I0401 19:25:47.316378   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 41/120
	I0401 19:25:48.317763   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 42/120
	I0401 19:25:49.319073   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 43/120
	I0401 19:25:50.320391   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 44/120
	I0401 19:25:51.322276   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 45/120
	I0401 19:25:52.323493   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 46/120
	I0401 19:25:53.324836   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 47/120
	I0401 19:25:54.326129   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 48/120
	I0401 19:25:55.327531   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 49/120
	I0401 19:25:56.328850   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 50/120
	I0401 19:25:57.330250   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 51/120
	I0401 19:25:58.331565   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 52/120
	I0401 19:25:59.332990   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 53/120
	I0401 19:26:00.334488   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 54/120
	I0401 19:26:01.336182   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 55/120
	I0401 19:26:02.337602   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 56/120
	I0401 19:26:03.338628   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 57/120
	I0401 19:26:04.339897   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 58/120
	I0401 19:26:05.341184   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 59/120
	I0401 19:26:06.342859   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 60/120
	I0401 19:26:07.344199   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 61/120
	I0401 19:26:08.345510   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 62/120
	I0401 19:26:09.346823   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 63/120
	I0401 19:26:10.348087   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 64/120
	I0401 19:26:11.350052   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 65/120
	I0401 19:26:12.352057   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 66/120
	I0401 19:26:13.353255   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 67/120
	I0401 19:26:14.354501   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 68/120
	I0401 19:26:15.355819   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 69/120
	I0401 19:26:16.357929   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 70/120
	I0401 19:26:17.359338   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 71/120
	I0401 19:26:18.360519   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 72/120
	I0401 19:26:19.361829   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 73/120
	I0401 19:26:20.363092   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 74/120
	I0401 19:26:21.364957   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 75/120
	I0401 19:26:22.366267   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 76/120
	I0401 19:26:23.367560   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 77/120
	I0401 19:26:24.368844   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 78/120
	I0401 19:26:25.370285   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 79/120
	I0401 19:26:26.372312   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 80/120
	I0401 19:26:27.373586   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 81/120
	I0401 19:26:28.374858   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 82/120
	I0401 19:26:29.376095   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 83/120
	I0401 19:26:30.377364   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 84/120
	I0401 19:26:31.379191   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 85/120
	I0401 19:26:32.380299   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 86/120
	I0401 19:26:33.381811   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 87/120
	I0401 19:26:34.383073   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 88/120
	I0401 19:26:35.384481   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 89/120
	I0401 19:26:36.386523   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 90/120
	I0401 19:26:37.387811   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 91/120
	I0401 19:26:38.389219   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 92/120
	I0401 19:26:39.390520   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 93/120
	I0401 19:26:40.391883   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 94/120
	I0401 19:26:41.393877   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 95/120
	I0401 19:26:42.395437   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 96/120
	I0401 19:26:43.397289   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 97/120
	I0401 19:26:44.398765   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 98/120
	I0401 19:26:45.400002   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 99/120
	I0401 19:26:46.402173   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 100/120
	I0401 19:26:47.403763   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 101/120
	I0401 19:26:48.405224   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 102/120
	I0401 19:26:49.406732   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 103/120
	I0401 19:26:50.408141   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 104/120
	I0401 19:26:51.410160   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 105/120
	I0401 19:26:52.411546   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 106/120
	I0401 19:26:53.412935   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 107/120
	I0401 19:26:54.414315   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 108/120
	I0401 19:26:55.415775   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 109/120
	I0401 19:26:56.417923   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 110/120
	I0401 19:26:57.419295   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 111/120
	I0401 19:26:58.420706   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 112/120
	I0401 19:26:59.422090   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 113/120
	I0401 19:27:00.423508   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 114/120
	I0401 19:27:01.425402   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 115/120
	I0401 19:27:02.426701   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 116/120
	I0401 19:27:03.428117   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 117/120
	I0401 19:27:04.429488   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 118/120
	I0401 19:27:05.430848   70031 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for machine to stop 119/120
	I0401 19:27:06.431901   70031 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0401 19:27:06.431963   70031 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0401 19:27:06.433761   70031 out.go:177] 
	W0401 19:27:06.435066   70031 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0401 19:27:06.435093   70031 out.go:239] * 
	* 
	W0401 19:27:06.438220   70031 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 19:27:06.439607   70031 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-734648 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-734648 -n default-k8s-diff-port-734648
E0401 19:27:20.666643   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/enable-default-cni-408543/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-734648 -n default-k8s-diff-port-734648: exit status 3 (18.547838073s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 19:27:24.990025   70768 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.145:22: connect: no route to host
	E0401 19:27:24.990057   70768 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.145:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-734648" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.05s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-472858 -n no-preload-472858
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-472858 -n no-preload-472858: exit status 3 (3.167593649s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 19:25:52.669958   70174 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.119:22: connect: no route to host
	E0401 19:25:52.669981   70174 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.119:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-472858 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0401 19:25:58.596157   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: no such file or directory
E0401 19:25:58.744523   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/enable-default-cni-408543/client.crt: no such file or directory
E0401 19:25:58.749761   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/enable-default-cni-408543/client.crt: no such file or directory
E0401 19:25:58.759989   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/enable-default-cni-408543/client.crt: no such file or directory
E0401 19:25:58.780205   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/enable-default-cni-408543/client.crt: no such file or directory
E0401 19:25:58.820301   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/enable-default-cni-408543/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-472858 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153714836s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.119:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-472858 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-472858 -n no-preload-472858
E0401 19:25:58.901305   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/enable-default-cni-408543/client.crt: no such file or directory
E0401 19:25:59.061699   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/enable-default-cni-408543/client.crt: no such file or directory
E0401 19:25:59.382231   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/enable-default-cni-408543/client.crt: no such file or directory
E0401 19:26:00.022869   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/enable-default-cni-408543/client.crt: no such file or directory
E0401 19:26:01.303429   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/enable-default-cni-408543/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-472858 -n no-preload-472858: exit status 3 (3.06194204s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 19:26:01.885958   70243 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.119:22: connect: no route to host
	E0401 19:26:01.885993   70243 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.119:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-472858" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
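
The check that fails above is the post-stop status probe: the harness runs `out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-472858` and expects the host state to read "Stopped", but SSH to 192.168.72.119:22 answers "no route to host", so the state comes back "Error" and the follow-up `addons enable dashboard` fails over the same dead connection. A minimal standalone sketch of that status check in Go (binary path and profile name are copied from the log above; this is illustrative, not the actual code in start_stop_delete_test.go):

// poststopcheck.go - a standalone sketch of the post-stop host status check.
// It shells out to the minikube binary the same way the harness does and
// compares the host state to "Stopped". Binary path and profile name come
// from the log above; everything else is illustrative.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	bin := "out/minikube-linux-amd64"
	profile := "no-preload-472858"

	// Equivalent of: out/minikube-linux-amd64 status --format={{.Host}} -p <profile> -n <profile>
	out, err := exec.Command(bin, "status", "--format={{.Host}}", "-p", profile, "-n", profile).CombinedOutput()
	state := strings.TrimSpace(string(out))
	if err != nil {
		// A non-zero exit (status 3 above) is tolerated by the harness ("may be ok"),
		// but the state string still has to read "Stopped".
		fmt.Fprintf(os.Stderr, "status exited with error: %v (state=%q)\n", err, state)
	}
	if state != "Stopped" {
		fmt.Printf("expected post-stop host status to be %q but got %q\n", "Stopped", state)
		os.Exit(1)
	}
	fmt.Println("host is Stopped; safe to enable addons")
}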

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-163608 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-163608 create -f testdata/busybox.yaml: exit status 1 (40.676129ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-163608" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-163608 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163608 -n old-k8s-version-163608
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163608 -n old-k8s-version-163608: exit status 6 (226.88316ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 19:26:03.207285   70367 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-163608" does not appear in /home/jenkins/minikube-integration/18233-10493/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-163608" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163608 -n old-k8s-version-163608
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163608 -n old-k8s-version-163608: exit status 6 (222.813296ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 19:26:03.430555   70397 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-163608" does not appear in /home/jenkins/minikube-integration/18233-10493/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-163608" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)
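
Here the kubectl context for the profile is simply missing from the kubeconfig, so every `kubectl --context old-k8s-version-163608 ...` call fails before it ever reaches a cluster, and the status output itself suggests `minikube update-context`. An illustrative pre-flight check (not part of the harness) that lists contexts with `kubectl config get-contexts -o name` and verifies the profile's context exists before deploying:

// contextcheck.go - an illustrative pre-flight check for the DeployApp
// failure above: confirm the kubectl context exists before running
// "kubectl --context <name> create -f ...". The profile name comes from
// the log; the check is a sketch, not the harness's own logic.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	want := "old-k8s-version-163608"

	// "kubectl config get-contexts -o name" prints one context name per line.
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		fmt.Fprintf(os.Stderr, "could not list contexts: %v\n", err)
		os.Exit(1)
	}
	for _, ctx := range strings.Fields(string(out)) {
		if ctx == want {
			fmt.Printf("context %q found; deploy can proceed\n", want)
			return
		}
	}
	// This is the state the log shows: the context is absent from kubeconfig,
	// so every kubectl call against it fails with "does not exist".
	fmt.Printf("context %q does not exist; run `minikube update-context -p %s` or restart the profile first\n", want, want)
	os.Exit(1)
}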

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (104.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-163608 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0401 19:26:03.864270   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/enable-default-cni-408543/client.crt: no such file or directory
E0401 19:26:07.419563   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/calico-408543/client.crt: no such file or directory
E0401 19:26:08.984804   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/enable-default-cni-408543/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-163608 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m44.688140242s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-163608 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-163608 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-163608 describe deploy/metrics-server -n kube-system: exit status 1 (42.710743ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-163608" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-163608 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163608 -n old-k8s-version-163608
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163608 -n old-k8s-version-163608: exit status 6 (224.790542ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 19:27:48.386395   71038 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-163608" does not appear in /home/jenkins/minikube-integration/18233-10493/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-163608" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (104.96s)
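
The addon enable itself runs `kubectl apply` on the node against localhost:8443 and gets "connection refused", meaning the apiserver for old-k8s-version-163608 never became reachable (consistent with the SecondStart failure below). What follows is an illustrative external probe of the apiserver endpoint, using the node address 192.168.50.106 and port 8443 that appear in the cluster config later in this report; it is a diagnostic sketch, not something minikube runs:

// apiserverprobe.go - an illustrative reachability probe for the
// metrics-server failure above. The addon manifests are applied against
// the apiserver, which the log shows refusing connections. The address is
// taken from the cluster config in this report (192.168.50.106:8443);
// probing it from outside the VM is an assumption, not minikube behavior.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	addr := "192.168.50.106:8443"

	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// Matches the symptom in the log: "The connection to the server ... was refused".
		fmt.Fprintf(os.Stderr, "apiserver not reachable at %s: %v\n", addr, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("apiserver port open at %s; addon enable has a chance of succeeding\n", addr)
}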

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-882095 -n embed-certs-882095
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-882095 -n embed-certs-882095: exit status 3 (3.169228508s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 19:26:37.213976   70573 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.190:22: connect: no route to host
	E0401 19:26:37.214001   70573 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.190:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-882095 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0401 19:26:39.706058   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/enable-default-cni-408543/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-882095 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151842469s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.190:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-882095 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-882095 -n embed-certs-882095
E0401 19:26:44.321475   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt: no such file or directory
E0401 19:26:44.326720   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt: no such file or directory
E0401 19:26:44.336955   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt: no such file or directory
E0401 19:26:44.357215   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt: no such file or directory
E0401 19:26:44.397455   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt: no such file or directory
E0401 19:26:44.478049   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt: no such file or directory
E0401 19:26:44.639155   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt: no such file or directory
E0401 19:26:44.959799   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt: no such file or directory
E0401 19:26:45.600629   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-882095 -n embed-certs-882095: exit status 3 (3.062545954s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 19:26:46.429959   70645 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.190:22: connect: no route to host
	E0401 19:26:46.429979   70645 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.190:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-882095" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
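
This is the same post-stop pattern as the no-preload failure earlier: both the status call and `addons enable` need an SSH session to the guest, and dialing 192.168.39.190:22 fails with "no route to host". An illustrative probe of the SSH endpoint (address copied from the log; not harness code) to confirm the guest is unreachable before trusting the "Error" state:

// sshprobe.go - an illustrative check for the failure above. The status
// command and `addons enable` both ride over SSH, and the log shows
// dial tcp 192.168.39.190:22 failing with "no route to host". A short TCP
// dial approximates what the failing NewSession calls see; this sketch is
// not part of minikube.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.190:22"

	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Printf("SSH endpoint %s unreachable: %v\n", addr, err)
		fmt.Println("minikube status will report \"Error\" rather than \"Stopped\" in this state")
		return
	}
	conn.Close()
	fmt.Printf("SSH endpoint %s reachable; status should be able to query the guest\n", addr)
}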

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-734648 -n default-k8s-diff-port-734648
E0401 19:27:25.284077   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-734648 -n default-k8s-diff-port-734648: exit status 3 (3.167639908s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 19:27:28.157971   70862 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.145:22: connect: no route to host
	E0401 19:27:28.157993   70862 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.145:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-734648 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0401 19:27:29.339826   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/calico-408543/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-734648 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153678186s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.145:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-734648 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-734648 -n default-k8s-diff-port-734648
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-734648 -n default-k8s-diff-port-734648: exit status 3 (3.062530202s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0401 19:27:37.374056   70921 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.145:22: connect: no route to host
	E0401 19:27:37.374079   70921 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.145:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-734648" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (720.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-163608 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0401 19:27:55.457863   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/client.crt: no such file or directory
E0401 19:27:59.323878   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/auto-408543/client.crt: no such file or directory
E0401 19:28:05.699026   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/client.crt: no such file or directory
E0401 19:28:06.245034   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt: no such file or directory
E0401 19:28:14.750559   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: no such file or directory
E0401 19:28:26.179673   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/client.crt: no such file or directory
E0401 19:28:27.006709   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/auto-408543/client.crt: no such file or directory
E0401 19:28:42.437879   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: no such file or directory
E0401 19:28:42.587247   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/enable-default-cni-408543/client.crt: no such file or directory
E0401 19:28:52.854221   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 19:29:07.140418   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/client.crt: no such file or directory
E0401 19:29:16.857047   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
E0401 19:29:28.166412   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt: no such file or directory
E0401 19:29:45.495727   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/calico-408543/client.crt: no such file or directory
E0401 19:30:06.173574   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/custom-flannel-408543/client.crt: no such file or directory
E0401 19:30:13.180042   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/calico-408543/client.crt: no such file or directory
E0401 19:30:29.060894   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/client.crt: no such file or directory
E0401 19:30:33.859676   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/custom-flannel-408543/client.crt: no such file or directory
E0401 19:30:39.906200   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
E0401 19:30:58.744112   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/enable-default-cni-408543/client.crt: no such file or directory
E0401 19:31:26.428462   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/enable-default-cni-408543/client.crt: no such file or directory
E0401 19:31:44.321831   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt: no such file or directory
E0401 19:32:12.006860   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt: no such file or directory
E0401 19:32:45.216656   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/client.crt: no such file or directory
E0401 19:32:59.323162   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/auto-408543/client.crt: no such file or directory
E0401 19:33:12.901287   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/client.crt: no such file or directory
E0401 19:33:14.750700   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: no such file or directory
E0401 19:33:52.854248   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 19:34:16.856271   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
E0401 19:34:45.496630   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/calico-408543/client.crt: no such file or directory
E0401 19:35:06.172649   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/custom-flannel-408543/client.crt: no such file or directory
E0401 19:35:58.743557   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/enable-default-cni-408543/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-163608 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m56.612243778s)

                                                
                                                
-- stdout --
	* [old-k8s-version-163608] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-163608" primary control-plane node in "old-k8s-version-163608" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-163608" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 19:27:52.967684   71168 out.go:291] Setting OutFile to fd 1 ...
	I0401 19:27:52.967904   71168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:27:52.967912   71168 out.go:304] Setting ErrFile to fd 2...
	I0401 19:27:52.967916   71168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:27:52.968071   71168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 19:27:52.968601   71168 out.go:298] Setting JSON to false
	I0401 19:27:52.969458   71168 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7825,"bootTime":1711991848,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:27:52.969511   71168 start.go:139] virtualization: kvm guest
	I0401 19:27:52.972337   71168 out.go:177] * [old-k8s-version-163608] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 19:27:52.973728   71168 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 19:27:52.973774   71168 notify.go:220] Checking for updates...
	I0401 19:27:52.975050   71168 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:27:52.976498   71168 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:27:52.977880   71168 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:27:52.979140   71168 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 19:27:52.980397   71168 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 19:27:52.982116   71168 config.go:182] Loaded profile config "old-k8s-version-163608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 19:27:52.982478   71168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:27:52.982569   71168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:27:52.996903   71168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44083
	I0401 19:27:52.997230   71168 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:27:52.997702   71168 main.go:141] libmachine: Using API Version  1
	I0401 19:27:52.997724   71168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:27:52.998082   71168 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:27:52.998286   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:27:53.000287   71168 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0401 19:27:53.001714   71168 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 19:27:53.001993   71168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:27:53.002030   71168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:27:53.016155   71168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0401 19:27:53.016524   71168 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:27:53.016981   71168 main.go:141] libmachine: Using API Version  1
	I0401 19:27:53.017003   71168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:27:53.017352   71168 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:27:53.017550   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:27:53.051163   71168 out.go:177] * Using the kvm2 driver based on existing profile
	I0401 19:27:53.052475   71168 start.go:297] selected driver: kvm2
	I0401 19:27:53.052488   71168 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:27:53.052621   71168 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 19:27:53.053266   71168 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:27:53.053349   71168 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 19:27:53.067629   71168 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 19:27:53.067994   71168 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:27:53.068065   71168 cni.go:84] Creating CNI manager for ""
	I0401 19:27:53.068083   71168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:27:53.068130   71168 start.go:340] cluster config:
	{Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:27:53.068640   71168 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:27:53.070506   71168 out.go:177] * Starting "old-k8s-version-163608" primary control-plane node in "old-k8s-version-163608" cluster
	I0401 19:27:53.071686   71168 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 19:27:53.071716   71168 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0401 19:27:53.071726   71168 cache.go:56] Caching tarball of preloaded images
	I0401 19:27:53.071807   71168 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 19:27:53.071818   71168 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0401 19:27:53.071904   71168 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/config.json ...
	I0401 19:27:53.072076   71168 start.go:360] acquireMachinesLock for old-k8s-version-163608: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 19:31:20.350903   71168 start.go:364] duration metric: took 3m27.278785625s to acquireMachinesLock for "old-k8s-version-163608"
	I0401 19:31:20.350993   71168 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:31:20.351010   71168 fix.go:54] fixHost starting: 
	I0401 19:31:20.351490   71168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:31:20.351571   71168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:31:20.368575   71168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38247
	I0401 19:31:20.368936   71168 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:31:20.369448   71168 main.go:141] libmachine: Using API Version  1
	I0401 19:31:20.369469   71168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:31:20.369822   71168 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:31:20.370033   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:20.370195   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetState
	I0401 19:31:20.371625   71168 fix.go:112] recreateIfNeeded on old-k8s-version-163608: state=Stopped err=<nil>
	I0401 19:31:20.371681   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	W0401 19:31:20.371842   71168 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:31:20.374328   71168 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-163608" ...
	I0401 19:31:20.375755   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .Start
	I0401 19:31:20.375932   71168 main.go:141] libmachine: (old-k8s-version-163608) Ensuring networks are active...
	I0401 19:31:20.376713   71168 main.go:141] libmachine: (old-k8s-version-163608) Ensuring network default is active
	I0401 19:31:20.377858   71168 main.go:141] libmachine: (old-k8s-version-163608) Ensuring network mk-old-k8s-version-163608 is active
	I0401 19:31:20.378278   71168 main.go:141] libmachine: (old-k8s-version-163608) Getting domain xml...
	I0401 19:31:20.378972   71168 main.go:141] libmachine: (old-k8s-version-163608) Creating domain...
	I0401 19:31:21.643237   71168 main.go:141] libmachine: (old-k8s-version-163608) Waiting to get IP...
	I0401 19:31:21.644082   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:21.644468   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:21.644535   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:21.644446   71902 retry.go:31] will retry after 208.251344ms: waiting for machine to come up
	I0401 19:31:21.854070   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:21.854545   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:21.854593   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:21.854527   71902 retry.go:31] will retry after 240.466964ms: waiting for machine to come up
	I0401 19:31:22.096940   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:22.097447   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:22.097470   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:22.097405   71902 retry.go:31] will retry after 480.217755ms: waiting for machine to come up
	I0401 19:31:22.579111   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:22.579596   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:22.579628   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:22.579518   71902 retry.go:31] will retry after 581.713487ms: waiting for machine to come up
	I0401 19:31:23.163331   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:23.163803   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:23.163838   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:23.163770   71902 retry.go:31] will retry after 737.12898ms: waiting for machine to come up
	I0401 19:31:23.902739   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:23.903192   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:23.903222   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:23.903139   71902 retry.go:31] will retry after 718.826495ms: waiting for machine to come up
	I0401 19:31:24.624169   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:24.624620   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:24.624648   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:24.624574   71902 retry.go:31] will retry after 1.020701715s: waiting for machine to come up
	I0401 19:31:25.647470   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:25.647957   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:25.647988   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:25.647921   71902 retry.go:31] will retry after 1.318891306s: waiting for machine to come up
	I0401 19:31:26.968134   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:26.968588   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:26.968613   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:26.968535   71902 retry.go:31] will retry after 1.465864517s: waiting for machine to come up
	I0401 19:31:28.435890   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:28.436304   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:28.436334   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:28.436255   71902 retry.go:31] will retry after 2.062597688s: waiting for machine to come up
	I0401 19:31:30.500523   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:30.500999   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:30.501027   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:30.500954   71902 retry.go:31] will retry after 2.068480339s: waiting for machine to come up
	I0401 19:31:32.571229   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:32.571603   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:32.571635   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:32.571550   71902 retry.go:31] will retry after 3.355965883s: waiting for machine to come up
	I0401 19:31:35.929498   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:35.930010   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:35.930042   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:35.929963   71902 retry.go:31] will retry after 3.806123644s: waiting for machine to come up
	I0401 19:31:39.739700   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.740313   71168 main.go:141] libmachine: (old-k8s-version-163608) Found IP for machine: 192.168.50.106
	I0401 19:31:39.740369   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has current primary IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.740386   71168 main.go:141] libmachine: (old-k8s-version-163608) Reserving static IP address...
	I0401 19:31:39.740767   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "old-k8s-version-163608", mac: "52:54:00:fe:1b:e7", ip: "192.168.50.106"} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.740798   71168 main.go:141] libmachine: (old-k8s-version-163608) Reserved static IP address: 192.168.50.106
	I0401 19:31:39.740818   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | skip adding static IP to network mk-old-k8s-version-163608 - found existing host DHCP lease matching {name: "old-k8s-version-163608", mac: "52:54:00:fe:1b:e7", ip: "192.168.50.106"}
	I0401 19:31:39.740839   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | Getting to WaitForSSH function...
	I0401 19:31:39.740857   71168 main.go:141] libmachine: (old-k8s-version-163608) Waiting for SSH to be available...
	I0401 19:31:39.743023   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.743417   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.743447   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.743589   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | Using SSH client type: external
	I0401 19:31:39.743614   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa (-rw-------)
	I0401 19:31:39.743648   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:31:39.743662   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | About to run SSH command:
	I0401 19:31:39.743676   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | exit 0
	I0401 19:31:39.877699   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | SSH cmd err, output: <nil>: 
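
The retry lines above (retry.go backing off between attempts) and the WaitForSSH probe that runs `exit 0` over ssh boil down to one pattern: keep probing the machine with a growing delay until a trivial SSH command succeeds. Below is a minimal, stand-alone Go sketch of that pattern; the host, key path and backoff schedule are placeholders, not minikube's actual values.

// waitssh.go: hypothetical sketch of the wait-for-SSH loop seen in the log.
package main

import (
	"log"
	"os/exec"
	"time"
)

// sshReady runs a no-op command over SSH and reports whether it succeeded.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath, "docker@"+host, "exit 0")
	return cmd.Run() == nil
}

func main() {
	host, key := "192.168.50.106", "/path/to/id_rsa" // placeholders
	backoff := time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		if sshReady(host, key) {
			log.Printf("SSH available after %d attempt(s)", attempt)
			return
		}
		log.Printf("will retry after %v: waiting for machine to come up", backoff)
		time.Sleep(backoff)
		backoff = backoff * 3 / 2 // grow the wait between attempts, as retry.go does above
	}
	log.Fatal("machine never became reachable over SSH")
}
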
	I0401 19:31:39.878044   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetConfigRaw
	I0401 19:31:39.878611   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:39.880733   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.881074   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.881107   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.881352   71168 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/config.json ...
	I0401 19:31:39.881510   71168 machine.go:94] provisionDockerMachine start ...
	I0401 19:31:39.881529   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:39.881766   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:39.883980   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.884318   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.884360   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.884483   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:39.884675   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.884877   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.885029   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:39.885175   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:39.885339   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:39.885349   71168 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:31:39.994935   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:31:39.994971   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:31:39.995213   71168 buildroot.go:166] provisioning hostname "old-k8s-version-163608"
	I0401 19:31:39.995241   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:31:39.995472   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:39.998179   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.998490   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.998525   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.998656   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:39.998805   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.998949   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.999054   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:39.999183   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:39.999372   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:39.999390   71168 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-163608 && echo "old-k8s-version-163608" | sudo tee /etc/hostname
	I0401 19:31:40.128852   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-163608
	
	I0401 19:31:40.128880   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.131508   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.131817   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.131874   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.131987   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.132188   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.132365   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.132503   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.132693   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:40.132890   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:40.132908   71168 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-163608' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-163608/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-163608' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:31:40.252693   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:31:40.252727   71168 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:31:40.252749   71168 buildroot.go:174] setting up certificates
	I0401 19:31:40.252759   71168 provision.go:84] configureAuth start
	I0401 19:31:40.252767   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:31:40.253030   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:40.255827   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.256183   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.256210   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.256418   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.259041   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.259388   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.259418   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.259540   71168 provision.go:143] copyHostCerts
	I0401 19:31:40.259592   71168 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:31:40.259602   71168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:31:40.259654   71168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:31:40.259745   71168 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:31:40.259754   71168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:31:40.259773   71168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:31:40.259822   71168 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:31:40.259830   71168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:31:40.259846   71168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:31:40.259891   71168 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-163608 san=[127.0.0.1 192.168.50.106 localhost minikube old-k8s-version-163608]
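
provision.go above generates server.pem for org jenkins.old-k8s-version-163608 with the SAN list [127.0.0.1 192.168.50.106 localhost minikube old-k8s-version-163608]. A self-contained sketch of issuing such a certificate with Go's crypto/x509 follows; it signs with a throwaway in-memory CA instead of the existing ca.pem/ca-key.pem, and error handling is elided for brevity.

// sancert.go: illustration only, not minikube's implementation.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; minikube would instead load ca.pem / ca-key.pem from .minikube/certs.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log: two IPs plus three DNS names.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-163608"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.106")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-163608"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Write the resulting certificate as server.pem in the current directory.
	f, _ := os.Create("server.pem")
	defer f.Close()
	pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
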
	I0401 19:31:40.465177   71168 provision.go:177] copyRemoteCerts
	I0401 19:31:40.465241   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:31:40.465265   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.467676   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.468040   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.468070   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.468272   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.468456   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.468622   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.468767   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:40.557764   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:31:40.585326   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0401 19:31:40.611671   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 19:31:40.639265   71168 provision.go:87] duration metric: took 386.497023ms to configureAuth
	I0401 19:31:40.639296   71168 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:31:40.639521   71168 config.go:182] Loaded profile config "old-k8s-version-163608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 19:31:40.639590   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.642321   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.642733   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.642762   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.642921   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.643122   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.643294   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.643442   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.643647   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:40.643802   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:40.643819   71168 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:31:40.940619   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:31:40.940647   71168 machine.go:97] duration metric: took 1.059122816s to provisionDockerMachine
	I0401 19:31:40.940661   71168 start.go:293] postStartSetup for "old-k8s-version-163608" (driver="kvm2")
	I0401 19:31:40.940672   71168 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:31:40.940687   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:40.940955   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:31:40.940981   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.943787   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.944159   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.944197   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.944347   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.944556   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.944700   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.944834   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:41.035824   71168 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:31:41.040975   71168 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:31:41.041007   71168 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:31:41.041085   71168 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:31:41.041165   71168 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:31:41.041255   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:31:41.052356   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:41.080699   71168 start.go:296] duration metric: took 140.024653ms for postStartSetup
	I0401 19:31:41.080737   71168 fix.go:56] duration metric: took 20.729726297s for fixHost
	I0401 19:31:41.080759   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:41.083664   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.084045   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.084075   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.084202   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:41.084405   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.084599   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.084796   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:41.084971   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:41.085169   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:41.085180   71168 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 19:31:41.203392   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999901.182365994
	
	I0401 19:31:41.203412   71168 fix.go:216] guest clock: 1711999901.182365994
	I0401 19:31:41.203419   71168 fix.go:229] Guest: 2024-04-01 19:31:41.182365994 +0000 UTC Remote: 2024-04-01 19:31:41.080741553 +0000 UTC m=+228.159955492 (delta=101.624441ms)
	I0401 19:31:41.203437   71168 fix.go:200] guest clock delta is within tolerance: 101.624441ms
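
fix.go above compares the guest's `date +%s.%N` output against the host clock and accepts the machine when the drift stays within tolerance. The toy program below recomputes the roughly 101.6ms delta from the two timestamps in the log; the 2s tolerance used here is an assumption for illustration, not necessarily minikube's threshold.

// clockdelta.go: recompute the guest clock delta reported in the log above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest timestamp as returned by `date +%s.%N` over SSH.
	guest := time.Unix(1711999901, 182365994)
	// Host-side reference time captured around the same SSH round trip.
	remote := time.Date(2024, 4, 1, 19, 31, 41, 80741553, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance, for illustration only
	fmt.Printf("guest clock delta: %v, within tolerance: %v\n", delta, delta <= tolerance)
}
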
	I0401 19:31:41.203442   71168 start.go:83] releasing machines lock for "old-k8s-version-163608", held for 20.852486097s
	I0401 19:31:41.203462   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.203744   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:41.206582   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.206952   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.206973   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.207151   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.207701   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.207891   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.207954   71168 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:31:41.207996   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:41.208096   71168 ssh_runner.go:195] Run: cat /version.json
	I0401 19:31:41.208127   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:41.210731   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.210928   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.211107   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.211132   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.211317   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:41.211446   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.211488   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.211491   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.211636   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:41.211692   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:41.211783   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:41.211891   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.212031   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:41.212187   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:41.296330   71168 ssh_runner.go:195] Run: systemctl --version
	I0401 19:31:41.326247   71168 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:31:41.479411   71168 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:31:41.486996   71168 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:31:41.487063   71168 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:31:41.507840   71168 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:31:41.507870   71168 start.go:494] detecting cgroup driver to use...
	I0401 19:31:41.507942   71168 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:31:41.533063   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:31:41.551699   71168 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:31:41.551754   71168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:31:41.568078   71168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:31:41.584278   71168 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:31:41.726884   71168 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:31:41.882514   71168 docker.go:233] disabling docker service ...
	I0401 19:31:41.882587   71168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:31:41.901235   71168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:31:41.919787   71168 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:31:42.082420   71168 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:31:42.248527   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:31:42.266610   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:31:42.295677   71168 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0401 19:31:42.295740   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.313855   71168 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:31:42.313920   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.327176   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.339527   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
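
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place over SSH: pin the pause image to registry.k8s.io/pause:3.2, switch cgroup_manager to cgroupfs, and re-add conmon_cgroup = "pod". The sketch below applies the same rewrites to an in-memory copy of a hypothetical drop-in so the effect is easy to inspect; the regexes mirror the sed patterns from the log.

// criocfg.go: illustration only; operates on a hypothetical local config string.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	// sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// sed '/conmon_cgroup = .*/d'
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	// sed '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
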
	I0401 19:31:42.351220   71168 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:31:42.363716   71168 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:31:42.379911   71168 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:31:42.379971   71168 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:31:42.395282   71168 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:31:42.407713   71168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:42.579648   71168 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:31:42.764748   71168 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:31:42.764858   71168 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:31:42.771038   71168 start.go:562] Will wait 60s for crictl version
	I0401 19:31:42.771125   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:42.775871   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:31:42.823135   71168 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:31:42.823218   71168 ssh_runner.go:195] Run: crio --version
	I0401 19:31:42.863748   71168 ssh_runner.go:195] Run: crio --version
	I0401 19:31:42.900263   71168 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0401 19:31:42.901631   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:42.904464   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:42.904773   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:42.904812   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:42.905048   71168 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0401 19:31:42.910117   71168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:42.925313   71168 kubeadm.go:877] updating cluster {Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:31:42.925475   71168 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 19:31:42.925542   71168 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:42.974220   71168 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 19:31:42.974307   71168 ssh_runner.go:195] Run: which lz4
	I0401 19:31:42.979179   71168 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 19:31:42.984204   71168 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:31:42.984236   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0401 19:31:45.108131   71168 crio.go:462] duration metric: took 2.128988098s to copy over tarball
	I0401 19:31:45.108232   71168 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 19:31:48.581824   71168 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.473552916s)
	I0401 19:31:48.581871   71168 crio.go:469] duration metric: took 3.473700991s to extract the tarball
	I0401 19:31:48.581881   71168 ssh_runner.go:146] rm: /preloaded.tar.lz4
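
Since no preloaded images were found, the ~473 MB preload tarball is copied to /preloaded.tar.lz4 and unpacked under /var before being deleted. A local-only sketch of that extraction step; it assumes the tarball already exists at that path and that sudo, tar and lz4 are available.

// preload.go: rerun the extraction command from the log locally and time it.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Printf("duration metric: took %s to extract the tarball", time.Since(start))
}
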
	I0401 19:31:48.630609   71168 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:48.673027   71168 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 19:31:48.673048   71168 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 19:31:48.673085   71168 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:31:48.673129   71168 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:48.673155   71168 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:48.673190   71168 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:48.673133   71168 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.673273   71168 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0401 19:31:48.673143   71168 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0401 19:31:48.673336   71168 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:48.675068   71168 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:31:48.675073   71168 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:48.675068   71168 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:48.675093   71168 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0401 19:31:48.675072   71168 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0401 19:31:48.675073   71168 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:48.675115   71168 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:48.675096   71168 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.827947   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.846025   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:48.848769   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:48.858366   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0401 19:31:48.858613   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0401 19:31:48.859241   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:48.862047   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:48.912299   71168 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0401 19:31:48.912346   71168 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.912399   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.030117   71168 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0401 19:31:49.030357   71168 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:49.030122   71168 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0401 19:31:49.030433   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.030460   71168 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:49.030526   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.062211   71168 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0401 19:31:49.062327   71168 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0401 19:31:49.062234   71168 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0401 19:31:49.062415   71168 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0401 19:31:49.062396   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.062461   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.078249   71168 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0401 19:31:49.078308   71168 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:49.078323   71168 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0401 19:31:49.078358   71168 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:49.078379   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:49.078398   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.078426   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:49.078440   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:49.078362   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.078466   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 19:31:49.078494   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 19:31:49.225060   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:49.225137   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0401 19:31:49.225160   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0401 19:31:49.225199   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0401 19:31:49.225250   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0401 19:31:49.225252   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:49.225326   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0401 19:31:49.280782   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0401 19:31:49.281709   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0401 19:31:49.299218   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:31:49.465497   71168 cache_images.go:92] duration metric: took 792.432136ms to LoadCachedImages
	W0401 19:31:49.465595   71168 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
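
Each "needs transfer" decision above comes from asking podman for the image ID and comparing it with the hash minikube expects; on a mismatch the stale image is removed with crictl before the cached copy is loaded. A small sketch of that check for the etcd image, with the name and expected hash copied from the log.

// imagecheck.go: compare the on-host image ID with the expected hash, remove on mismatch.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	image := "registry.k8s.io/etcd:3.4.13-0"
	want := "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934"

	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	got := strings.TrimSpace(string(out))
	if err != nil || got != want {
		log.Printf("%q needs transfer: not present at hash %s", image, want)
		if rmErr := exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run(); rmErr != nil {
			log.Printf("rmi failed: %v", rmErr)
		}
		return
	}
	log.Printf("%q already present", image)
}
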
	I0401 19:31:49.465613   71168 kubeadm.go:928] updating node { 192.168.50.106 8443 v1.20.0 crio true true} ...
	I0401 19:31:49.465768   71168 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-163608 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:31:49.465862   71168 ssh_runner.go:195] Run: crio config
	I0401 19:31:49.529730   71168 cni.go:84] Creating CNI manager for ""
	I0401 19:31:49.529757   71168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:49.529771   71168 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:31:49.529799   71168 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.106 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-163608 NodeName:old-k8s-version-163608 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0401 19:31:49.529969   71168 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-163608"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:31:49.530037   71168 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0401 19:31:49.542642   71168 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:31:49.542724   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:31:49.557001   71168 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0401 19:31:49.579568   71168 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:31:49.599692   71168 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0401 19:31:49.619780   71168 ssh_runner.go:195] Run: grep 192.168.50.106	control-plane.minikube.internal$ /etc/hosts
	I0401 19:31:49.625597   71168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:49.643862   71168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:49.791391   71168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:31:49.814470   71168 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608 for IP: 192.168.50.106
	I0401 19:31:49.814497   71168 certs.go:194] generating shared ca certs ...
	I0401 19:31:49.814516   71168 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:49.814680   71168 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:31:49.814736   71168 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:31:49.814745   71168 certs.go:256] generating profile certs ...
	I0401 19:31:49.814852   71168 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/client.key
	I0401 19:31:49.814916   71168 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.key.f2de0982
	I0401 19:31:49.814964   71168 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.key
	I0401 19:31:49.815119   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:31:49.815178   71168 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:31:49.815195   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:31:49.815224   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:31:49.815266   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:31:49.815299   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:31:49.815362   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:49.816196   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:31:49.866842   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:31:49.913788   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:31:49.953223   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:31:50.004313   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 19:31:50.046972   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:31:50.086990   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:31:50.134907   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 19:31:50.163395   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:31:50.191901   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:31:50.221196   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:31:50.253024   71168 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:31:50.275781   71168 ssh_runner.go:195] Run: openssl version
	I0401 19:31:50.282795   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:31:50.296952   71168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:31:50.303868   71168 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:31:50.303950   71168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:31:50.312249   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:31:50.328985   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:31:50.345917   71168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:50.352041   71168 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:50.352103   71168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:50.358752   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:31:50.371702   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:31:50.384633   71168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:31:50.391229   71168 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:31:50.391277   71168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:31:50.397980   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
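The `ln -fs` targets above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names: the value printed by `openssl x509 -hash -noout` becomes the `<hash>.0` filename that OpenSSL's CA lookup expects under /etc/ssl/certs. A minimal sketch of the same linking step, reusing the 17751.pem path from the log:
  h="$(openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem)"   # prints the subject hash, 51391683 for this cert per the log
  sudo ln -fs /usr/share/ca-certificates/17751.pem "/etc/ssl/certs/${h}.0"    # hash-named link OpenSSL resolves trusted CAs by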
	I0401 19:31:50.412674   71168 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:31:50.418084   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:31:50.425102   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:31:50.431949   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:31:50.438665   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:31:50.446633   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:31:50.454688   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
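The `-checkend 86400` probes above succeed only if each certificate is readable and will not expire within the next 24 hours (86400 seconds); a non-zero exit is what makes minikube regenerate the cert. A minimal sketch of the same check, using the apiserver-kubelet-client cert path from the log:
  if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
    echo "cert valid for at least the next 24h"      # exit 0: not expiring within the window
  else
    echo "cert missing or expiring within 24h"       # non-zero: would be regenerated
  fi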
	I0401 19:31:50.462805   71168 kubeadm.go:391] StartCluster: {Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:31:50.462922   71168 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:31:50.462956   71168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:50.505702   71168 cri.go:89] found id: ""
	I0401 19:31:50.505788   71168 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:31:50.517916   71168 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:31:50.517934   71168 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:31:50.517940   71168 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:31:50.517995   71168 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:31:50.529459   71168 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:31:50.530408   71168 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-163608" does not appear in /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:31:50.531055   71168 kubeconfig.go:62] /home/jenkins/minikube-integration/18233-10493/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-163608" cluster setting kubeconfig missing "old-k8s-version-163608" context setting]
	I0401 19:31:50.532369   71168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:50.534578   71168 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:31:50.546275   71168 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.106
	I0401 19:31:50.546309   71168 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:31:50.546328   71168 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:31:50.546371   71168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:50.588826   71168 cri.go:89] found id: ""
	I0401 19:31:50.588881   71168 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:31:50.610933   71168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:31:50.622201   71168 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:31:50.622221   71168 kubeadm.go:156] found existing configuration files:
	
	I0401 19:31:50.622266   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:31:50.634006   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:31:50.634071   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:31:50.647891   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:31:50.662548   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:31:50.662596   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:31:50.674627   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:31:50.686739   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:31:50.686825   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:31:50.700400   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:31:50.712952   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:31:50.713014   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:31:50.725616   71168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:31:50.739130   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:50.874552   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:51.568640   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:51.850288   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:52.009607   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
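The long run of identical `pgrep` lines that follows is the harness polling for the apiserver process roughly every 500ms; in this failure it never matches, so after about a minute the harness falls back to listing CRI containers and gathering logs. A rough shell equivalent of that poll, assuming the same process pattern as the log:
  while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
    sleep 0.5   # matches the ~500ms cadence visible in the timestamps below
  done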
	I0401 19:31:52.122887   71168 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:31:52.122962   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:52.623084   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:53.123783   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:53.623248   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:54.124004   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:54.623873   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:55.123458   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:55.623923   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:56.123441   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:56.623192   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:57.123012   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:57.624010   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:58.123200   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:58.624028   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:59.123026   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:59.623993   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:00.123039   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:00.623632   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:01.123204   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:01.623162   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:02.123264   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:02.623788   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:03.123452   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:03.623784   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:04.123649   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:04.623076   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:05.123822   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:05.623487   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:06.123635   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:06.623689   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:07.123919   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:07.623237   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:08.123689   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:08.623160   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:09.124002   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:09.623090   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:10.123049   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:10.623111   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:11.123042   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:11.623980   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:12.123074   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:12.623530   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:13.123428   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:13.623899   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:14.123324   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:14.623889   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:15.123496   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:15.623779   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:16.124012   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:16.623620   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:17.123867   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:17.623014   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:18.123795   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:18.623529   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:19.123446   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:19.623223   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:20.123133   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:20.623058   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:21.123302   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:21.623115   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:22.123810   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:22.623878   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.123507   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.623244   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:24.123444   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:24.623346   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:25.123834   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:25.623814   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:26.124028   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:26.623428   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:27.123592   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:27.623451   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:28.123454   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:28.623502   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:29.123265   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:29.623449   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:30.123525   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:30.623634   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:31.123972   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:31.623023   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:32.123346   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:32.623839   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:33.123673   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:33.623088   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:34.123230   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:34.623967   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:35.123420   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:35.623499   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:36.123152   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:36.623963   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:37.123682   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:37.623536   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:38.123238   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:38.623831   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:39.123180   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:39.623801   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:40.123478   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:40.623651   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:41.123687   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:41.624016   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:42.123891   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:42.623493   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:43.123504   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:43.623527   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:44.124016   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:44.623931   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:45.123188   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:45.623649   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:46.123570   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:46.623179   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:47.123273   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:47.623842   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:48.123759   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:48.623092   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:49.123174   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:49.623986   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:50.123301   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:50.623694   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:51.123466   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:51.623618   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:52.123073   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:32:52.123172   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:32:52.164635   71168 cri.go:89] found id: ""
	I0401 19:32:52.164656   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.164663   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:32:52.164669   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:32:52.164738   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:32:52.202531   71168 cri.go:89] found id: ""
	I0401 19:32:52.202560   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.202572   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:32:52.202580   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:32:52.202653   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:32:52.247667   71168 cri.go:89] found id: ""
	I0401 19:32:52.247693   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.247703   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:32:52.247714   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:32:52.247774   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:32:52.289029   71168 cri.go:89] found id: ""
	I0401 19:32:52.289054   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.289062   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:32:52.289068   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:32:52.289114   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:32:52.326820   71168 cri.go:89] found id: ""
	I0401 19:32:52.326864   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.326875   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:32:52.326882   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:32:52.326944   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:32:52.362793   71168 cri.go:89] found id: ""
	I0401 19:32:52.362827   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.362838   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:32:52.362845   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:32:52.362950   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:32:52.400174   71168 cri.go:89] found id: ""
	I0401 19:32:52.400204   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.400215   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:32:52.400222   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:32:52.400282   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:32:52.436027   71168 cri.go:89] found id: ""
	I0401 19:32:52.436056   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.436066   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:32:52.436085   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:32:52.436099   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:32:52.477246   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:32:52.477272   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:32:52.529215   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:32:52.529247   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:32:52.544695   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:32:52.544724   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:32:52.677816   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:32:52.677849   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:32:52.677877   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:32:55.241224   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:55.256975   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:32:55.257045   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:32:55.298280   71168 cri.go:89] found id: ""
	I0401 19:32:55.298307   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.298319   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:32:55.298326   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:32:55.298397   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:32:55.337707   71168 cri.go:89] found id: ""
	I0401 19:32:55.337732   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.337739   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:32:55.337745   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:32:55.337791   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:32:55.381455   71168 cri.go:89] found id: ""
	I0401 19:32:55.381479   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.381490   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:32:55.381496   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:32:55.381557   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:32:55.420715   71168 cri.go:89] found id: ""
	I0401 19:32:55.420739   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.420749   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:32:55.420756   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:32:55.420820   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:32:55.459546   71168 cri.go:89] found id: ""
	I0401 19:32:55.459575   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.459583   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:32:55.459588   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:32:55.459634   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:32:55.504240   71168 cri.go:89] found id: ""
	I0401 19:32:55.504267   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.504277   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:32:55.504285   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:32:55.504368   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:32:55.539399   71168 cri.go:89] found id: ""
	I0401 19:32:55.539426   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.539437   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:32:55.539443   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:32:55.539509   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:32:55.583823   71168 cri.go:89] found id: ""
	I0401 19:32:55.583861   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.583872   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:32:55.583881   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:32:55.583895   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:32:55.645489   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:32:55.645523   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:32:55.712883   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:32:55.712920   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:32:55.734890   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:32:55.734923   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:32:55.853068   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:32:55.853089   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:32:55.853102   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:32:58.435925   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:58.450910   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:32:58.450980   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:32:58.487470   71168 cri.go:89] found id: ""
	I0401 19:32:58.487495   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.487506   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:32:58.487514   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:32:58.487562   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:32:58.529513   71168 cri.go:89] found id: ""
	I0401 19:32:58.529534   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.529543   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:32:58.529547   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:32:58.529592   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:32:58.574170   71168 cri.go:89] found id: ""
	I0401 19:32:58.574197   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.574205   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:32:58.574211   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:32:58.574258   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:32:58.615379   71168 cri.go:89] found id: ""
	I0401 19:32:58.615405   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.615414   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:32:58.615419   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:32:58.615468   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:32:58.655496   71168 cri.go:89] found id: ""
	I0401 19:32:58.655523   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.655534   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:32:58.655542   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:32:58.655593   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:32:58.697199   71168 cri.go:89] found id: ""
	I0401 19:32:58.697229   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.697238   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:32:58.697246   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:32:58.697312   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:32:58.735618   71168 cri.go:89] found id: ""
	I0401 19:32:58.735643   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.735651   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:32:58.735656   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:32:58.735701   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:32:58.780583   71168 cri.go:89] found id: ""
	I0401 19:32:58.780613   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.780624   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:32:58.780635   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:32:58.780649   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:32:58.829717   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:32:58.829743   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:32:58.844836   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:32:58.844866   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:32:58.923138   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:32:58.923157   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:32:58.923172   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:32:58.993680   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:32:58.993713   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:01.538920   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:01.556943   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:01.557017   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:01.608397   71168 cri.go:89] found id: ""
	I0401 19:33:01.608417   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.608425   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:01.608430   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:01.608490   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:01.666573   71168 cri.go:89] found id: ""
	I0401 19:33:01.666599   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.666609   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:01.666615   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:01.666674   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:01.726308   71168 cri.go:89] found id: ""
	I0401 19:33:01.726331   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.726341   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:01.726347   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:01.726412   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:01.773095   71168 cri.go:89] found id: ""
	I0401 19:33:01.773118   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.773125   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:01.773131   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:01.773189   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:01.813011   71168 cri.go:89] found id: ""
	I0401 19:33:01.813034   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.813042   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:01.813048   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:01.813096   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:01.859124   71168 cri.go:89] found id: ""
	I0401 19:33:01.859151   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.859161   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:01.859169   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:01.859228   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:01.904491   71168 cri.go:89] found id: ""
	I0401 19:33:01.904519   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.904530   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:01.904537   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:01.904596   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:01.946768   71168 cri.go:89] found id: ""
	I0401 19:33:01.946794   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.946804   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:01.946815   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:01.946829   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:02.026315   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:02.026362   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:02.072861   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:02.072893   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:02.132064   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:02.132105   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:02.151545   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:02.151575   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:02.234059   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:04.734559   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:04.755071   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:04.755130   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:04.798316   71168 cri.go:89] found id: ""
	I0401 19:33:04.798345   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.798358   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:04.798366   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:04.798426   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:04.840011   71168 cri.go:89] found id: ""
	I0401 19:33:04.840032   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.840043   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:04.840050   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:04.840106   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:04.883686   71168 cri.go:89] found id: ""
	I0401 19:33:04.883713   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.883725   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:04.883733   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:04.883795   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:04.933810   71168 cri.go:89] found id: ""
	I0401 19:33:04.933844   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.933855   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:04.933863   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:04.933925   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:04.983118   71168 cri.go:89] found id: ""
	I0401 19:33:04.983139   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.983146   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:04.983151   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:04.983207   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:05.036146   71168 cri.go:89] found id: ""
	I0401 19:33:05.036169   71168 logs.go:276] 0 containers: []
	W0401 19:33:05.036179   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:05.036186   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:05.036242   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:05.086269   71168 cri.go:89] found id: ""
	I0401 19:33:05.086296   71168 logs.go:276] 0 containers: []
	W0401 19:33:05.086308   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:05.086315   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:05.086378   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:05.140893   71168 cri.go:89] found id: ""
	I0401 19:33:05.140914   71168 logs.go:276] 0 containers: []
	W0401 19:33:05.140922   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:05.140931   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:05.140946   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:05.161222   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:05.161249   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:05.262254   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:05.262276   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:05.262289   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:05.352880   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:05.352908   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:05.400720   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:05.400748   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:07.954227   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:07.970794   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:07.970850   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:08.013694   71168 cri.go:89] found id: ""
	I0401 19:33:08.013719   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.013729   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:08.013737   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:08.013810   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:08.050810   71168 cri.go:89] found id: ""
	I0401 19:33:08.050849   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.050861   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:08.050868   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:08.050932   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:08.092056   71168 cri.go:89] found id: ""
	I0401 19:33:08.092086   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.092096   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:08.092102   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:08.092157   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:08.133171   71168 cri.go:89] found id: ""
	I0401 19:33:08.133195   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.133205   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:08.133212   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:08.133271   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:08.173997   71168 cri.go:89] found id: ""
	I0401 19:33:08.174023   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.174034   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:08.174041   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:08.174102   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:08.212740   71168 cri.go:89] found id: ""
	I0401 19:33:08.212768   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.212778   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:08.212785   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:08.212831   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:08.254815   71168 cri.go:89] found id: ""
	I0401 19:33:08.254837   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.254847   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:08.254854   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:08.254909   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:08.295347   71168 cri.go:89] found id: ""
	I0401 19:33:08.295375   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.295382   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:08.295390   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:08.295402   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:08.311574   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:08.311600   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:08.405437   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:08.405455   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:08.405470   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:08.483687   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:08.483722   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:08.526132   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:08.526158   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:11.076590   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:11.093846   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:11.093983   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:11.146046   71168 cri.go:89] found id: ""
	I0401 19:33:11.146073   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.146083   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:11.146088   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:11.146146   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:11.193751   71168 cri.go:89] found id: ""
	I0401 19:33:11.193782   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.193793   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:11.193801   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:11.193873   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:11.242150   71168 cri.go:89] found id: ""
	I0401 19:33:11.242178   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.242189   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:11.242197   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:11.242271   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:11.294063   71168 cri.go:89] found id: ""
	I0401 19:33:11.294092   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.294103   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:11.294110   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:11.294175   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:11.334764   71168 cri.go:89] found id: ""
	I0401 19:33:11.334784   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.334791   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:11.334797   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:11.334846   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:11.372770   71168 cri.go:89] found id: ""
	I0401 19:33:11.372789   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.372795   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:11.372806   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:11.372871   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:11.413233   71168 cri.go:89] found id: ""
	I0401 19:33:11.413261   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.413271   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:11.413278   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:11.413337   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:11.456044   71168 cri.go:89] found id: ""
	I0401 19:33:11.456073   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.456084   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:11.456093   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:11.456103   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:11.471157   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:11.471183   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:11.550489   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:11.550508   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:11.550523   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:11.635360   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:11.635389   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:11.680683   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:11.680713   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:14.235295   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:14.251513   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:14.251590   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:14.291688   71168 cri.go:89] found id: ""
	I0401 19:33:14.291715   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.291725   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:14.291732   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:14.291792   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:14.332030   71168 cri.go:89] found id: ""
	I0401 19:33:14.332051   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.332060   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:14.332068   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:14.332132   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:14.372098   71168 cri.go:89] found id: ""
	I0401 19:33:14.372122   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.372130   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:14.372137   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:14.372183   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:14.410529   71168 cri.go:89] found id: ""
	I0401 19:33:14.410554   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.410563   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:14.410570   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:14.410624   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:14.451198   71168 cri.go:89] found id: ""
	I0401 19:33:14.451226   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.451238   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:14.451246   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:14.451306   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:14.494588   71168 cri.go:89] found id: ""
	I0401 19:33:14.494616   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.494627   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:14.494635   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:14.494689   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:14.537561   71168 cri.go:89] found id: ""
	I0401 19:33:14.537583   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.537590   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:14.537597   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:14.537674   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:14.580624   71168 cri.go:89] found id: ""
	I0401 19:33:14.580651   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.580662   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:14.580672   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:14.580688   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:14.635769   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:14.635798   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:14.650275   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:14.650304   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:14.742355   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:14.742378   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:14.742394   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:14.827839   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:14.827869   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:17.373408   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:17.390110   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:17.390185   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:17.432355   71168 cri.go:89] found id: ""
	I0401 19:33:17.432384   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.432396   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:17.432409   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:17.432471   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:17.476458   71168 cri.go:89] found id: ""
	I0401 19:33:17.476484   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.476495   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:17.476502   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:17.476587   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:17.519657   71168 cri.go:89] found id: ""
	I0401 19:33:17.519686   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.519694   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:17.519699   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:17.519751   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:17.559962   71168 cri.go:89] found id: ""
	I0401 19:33:17.559985   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.559992   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:17.559997   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:17.560054   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:17.608924   71168 cri.go:89] found id: ""
	I0401 19:33:17.608995   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.609009   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:17.609016   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:17.609075   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:17.648371   71168 cri.go:89] found id: ""
	I0401 19:33:17.648394   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.648401   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:17.648406   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:17.648462   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:17.689217   71168 cri.go:89] found id: ""
	I0401 19:33:17.689239   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.689246   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:17.689252   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:17.689312   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:17.741738   71168 cri.go:89] found id: ""
	I0401 19:33:17.741768   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.741779   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:17.741790   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:17.741805   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:17.839857   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:17.839887   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:17.888684   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:17.888716   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:17.944268   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:17.944298   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:17.959305   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:17.959334   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:18.040820   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:20.541980   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:20.558198   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:20.558270   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:20.596329   71168 cri.go:89] found id: ""
	I0401 19:33:20.596357   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.596366   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:20.596373   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:20.596431   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:20.638611   71168 cri.go:89] found id: ""
	I0401 19:33:20.638639   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.638664   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:20.638672   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:20.638729   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:20.677984   71168 cri.go:89] found id: ""
	I0401 19:33:20.678014   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.678024   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:20.678032   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:20.678080   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:20.718491   71168 cri.go:89] found id: ""
	I0401 19:33:20.718520   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.718530   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:20.718537   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:20.718597   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:20.772147   71168 cri.go:89] found id: ""
	I0401 19:33:20.772174   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.772185   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:20.772199   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:20.772258   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:20.823339   71168 cri.go:89] found id: ""
	I0401 19:33:20.823361   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.823372   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:20.823380   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:20.823463   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:20.884081   71168 cri.go:89] found id: ""
	I0401 19:33:20.884106   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.884117   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:20.884124   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:20.884185   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:20.931679   71168 cri.go:89] found id: ""
	I0401 19:33:20.931703   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.931713   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:20.931722   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:20.931736   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:21.016766   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:21.016797   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:21.067600   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:21.067632   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:21.136989   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:21.137045   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:21.152673   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:21.152706   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:21.250186   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:23.750565   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:23.768458   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:23.768534   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:23.814489   71168 cri.go:89] found id: ""
	I0401 19:33:23.814534   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.814555   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:23.814565   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:23.814632   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:23.854954   71168 cri.go:89] found id: ""
	I0401 19:33:23.854981   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.854989   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:23.854995   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:23.855060   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:23.896115   71168 cri.go:89] found id: ""
	I0401 19:33:23.896148   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.896159   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:23.896169   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:23.896231   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:23.941300   71168 cri.go:89] found id: ""
	I0401 19:33:23.941324   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.941337   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:23.941344   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:23.941390   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:23.983955   71168 cri.go:89] found id: ""
	I0401 19:33:23.983982   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.983991   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:23.983997   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:23.984056   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:24.020756   71168 cri.go:89] found id: ""
	I0401 19:33:24.020777   71168 logs.go:276] 0 containers: []
	W0401 19:33:24.020784   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:24.020789   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:24.020835   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:24.063426   71168 cri.go:89] found id: ""
	I0401 19:33:24.063454   71168 logs.go:276] 0 containers: []
	W0401 19:33:24.063462   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:24.063467   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:24.063529   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:24.110924   71168 cri.go:89] found id: ""
	I0401 19:33:24.110945   71168 logs.go:276] 0 containers: []
	W0401 19:33:24.110952   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:24.110960   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:24.110969   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:24.179200   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:24.179240   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:24.194880   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:24.194909   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:24.280555   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:24.280588   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:24.280603   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:24.359502   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:24.359534   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:26.909147   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:26.925961   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:26.926028   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:26.969502   71168 cri.go:89] found id: ""
	I0401 19:33:26.969525   71168 logs.go:276] 0 containers: []
	W0401 19:33:26.969536   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:26.969543   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:26.969604   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:27.015205   71168 cri.go:89] found id: ""
	I0401 19:33:27.015232   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.015241   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:27.015246   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:27.015296   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:27.055943   71168 cri.go:89] found id: ""
	I0401 19:33:27.055968   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.055977   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:27.055983   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:27.056039   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:27.095447   71168 cri.go:89] found id: ""
	I0401 19:33:27.095474   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.095485   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:27.095497   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:27.095558   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:27.137912   71168 cri.go:89] found id: ""
	I0401 19:33:27.137941   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.137948   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:27.137954   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:27.138008   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:27.183303   71168 cri.go:89] found id: ""
	I0401 19:33:27.183325   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.183335   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:27.183344   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:27.183403   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:27.225780   71168 cri.go:89] found id: ""
	I0401 19:33:27.225804   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.225814   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:27.225822   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:27.225880   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:27.268136   71168 cri.go:89] found id: ""
	I0401 19:33:27.268159   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.268168   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:27.268191   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:27.268215   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:27.325527   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:27.325557   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:27.341727   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:27.341763   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:27.432369   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:27.432389   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:27.432403   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:27.523104   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:27.523135   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:30.066147   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:30.079999   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:30.080062   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:30.121887   71168 cri.go:89] found id: ""
	I0401 19:33:30.121911   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.121920   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:30.121929   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:30.121986   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:30.163939   71168 cri.go:89] found id: ""
	I0401 19:33:30.163967   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.163978   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:30.163986   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:30.164051   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:30.203924   71168 cri.go:89] found id: ""
	I0401 19:33:30.203965   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.203977   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:30.203985   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:30.204048   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:30.243771   71168 cri.go:89] found id: ""
	I0401 19:33:30.243798   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.243809   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:30.243816   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:30.243888   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:30.284039   71168 cri.go:89] found id: ""
	I0401 19:33:30.284066   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.284074   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:30.284079   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:30.284127   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:30.327549   71168 cri.go:89] found id: ""
	I0401 19:33:30.327570   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.327577   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:30.327583   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:30.327630   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:30.365258   71168 cri.go:89] found id: ""
	I0401 19:33:30.365281   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.365291   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:30.365297   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:30.365352   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:30.405959   71168 cri.go:89] found id: ""
	I0401 19:33:30.405984   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.405992   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:30.405999   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:30.406011   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:30.480668   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:30.480692   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:30.480706   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:30.566042   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:30.566077   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:30.629250   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:30.629285   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:30.682185   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:30.682213   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:33.199466   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:33.213557   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:33.213630   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:33.255038   71168 cri.go:89] found id: ""
	I0401 19:33:33.255062   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.255072   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:33.255079   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:33.255143   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:33.297724   71168 cri.go:89] found id: ""
	I0401 19:33:33.297751   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.297761   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:33.297767   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:33.297836   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:33.340694   71168 cri.go:89] found id: ""
	I0401 19:33:33.340718   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.340727   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:33.340735   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:33.340794   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:33.388857   71168 cri.go:89] found id: ""
	I0401 19:33:33.388883   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.388891   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:33.388896   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:33.388940   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:33.430875   71168 cri.go:89] found id: ""
	I0401 19:33:33.430899   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.430906   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:33.430911   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:33.430966   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:33.479877   71168 cri.go:89] found id: ""
	I0401 19:33:33.479905   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.479917   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:33.479923   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:33.479968   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:33.522635   71168 cri.go:89] found id: ""
	I0401 19:33:33.522662   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.522672   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:33.522680   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:33.522737   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:33.560497   71168 cri.go:89] found id: ""
	I0401 19:33:33.560519   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.560527   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:33.560534   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:33.560549   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:33.612141   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:33.612170   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:33.665142   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:33.665170   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:33.681076   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:33.681100   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:33.755938   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:33.755966   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:33.755983   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:36.341957   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:36.359519   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:36.359586   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:36.416339   71168 cri.go:89] found id: ""
	I0401 19:33:36.416362   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.416373   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:36.416381   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:36.416442   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:36.473883   71168 cri.go:89] found id: ""
	I0401 19:33:36.473906   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.473918   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:36.473925   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:36.473988   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:36.521532   71168 cri.go:89] found id: ""
	I0401 19:33:36.521558   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.521568   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:36.521575   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:36.521639   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:36.563420   71168 cri.go:89] found id: ""
	I0401 19:33:36.563446   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.563454   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:36.563459   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:36.563520   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:36.605658   71168 cri.go:89] found id: ""
	I0401 19:33:36.605678   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.605689   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:36.605697   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:36.605759   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:36.645611   71168 cri.go:89] found id: ""
	I0401 19:33:36.645631   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.645638   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:36.645656   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:36.645715   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:36.685994   71168 cri.go:89] found id: ""
	I0401 19:33:36.686022   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.686033   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:36.686041   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:36.686099   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:36.725573   71168 cri.go:89] found id: ""
	I0401 19:33:36.725598   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.725608   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:36.725618   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:36.725630   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:36.778854   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:36.778885   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:36.795003   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:36.795036   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:36.872648   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:36.872666   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:36.872678   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:36.956648   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:36.956683   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:39.502868   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:39.519090   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:39.519161   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:39.562347   71168 cri.go:89] found id: ""
	I0401 19:33:39.562371   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.562379   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:39.562384   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:39.562442   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:39.607250   71168 cri.go:89] found id: ""
	I0401 19:33:39.607276   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.607286   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:39.607293   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:39.607343   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:39.650683   71168 cri.go:89] found id: ""
	I0401 19:33:39.650704   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.650712   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:39.650717   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:39.650764   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:39.694676   71168 cri.go:89] found id: ""
	I0401 19:33:39.694706   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.694718   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:39.694724   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:39.694783   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:39.733873   71168 cri.go:89] found id: ""
	I0401 19:33:39.733901   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.733911   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:39.733919   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:39.733980   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:39.773625   71168 cri.go:89] found id: ""
	I0401 19:33:39.773668   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.773679   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:39.773686   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:39.773735   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:39.815020   71168 cri.go:89] found id: ""
	I0401 19:33:39.815053   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.815064   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:39.815071   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:39.815134   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:39.855575   71168 cri.go:89] found id: ""
	I0401 19:33:39.855606   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.855615   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:39.855626   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:39.855641   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:39.873827   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:39.873857   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:39.948487   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:39.948507   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:39.948521   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:40.034026   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:40.034062   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:40.077798   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:40.077828   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:42.637999   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:42.654991   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:42.655063   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:42.695920   71168 cri.go:89] found id: ""
	I0401 19:33:42.695953   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.695964   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:42.695971   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:42.696030   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:42.737303   71168 cri.go:89] found id: ""
	I0401 19:33:42.737325   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.737333   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:42.737341   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:42.737393   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:42.777922   71168 cri.go:89] found id: ""
	I0401 19:33:42.777953   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.777965   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:42.777972   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:42.778036   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:42.818339   71168 cri.go:89] found id: ""
	I0401 19:33:42.818364   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.818372   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:42.818379   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:42.818435   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:42.859470   71168 cri.go:89] found id: ""
	I0401 19:33:42.859494   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.859502   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:42.859507   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:42.859556   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:42.901950   71168 cri.go:89] found id: ""
	I0401 19:33:42.901980   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.901989   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:42.901996   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:42.902063   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:42.947230   71168 cri.go:89] found id: ""
	I0401 19:33:42.947258   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.947268   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:42.947275   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:42.947351   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:42.988997   71168 cri.go:89] found id: ""
	I0401 19:33:42.989022   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.989032   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:42.989049   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:42.989066   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:43.075323   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:43.075352   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:43.075363   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:43.164445   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:43.164479   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:43.215852   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:43.215885   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:43.271301   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:43.271334   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:45.786705   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:45.804389   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:45.804445   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:45.849838   71168 cri.go:89] found id: ""
	I0401 19:33:45.849872   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.849883   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:45.849891   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:45.849950   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:45.890603   71168 cri.go:89] found id: ""
	I0401 19:33:45.890625   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.890635   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:45.890642   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:45.890703   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:45.929189   71168 cri.go:89] found id: ""
	I0401 19:33:45.929210   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.929218   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:45.929223   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:45.929268   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:45.968266   71168 cri.go:89] found id: ""
	I0401 19:33:45.968292   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.968303   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:45.968310   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:45.968365   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:46.007114   71168 cri.go:89] found id: ""
	I0401 19:33:46.007135   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.007143   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:46.007148   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:46.007195   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:46.046067   71168 cri.go:89] found id: ""
	I0401 19:33:46.046088   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.046095   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:46.046101   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:46.046186   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:46.083604   71168 cri.go:89] found id: ""
	I0401 19:33:46.083630   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.083644   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:46.083651   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:46.083709   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:46.125435   71168 cri.go:89] found id: ""
	I0401 19:33:46.125457   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.125464   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:46.125472   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:46.125483   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:46.179060   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:46.179092   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:46.195139   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:46.195179   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:46.275876   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:46.275903   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:46.275914   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:46.365430   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:46.365465   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:48.908390   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:48.924357   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:48.924416   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:48.969325   71168 cri.go:89] found id: ""
	I0401 19:33:48.969351   71168 logs.go:276] 0 containers: []
	W0401 19:33:48.969359   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:48.969364   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:48.969421   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:49.006702   71168 cri.go:89] found id: ""
	I0401 19:33:49.006724   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.006731   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:49.006736   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:49.006785   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:49.051196   71168 cri.go:89] found id: ""
	I0401 19:33:49.051229   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.051241   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:49.051260   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:49.051336   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:49.098123   71168 cri.go:89] found id: ""
	I0401 19:33:49.098150   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.098159   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:49.098166   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:49.098225   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:49.138203   71168 cri.go:89] found id: ""
	I0401 19:33:49.138232   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.138239   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:49.138244   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:49.138290   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:49.185441   71168 cri.go:89] found id: ""
	I0401 19:33:49.185465   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.185473   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:49.185478   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:49.185537   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:49.235649   71168 cri.go:89] found id: ""
	I0401 19:33:49.235670   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.235678   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:49.235683   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:49.235762   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:49.279638   71168 cri.go:89] found id: ""
	I0401 19:33:49.279662   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.279673   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:49.279683   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:49.279699   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:49.340761   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:49.340798   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:49.356552   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:49.356581   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:49.441110   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:49.441129   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:49.441140   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:49.523159   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:49.523189   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:52.067710   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:52.082986   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:52.083046   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:52.128510   71168 cri.go:89] found id: ""
	I0401 19:33:52.128531   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.128538   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:52.128543   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:52.128590   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:52.167767   71168 cri.go:89] found id: ""
	I0401 19:33:52.167792   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.167803   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:52.167810   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:52.167871   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:52.206384   71168 cri.go:89] found id: ""
	I0401 19:33:52.206416   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.206426   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:52.206433   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:52.206493   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:52.245277   71168 cri.go:89] found id: ""
	I0401 19:33:52.245301   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.245309   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:52.245318   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:52.245388   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:52.283925   71168 cri.go:89] found id: ""
	I0401 19:33:52.283954   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.283964   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:52.283971   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:52.284032   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:52.323944   71168 cri.go:89] found id: ""
	I0401 19:33:52.323970   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.323981   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:52.323988   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:52.324045   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:52.364853   71168 cri.go:89] found id: ""
	I0401 19:33:52.364882   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.364893   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:52.364901   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:52.364958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:52.404136   71168 cri.go:89] found id: ""
	I0401 19:33:52.404158   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.404165   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:52.404173   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:52.404184   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:52.459097   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:52.459129   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:52.474392   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:52.474417   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:52.551817   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:52.551843   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:52.551860   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:52.650710   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:52.650750   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:55.205689   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:55.222840   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:55.222901   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:55.263783   71168 cri.go:89] found id: ""
	I0401 19:33:55.263813   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.263820   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:55.263828   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:55.263883   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:55.300788   71168 cri.go:89] found id: ""
	I0401 19:33:55.300818   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.300826   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:55.300834   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:55.300888   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:55.343189   71168 cri.go:89] found id: ""
	I0401 19:33:55.343215   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.343223   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:55.343229   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:55.343286   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:55.387560   71168 cri.go:89] found id: ""
	I0401 19:33:55.387587   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.387597   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:55.387604   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:55.387663   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:55.428078   71168 cri.go:89] found id: ""
	I0401 19:33:55.428103   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.428112   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:55.428119   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:55.428181   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:55.472696   71168 cri.go:89] found id: ""
	I0401 19:33:55.472722   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.472734   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:55.472741   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:55.472797   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:55.518071   71168 cri.go:89] found id: ""
	I0401 19:33:55.518115   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.518126   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:55.518136   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:55.518201   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:55.555697   71168 cri.go:89] found id: ""
	I0401 19:33:55.555717   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.555724   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:55.555732   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:55.555747   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:55.637462   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:55.637492   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:55.682353   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:55.682380   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:55.735451   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:55.735484   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:55.750928   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:55.750954   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:55.824610   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:58.325742   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:58.341022   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:58.341092   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:58.380910   71168 cri.go:89] found id: ""
	I0401 19:33:58.380932   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.380940   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:58.380946   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:58.380990   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:58.420387   71168 cri.go:89] found id: ""
	I0401 19:33:58.420413   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.420425   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:58.420431   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:58.420479   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:58.460470   71168 cri.go:89] found id: ""
	I0401 19:33:58.460501   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.460511   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:58.460520   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:58.460580   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:58.496844   71168 cri.go:89] found id: ""
	I0401 19:33:58.496867   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.496875   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:58.496881   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:58.496930   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:58.535883   71168 cri.go:89] found id: ""
	I0401 19:33:58.535905   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.535915   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:58.535922   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:58.535979   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:58.576833   71168 cri.go:89] found id: ""
	I0401 19:33:58.576855   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.576863   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:58.576869   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:58.576913   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:58.615057   71168 cri.go:89] found id: ""
	I0401 19:33:58.615081   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.615091   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:58.615098   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:58.615156   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:58.657982   71168 cri.go:89] found id: ""
	I0401 19:33:58.658008   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.658018   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:58.658028   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:58.658045   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:58.734579   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:58.734601   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:58.734616   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:58.821779   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:58.821819   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:58.894470   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:58.894506   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:58.949854   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:58.949884   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:01.465820   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:01.481929   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:01.481984   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:01.525371   71168 cri.go:89] found id: ""
	I0401 19:34:01.525397   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.525407   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:01.525415   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:01.525473   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:01.571106   71168 cri.go:89] found id: ""
	I0401 19:34:01.571136   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.571146   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:01.571153   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:01.571214   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:01.617666   71168 cri.go:89] found id: ""
	I0401 19:34:01.617705   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.617717   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:01.617725   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:01.617787   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:01.655286   71168 cri.go:89] found id: ""
	I0401 19:34:01.655311   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.655321   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:01.655328   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:01.655396   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:01.694911   71168 cri.go:89] found id: ""
	I0401 19:34:01.694940   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.694950   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:01.694957   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:01.695040   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:01.734970   71168 cri.go:89] found id: ""
	I0401 19:34:01.734996   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.735007   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:01.735014   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:01.735071   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:01.778846   71168 cri.go:89] found id: ""
	I0401 19:34:01.778871   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.778879   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:01.778885   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:01.778958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:01.821934   71168 cri.go:89] found id: ""
	I0401 19:34:01.821964   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.821975   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:01.821986   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:01.822002   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:01.880123   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:01.880155   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:01.895178   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:01.895200   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:01.972248   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:01.972275   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:01.972290   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:02.056663   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:02.056694   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:04.603745   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:04.619269   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:04.619344   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:04.658089   71168 cri.go:89] found id: ""
	I0401 19:34:04.658111   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.658118   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:04.658123   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:04.658168   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:04.700596   71168 cri.go:89] found id: ""
	I0401 19:34:04.700622   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.700634   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:04.700641   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:04.700708   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:04.744960   71168 cri.go:89] found id: ""
	I0401 19:34:04.744990   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.744999   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:04.745004   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:04.745052   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:04.788239   71168 cri.go:89] found id: ""
	I0401 19:34:04.788264   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.788272   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:04.788278   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:04.788343   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:04.830788   71168 cri.go:89] found id: ""
	I0401 19:34:04.830812   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.830850   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:04.830859   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:04.830917   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:04.889784   71168 cri.go:89] found id: ""
	I0401 19:34:04.889815   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.889826   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:04.889834   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:04.889902   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:04.931969   71168 cri.go:89] found id: ""
	I0401 19:34:04.931996   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.932004   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:04.932010   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:04.932058   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:04.975668   71168 cri.go:89] found id: ""
	I0401 19:34:04.975689   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.975696   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:04.975704   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:04.975715   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:05.032212   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:05.032246   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:05.047900   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:05.047924   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:05.132371   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:05.132394   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:05.132408   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:05.222591   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:05.222623   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:07.767686   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:07.784473   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:07.784542   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:07.828460   71168 cri.go:89] found id: ""
	I0401 19:34:07.828487   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.828498   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:07.828505   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:07.828564   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:07.872760   71168 cri.go:89] found id: ""
	I0401 19:34:07.872786   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.872797   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:07.872804   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:07.872862   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:07.914241   71168 cri.go:89] found id: ""
	I0401 19:34:07.914263   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.914271   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:07.914276   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:07.914340   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:07.953757   71168 cri.go:89] found id: ""
	I0401 19:34:07.953784   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.953795   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:07.953803   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:07.953869   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:07.994382   71168 cri.go:89] found id: ""
	I0401 19:34:07.994401   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.994409   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:07.994414   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:07.994459   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:08.038178   71168 cri.go:89] found id: ""
	I0401 19:34:08.038202   71168 logs.go:276] 0 containers: []
	W0401 19:34:08.038213   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:08.038220   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:08.038282   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:08.077532   71168 cri.go:89] found id: ""
	I0401 19:34:08.077562   71168 logs.go:276] 0 containers: []
	W0401 19:34:08.077573   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:08.077580   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:08.077657   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:08.119825   71168 cri.go:89] found id: ""
	I0401 19:34:08.119845   71168 logs.go:276] 0 containers: []
	W0401 19:34:08.119855   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:08.119865   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:08.119878   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:08.207688   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:08.207724   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:08.253050   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:08.253085   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:08.309119   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:08.309152   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:08.325675   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:08.325704   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:08.410877   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:10.911211   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:10.925590   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:10.925657   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:10.964180   71168 cri.go:89] found id: ""
	I0401 19:34:10.964205   71168 logs.go:276] 0 containers: []
	W0401 19:34:10.964216   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:10.964224   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:10.964273   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:11.004492   71168 cri.go:89] found id: ""
	I0401 19:34:11.004515   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.004526   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:11.004533   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:11.004588   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:11.048771   71168 cri.go:89] found id: ""
	I0401 19:34:11.048792   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.048804   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:11.048810   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:11.048861   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:11.084956   71168 cri.go:89] found id: ""
	I0401 19:34:11.084982   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.084992   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:11.084999   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:11.085043   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:11.128194   71168 cri.go:89] found id: ""
	I0401 19:34:11.128218   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.128225   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:11.128230   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:11.128274   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:11.169884   71168 cri.go:89] found id: ""
	I0401 19:34:11.169908   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.169918   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:11.169925   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:11.169988   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:11.213032   71168 cri.go:89] found id: ""
	I0401 19:34:11.213066   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.213077   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:11.213084   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:11.213149   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:11.258391   71168 cri.go:89] found id: ""
	I0401 19:34:11.258414   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.258422   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:11.258429   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:11.258445   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:11.341297   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:11.341328   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:11.388628   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:11.388659   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:11.442300   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:11.442326   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:11.457531   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:11.457561   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:11.561556   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:14.062670   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:14.077384   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:14.077449   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:14.119421   71168 cri.go:89] found id: ""
	I0401 19:34:14.119444   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.119455   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:14.119462   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:14.119518   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:14.158762   71168 cri.go:89] found id: ""
	I0401 19:34:14.158783   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.158798   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:14.158805   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:14.158867   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:14.197024   71168 cri.go:89] found id: ""
	I0401 19:34:14.197052   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.197060   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:14.197065   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:14.197115   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:14.235976   71168 cri.go:89] found id: ""
	I0401 19:34:14.236004   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.236015   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:14.236021   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:14.236085   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:14.280596   71168 cri.go:89] found id: ""
	I0401 19:34:14.280623   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.280635   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:14.280642   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:14.280703   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:14.322196   71168 cri.go:89] found id: ""
	I0401 19:34:14.322219   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.322230   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:14.322239   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:14.322298   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:14.364572   71168 cri.go:89] found id: ""
	I0401 19:34:14.364596   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.364607   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:14.364615   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:14.364662   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:14.406043   71168 cri.go:89] found id: ""
	I0401 19:34:14.406066   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.406072   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:14.406082   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:14.406097   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:14.461841   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:14.461870   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:14.479960   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:14.479990   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:14.557039   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:14.557058   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:14.557070   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:14.641945   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:14.641975   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:17.192681   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:17.207913   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:17.207964   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:17.245596   71168 cri.go:89] found id: ""
	I0401 19:34:17.245618   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.245625   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:17.245630   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:17.245701   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:17.310845   71168 cri.go:89] found id: ""
	I0401 19:34:17.310875   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.310887   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:17.310894   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:17.310958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:17.367726   71168 cri.go:89] found id: ""
	I0401 19:34:17.367753   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.367764   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:17.367770   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:17.367833   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:17.410807   71168 cri.go:89] found id: ""
	I0401 19:34:17.410834   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.410842   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:17.410847   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:17.410892   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:17.448242   71168 cri.go:89] found id: ""
	I0401 19:34:17.448268   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.448278   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:17.448285   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:17.448337   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:17.486552   71168 cri.go:89] found id: ""
	I0401 19:34:17.486580   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.486590   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:17.486595   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:17.486644   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:17.529947   71168 cri.go:89] found id: ""
	I0401 19:34:17.529975   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.529986   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:17.529993   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:17.530052   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:17.571617   71168 cri.go:89] found id: ""
	I0401 19:34:17.571640   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.571648   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:17.571656   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:17.571673   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:17.627326   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:17.627354   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:17.643409   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:17.643431   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:17.723772   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:17.723798   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:17.723811   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:17.803383   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:17.803414   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:20.348949   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:20.363311   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:20.363385   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:20.401558   71168 cri.go:89] found id: ""
	I0401 19:34:20.401585   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.401595   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:20.401603   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:20.401686   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:20.445979   71168 cri.go:89] found id: ""
	I0401 19:34:20.446004   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.446011   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:20.446016   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:20.446060   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:20.487819   71168 cri.go:89] found id: ""
	I0401 19:34:20.487844   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.487854   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:20.487862   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:20.487921   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:20.532107   71168 cri.go:89] found id: ""
	I0401 19:34:20.532131   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.532154   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:20.532186   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:20.532247   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:20.577727   71168 cri.go:89] found id: ""
	I0401 19:34:20.577749   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.577756   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:20.577762   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:20.577841   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:20.616774   71168 cri.go:89] found id: ""
	I0401 19:34:20.616805   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.616816   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:20.616824   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:20.616887   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:20.656122   71168 cri.go:89] found id: ""
	I0401 19:34:20.656150   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.656160   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:20.656167   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:20.656226   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:20.701249   71168 cri.go:89] found id: ""
	I0401 19:34:20.701274   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.701285   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:20.701295   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:20.701310   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:20.746979   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:20.747003   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:20.799197   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:20.799226   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:20.815771   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:20.815808   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:20.895179   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:20.895202   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:20.895218   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:23.481911   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:23.496820   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:23.496889   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:23.538292   71168 cri.go:89] found id: ""
	I0401 19:34:23.538314   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.538322   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:23.538327   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:23.538372   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:23.579171   71168 cri.go:89] found id: ""
	I0401 19:34:23.579200   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.579209   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:23.579214   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:23.579269   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:23.620377   71168 cri.go:89] found id: ""
	I0401 19:34:23.620399   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.620410   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:23.620417   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:23.620477   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:23.663309   71168 cri.go:89] found id: ""
	I0401 19:34:23.663329   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.663337   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:23.663342   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:23.663392   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:23.702724   71168 cri.go:89] found id: ""
	I0401 19:34:23.702755   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.702772   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:23.702778   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:23.702836   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:23.742797   71168 cri.go:89] found id: ""
	I0401 19:34:23.742827   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.742837   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:23.742845   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:23.742913   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:23.781299   71168 cri.go:89] found id: ""
	I0401 19:34:23.781350   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.781367   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:23.781375   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:23.781440   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:23.828244   71168 cri.go:89] found id: ""
	I0401 19:34:23.828270   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.828277   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:23.828284   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:23.828298   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:23.914758   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:23.914782   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:23.914797   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:23.993300   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:23.993332   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:24.037388   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:24.037424   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:24.090157   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:24.090198   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:26.609062   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:26.624241   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:26.624309   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:26.665813   71168 cri.go:89] found id: ""
	I0401 19:34:26.665840   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.665848   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:26.665857   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:26.665917   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:26.709571   71168 cri.go:89] found id: ""
	I0401 19:34:26.709593   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.709600   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:26.709606   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:26.709680   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:26.757286   71168 cri.go:89] found id: ""
	I0401 19:34:26.757309   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.757319   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:26.757325   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:26.757386   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:26.795715   71168 cri.go:89] found id: ""
	I0401 19:34:26.795768   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.795781   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:26.795788   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:26.795839   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:26.835985   71168 cri.go:89] found id: ""
	I0401 19:34:26.836011   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.836022   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:26.836029   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:26.836094   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:26.878890   71168 cri.go:89] found id: ""
	I0401 19:34:26.878918   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.878929   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:26.878936   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:26.878991   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:26.920161   71168 cri.go:89] found id: ""
	I0401 19:34:26.920189   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.920199   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:26.920206   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:26.920262   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:26.961597   71168 cri.go:89] found id: ""
	I0401 19:34:26.961626   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.961637   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:26.961663   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:26.961679   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:27.019814   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:27.019847   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:27.035535   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:27.035564   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:27.111755   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:27.111776   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:27.111790   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:27.194932   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:27.194964   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:29.738592   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:29.752851   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:29.752913   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:29.791808   71168 cri.go:89] found id: ""
	I0401 19:34:29.791863   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.791875   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:29.791883   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:29.791944   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:29.836113   71168 cri.go:89] found id: ""
	I0401 19:34:29.836132   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.836139   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:29.836144   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:29.836200   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:29.879005   71168 cri.go:89] found id: ""
	I0401 19:34:29.879039   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.879050   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:29.879059   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:29.879122   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:29.919349   71168 cri.go:89] found id: ""
	I0401 19:34:29.919383   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.919394   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:29.919400   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:29.919454   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:29.957252   71168 cri.go:89] found id: ""
	I0401 19:34:29.957275   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.957287   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:29.957294   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:29.957354   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:30.003220   71168 cri.go:89] found id: ""
	I0401 19:34:30.003245   71168 logs.go:276] 0 containers: []
	W0401 19:34:30.003256   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:30.003263   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:30.003311   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:30.043873   71168 cri.go:89] found id: ""
	I0401 19:34:30.043900   71168 logs.go:276] 0 containers: []
	W0401 19:34:30.043921   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:30.043928   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:30.043989   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:30.082215   71168 cri.go:89] found id: ""
	I0401 19:34:30.082242   71168 logs.go:276] 0 containers: []
	W0401 19:34:30.082253   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:30.082263   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:30.082277   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:30.098676   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:30.098701   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:30.180857   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:30.180879   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:30.180897   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:30.269982   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:30.270016   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:30.317933   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:30.317967   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:32.874312   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:32.888687   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:32.888742   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:32.926222   71168 cri.go:89] found id: ""
	I0401 19:34:32.926244   71168 logs.go:276] 0 containers: []
	W0401 19:34:32.926252   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:32.926257   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:32.926307   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:32.964838   71168 cri.go:89] found id: ""
	I0401 19:34:32.964858   71168 logs.go:276] 0 containers: []
	W0401 19:34:32.964865   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:32.964870   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:32.964914   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:33.006903   71168 cri.go:89] found id: ""
	I0401 19:34:33.006920   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.006927   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:33.006933   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:33.006983   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:33.045663   71168 cri.go:89] found id: ""
	I0401 19:34:33.045691   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.045701   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:33.045709   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:33.045770   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:33.086262   71168 cri.go:89] found id: ""
	I0401 19:34:33.086290   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.086298   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:33.086303   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:33.086368   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:33.128302   71168 cri.go:89] found id: ""
	I0401 19:34:33.128327   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.128335   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:33.128341   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:33.128402   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:33.171155   71168 cri.go:89] found id: ""
	I0401 19:34:33.171189   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.171200   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:33.171207   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:33.171270   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:33.210793   71168 cri.go:89] found id: ""
	I0401 19:34:33.210820   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.210838   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:33.210848   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:33.210870   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:33.295035   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:33.295072   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:33.345381   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:33.345417   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:33.401082   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:33.401120   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:33.417029   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:33.417055   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:33.497027   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:35.997632   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:36.013106   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:36.013161   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:36.053013   71168 cri.go:89] found id: ""
	I0401 19:34:36.053040   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.053050   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:36.053059   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:36.053116   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:36.092268   71168 cri.go:89] found id: ""
	I0401 19:34:36.092297   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.092308   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:36.092315   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:36.092389   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:36.131347   71168 cri.go:89] found id: ""
	I0401 19:34:36.131391   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.131402   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:36.131409   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:36.131468   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:36.171402   71168 cri.go:89] found id: ""
	I0401 19:34:36.171432   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.171443   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:36.171449   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:36.171511   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:36.211239   71168 cri.go:89] found id: ""
	I0401 19:34:36.211272   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.211283   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:36.211290   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:36.211354   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:36.251246   71168 cri.go:89] found id: ""
	I0401 19:34:36.251275   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.251287   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:36.251294   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:36.251354   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:36.293140   71168 cri.go:89] found id: ""
	I0401 19:34:36.293162   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.293169   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:36.293174   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:36.293231   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:36.330281   71168 cri.go:89] found id: ""
	I0401 19:34:36.330308   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.330318   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:36.330328   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:36.330342   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:36.421753   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:36.421790   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:36.467555   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:36.467581   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:36.524747   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:36.524778   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:36.540946   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:36.540976   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:36.622452   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:39.122969   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:39.139092   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:39.139157   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:39.177337   71168 cri.go:89] found id: ""
	I0401 19:34:39.177368   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.177379   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:39.177387   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:39.177449   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:39.216471   71168 cri.go:89] found id: ""
	I0401 19:34:39.216498   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.216507   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:39.216512   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:39.216558   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:39.255526   71168 cri.go:89] found id: ""
	I0401 19:34:39.255550   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.255557   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:39.255563   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:39.255623   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:39.294682   71168 cri.go:89] found id: ""
	I0401 19:34:39.294711   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.294723   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:39.294735   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:39.294798   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:39.337416   71168 cri.go:89] found id: ""
	I0401 19:34:39.337437   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.337444   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:39.337449   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:39.337510   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:39.384560   71168 cri.go:89] found id: ""
	I0401 19:34:39.384586   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.384598   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:39.384608   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:39.384671   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:39.421459   71168 cri.go:89] found id: ""
	I0401 19:34:39.421480   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.421488   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:39.421493   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:39.421540   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:39.460221   71168 cri.go:89] found id: ""
	I0401 19:34:39.460246   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.460256   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:39.460264   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:39.460275   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:39.543800   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:39.543835   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:39.591012   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:39.591038   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:39.645994   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:39.646025   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:39.662223   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:39.662250   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:39.741574   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:42.242541   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:42.256933   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:42.257006   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:42.294268   71168 cri.go:89] found id: ""
	I0401 19:34:42.294297   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.294308   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:42.294315   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:42.294370   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:42.331978   71168 cri.go:89] found id: ""
	I0401 19:34:42.331999   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.332005   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:42.332013   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:42.332078   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:42.369858   71168 cri.go:89] found id: ""
	I0401 19:34:42.369885   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.369895   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:42.369903   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:42.369989   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:42.412688   71168 cri.go:89] found id: ""
	I0401 19:34:42.412708   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.412715   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:42.412720   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:42.412776   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:42.449180   71168 cri.go:89] found id: ""
	I0401 19:34:42.449209   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.449217   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:42.449225   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:42.449283   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:42.488582   71168 cri.go:89] found id: ""
	I0401 19:34:42.488606   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.488613   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:42.488618   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:42.488665   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:42.527883   71168 cri.go:89] found id: ""
	I0401 19:34:42.527915   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.527924   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:42.527931   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:42.527993   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:42.564372   71168 cri.go:89] found id: ""
	I0401 19:34:42.564394   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.564401   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:42.564408   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:42.564419   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:42.646940   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:42.646974   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:42.689323   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:42.689354   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:42.744996   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:42.745024   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:42.761404   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:42.761429   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:42.836643   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:45.337809   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:45.352936   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:45.353029   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:45.395073   71168 cri.go:89] found id: ""
	I0401 19:34:45.395098   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.395106   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:45.395112   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:45.395160   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:45.433537   71168 cri.go:89] found id: ""
	I0401 19:34:45.433567   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.433578   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:45.433586   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:45.433658   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:45.477108   71168 cri.go:89] found id: ""
	I0401 19:34:45.477138   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.477150   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:45.477157   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:45.477217   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:45.520350   71168 cri.go:89] found id: ""
	I0401 19:34:45.520389   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.520401   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:45.520408   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:45.520466   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:45.562871   71168 cri.go:89] found id: ""
	I0401 19:34:45.562901   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.562911   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:45.562918   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:45.562988   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:45.619214   71168 cri.go:89] found id: ""
	I0401 19:34:45.619237   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.619248   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:45.619255   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:45.619317   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:45.664361   71168 cri.go:89] found id: ""
	I0401 19:34:45.664387   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.664398   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:45.664405   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:45.664463   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:45.701087   71168 cri.go:89] found id: ""
	I0401 19:34:45.701110   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.701120   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:45.701128   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:45.701139   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:45.716839   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:45.716863   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:45.794609   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:45.794630   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:45.794642   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:45.883428   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:45.883464   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:45.934342   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:45.934374   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:48.492128   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:48.508674   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:48.508746   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:48.549522   71168 cri.go:89] found id: ""
	I0401 19:34:48.549545   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.549555   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:48.549561   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:48.549619   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:48.587014   71168 cri.go:89] found id: ""
	I0401 19:34:48.587037   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.587045   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:48.587051   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:48.587108   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:48.629591   71168 cri.go:89] found id: ""
	I0401 19:34:48.629620   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.629630   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:48.629636   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:48.629707   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:48.669335   71168 cri.go:89] found id: ""
	I0401 19:34:48.669363   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.669383   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:48.669400   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:48.669455   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:48.708322   71168 cri.go:89] found id: ""
	I0401 19:34:48.708350   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.708356   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:48.708362   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:48.708407   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:48.750680   71168 cri.go:89] found id: ""
	I0401 19:34:48.750708   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.750718   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:48.750726   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:48.750791   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:48.790946   71168 cri.go:89] found id: ""
	I0401 19:34:48.790974   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.790984   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:48.790998   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:48.791055   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:48.828849   71168 cri.go:89] found id: ""
	I0401 19:34:48.828871   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.828880   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:48.828889   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:48.828904   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:48.909182   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:48.909212   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:48.954285   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:48.954315   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:49.010340   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:49.010372   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:49.026493   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:49.026516   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:49.099662   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:51.599905   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:51.618094   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:51.618168   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:51.657003   71168 cri.go:89] found id: ""
	I0401 19:34:51.657028   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.657038   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:51.657046   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:51.657104   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:51.696415   71168 cri.go:89] found id: ""
	I0401 19:34:51.696441   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.696451   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:51.696456   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:51.696515   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:51.734416   71168 cri.go:89] found id: ""
	I0401 19:34:51.734445   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.734457   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:51.734465   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:51.734523   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:51.774895   71168 cri.go:89] found id: ""
	I0401 19:34:51.774918   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.774925   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:51.774931   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:51.774980   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:51.814602   71168 cri.go:89] found id: ""
	I0401 19:34:51.814623   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.814631   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:51.814637   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:51.814687   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:51.856035   71168 cri.go:89] found id: ""
	I0401 19:34:51.856061   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.856071   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:51.856078   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:51.856132   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:51.897415   71168 cri.go:89] found id: ""
	I0401 19:34:51.897440   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.897451   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:51.897457   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:51.897516   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:51.937406   71168 cri.go:89] found id: ""
	I0401 19:34:51.937428   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.937436   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:51.937443   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:51.937456   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:51.981508   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:51.981535   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:52.039956   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:52.039995   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:52.066403   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:52.066429   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:52.172509   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:52.172530   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:52.172541   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:54.761459   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:54.776972   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:54.777030   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:54.822945   71168 cri.go:89] found id: ""
	I0401 19:34:54.822983   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.822996   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:54.823004   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:54.823066   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:54.861602   71168 cri.go:89] found id: ""
	I0401 19:34:54.861629   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.861639   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:54.861662   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:54.861727   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:54.901283   71168 cri.go:89] found id: ""
	I0401 19:34:54.901309   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.901319   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:54.901327   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:54.901385   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:54.940071   71168 cri.go:89] found id: ""
	I0401 19:34:54.940103   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.940114   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:54.940121   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:54.940179   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:54.978447   71168 cri.go:89] found id: ""
	I0401 19:34:54.978474   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.978485   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:54.978493   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:54.978563   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:55.021786   71168 cri.go:89] found id: ""
	I0401 19:34:55.021810   71168 logs.go:276] 0 containers: []
	W0401 19:34:55.021819   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:55.021827   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:55.021886   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:55.059861   71168 cri.go:89] found id: ""
	I0401 19:34:55.059889   71168 logs.go:276] 0 containers: []
	W0401 19:34:55.059899   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:55.059907   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:55.059963   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:55.104484   71168 cri.go:89] found id: ""
	I0401 19:34:55.104516   71168 logs.go:276] 0 containers: []
	W0401 19:34:55.104527   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:55.104537   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:55.104551   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:55.152197   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:55.152221   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:55.203900   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:55.203942   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:55.221553   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:55.221580   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:55.299651   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:55.299668   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:55.299680   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:57.877382   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:57.899186   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:57.899260   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:57.948146   71168 cri.go:89] found id: ""
	I0401 19:34:57.948182   71168 logs.go:276] 0 containers: []
	W0401 19:34:57.948192   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:57.948203   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:57.948270   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:58.017121   71168 cri.go:89] found id: ""
	I0401 19:34:58.017150   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.017161   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:58.017168   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:58.017230   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:58.073881   71168 cri.go:89] found id: ""
	I0401 19:34:58.073905   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.073916   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:58.073923   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:58.073979   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:58.115410   71168 cri.go:89] found id: ""
	I0401 19:34:58.115435   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.115445   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:58.115452   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:58.115512   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:58.155452   71168 cri.go:89] found id: ""
	I0401 19:34:58.155481   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.155492   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:58.155500   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:58.155562   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:58.197335   71168 cri.go:89] found id: ""
	I0401 19:34:58.197376   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.197397   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:58.197407   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:58.197469   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:58.239782   71168 cri.go:89] found id: ""
	I0401 19:34:58.239808   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.239815   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:58.239820   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:58.239870   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:58.280936   71168 cri.go:89] found id: ""
	I0401 19:34:58.280961   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.280971   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:58.280982   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:58.280998   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:58.368357   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:58.368401   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:58.415104   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:58.415132   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:58.474719   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:58.474749   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:58.491004   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:58.491031   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:58.573999   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:01.074865   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:01.091751   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:01.091822   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:01.140053   71168 cri.go:89] found id: ""
	I0401 19:35:01.140079   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.140089   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:01.140096   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:01.140154   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:01.184046   71168 cri.go:89] found id: ""
	I0401 19:35:01.184078   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.184089   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:01.184096   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:01.184161   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:01.225962   71168 cri.go:89] found id: ""
	I0401 19:35:01.225989   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.225999   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:01.226006   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:01.226072   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:01.267212   71168 cri.go:89] found id: ""
	I0401 19:35:01.267234   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.267242   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:01.267247   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:01.267308   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:01.307039   71168 cri.go:89] found id: ""
	I0401 19:35:01.307066   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.307074   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:01.307080   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:01.307132   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:01.347856   71168 cri.go:89] found id: ""
	I0401 19:35:01.347886   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.347898   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:01.347905   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:01.347962   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:01.385893   71168 cri.go:89] found id: ""
	I0401 19:35:01.385923   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.385933   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:01.385940   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:01.385999   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:01.422983   71168 cri.go:89] found id: ""
	I0401 19:35:01.423012   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.423022   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:01.423033   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:01.423048   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:01.469842   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:01.469875   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:01.527536   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:01.527566   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:01.542332   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:01.542357   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:01.617252   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:01.617270   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:01.617284   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:04.195171   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:04.211963   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:04.212015   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:04.252298   71168 cri.go:89] found id: ""
	I0401 19:35:04.252324   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.252334   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:04.252342   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:04.252396   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:04.299619   71168 cri.go:89] found id: ""
	I0401 19:35:04.299649   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.299659   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:04.299667   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:04.299725   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:04.347386   71168 cri.go:89] found id: ""
	I0401 19:35:04.347409   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.347416   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:04.347426   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:04.347473   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:04.385902   71168 cri.go:89] found id: ""
	I0401 19:35:04.385929   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.385937   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:04.385943   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:04.385993   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:04.425235   71168 cri.go:89] found id: ""
	I0401 19:35:04.425258   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.425266   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:04.425271   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:04.425325   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:04.463849   71168 cri.go:89] found id: ""
	I0401 19:35:04.463881   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.463891   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:04.463899   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:04.463974   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:04.501983   71168 cri.go:89] found id: ""
	I0401 19:35:04.502003   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.502010   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:04.502016   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:04.502072   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:04.544082   71168 cri.go:89] found id: ""
	I0401 19:35:04.544103   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.544113   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:04.544124   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:04.544141   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:04.600545   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:04.600578   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:04.617049   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:04.617075   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:04.696927   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:04.696945   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:04.696957   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:04.780024   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:04.780056   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
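Every "describe nodes" attempt in these iterations fails the same way: the connection to localhost:8443 is refused, which simply restates that no API server is listening yet. A quick way to confirm that from inside the node is to check the port directly; this is a generic sketch using standard tooling (ss and curl), not commands taken from this report, and the port 8443 is the one named in the error above:

    # Is anything listening on the apiserver port from the error message?
    sudo ss -ltnp | grep ':8443' || echo "nothing listening on 8443"
    # If something is listening, probe the health endpoint (self-signed cert, hence -k).
    curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"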
	I0401 19:35:07.323161   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:07.339368   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:07.339432   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:07.379407   71168 cri.go:89] found id: ""
	I0401 19:35:07.379429   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.379440   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:07.379452   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:07.379497   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:07.418700   71168 cri.go:89] found id: ""
	I0401 19:35:07.418728   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.418737   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:07.418743   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:07.418788   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:07.457580   71168 cri.go:89] found id: ""
	I0401 19:35:07.457606   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.457617   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:07.457624   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:07.457696   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:07.498211   71168 cri.go:89] found id: ""
	I0401 19:35:07.498240   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.498249   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:07.498256   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:07.498318   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:07.539659   71168 cri.go:89] found id: ""
	I0401 19:35:07.539681   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.539692   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:07.539699   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:07.539759   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:07.577414   71168 cri.go:89] found id: ""
	I0401 19:35:07.577440   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.577450   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:07.577456   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:07.577520   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:07.623318   71168 cri.go:89] found id: ""
	I0401 19:35:07.623340   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.623352   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:07.623358   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:07.623416   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:07.664791   71168 cri.go:89] found id: ""
	I0401 19:35:07.664823   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.664834   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:07.664842   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:07.664854   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:07.722158   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:07.722186   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:07.737838   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:07.737876   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:07.813694   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:07.813717   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:07.813728   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:07.899698   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:07.899740   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:10.446184   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:10.460860   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:10.460927   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:10.505656   71168 cri.go:89] found id: ""
	I0401 19:35:10.505685   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.505692   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:10.505698   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:10.505742   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:10.547771   71168 cri.go:89] found id: ""
	I0401 19:35:10.547796   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.547814   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:10.547820   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:10.547876   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:10.584625   71168 cri.go:89] found id: ""
	I0401 19:35:10.584652   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.584664   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:10.584671   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:10.584737   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:10.625512   71168 cri.go:89] found id: ""
	I0401 19:35:10.625541   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.625552   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:10.625559   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:10.625618   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:10.664905   71168 cri.go:89] found id: ""
	I0401 19:35:10.664936   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.664949   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:10.664955   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:10.665015   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:10.703043   71168 cri.go:89] found id: ""
	I0401 19:35:10.703071   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.703082   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:10.703090   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:10.703149   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:10.747750   71168 cri.go:89] found id: ""
	I0401 19:35:10.747777   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.747790   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:10.747796   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:10.747841   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:10.792944   71168 cri.go:89] found id: ""
	I0401 19:35:10.792970   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.792980   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:10.792989   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:10.793004   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:10.854029   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:10.854058   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:10.868968   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:10.868991   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:10.940537   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:10.940564   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:10.940579   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:11.018201   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:11.018231   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:13.562139   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:13.579370   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:13.579435   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:13.620811   71168 cri.go:89] found id: ""
	I0401 19:35:13.620838   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.620847   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:13.620859   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:13.620919   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:13.661377   71168 cri.go:89] found id: ""
	I0401 19:35:13.661408   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.661419   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:13.661427   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:13.661489   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:13.702413   71168 cri.go:89] found id: ""
	I0401 19:35:13.702436   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.702445   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:13.702453   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:13.702519   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:13.748760   71168 cri.go:89] found id: ""
	I0401 19:35:13.748788   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.748796   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:13.748803   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:13.748874   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:13.795438   71168 cri.go:89] found id: ""
	I0401 19:35:13.795460   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.795472   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:13.795479   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:13.795537   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:13.835572   71168 cri.go:89] found id: ""
	I0401 19:35:13.835601   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.835612   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:13.835619   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:13.835677   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:13.874301   71168 cri.go:89] found id: ""
	I0401 19:35:13.874327   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.874336   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:13.874342   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:13.874387   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:13.914847   71168 cri.go:89] found id: ""
	I0401 19:35:13.914876   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.914883   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:13.914891   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:13.914904   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:13.929329   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:13.929355   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:14.004332   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:14.004358   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:14.004373   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:14.084901   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:14.084935   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:14.134471   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:14.134500   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:16.693432   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:16.710258   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:16.710332   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:16.757213   71168 cri.go:89] found id: ""
	I0401 19:35:16.757243   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.757254   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:16.757261   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:16.757320   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:16.797134   71168 cri.go:89] found id: ""
	I0401 19:35:16.797174   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.797182   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:16.797188   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:16.797233   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:16.839502   71168 cri.go:89] found id: ""
	I0401 19:35:16.839530   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.839541   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:16.839549   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:16.839609   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:16.881380   71168 cri.go:89] found id: ""
	I0401 19:35:16.881406   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.881413   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:16.881419   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:16.881472   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:16.922968   71168 cri.go:89] found id: ""
	I0401 19:35:16.922991   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.923002   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:16.923009   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:16.923069   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:16.961262   71168 cri.go:89] found id: ""
	I0401 19:35:16.961290   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.961301   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:16.961310   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:16.961369   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:16.996901   71168 cri.go:89] found id: ""
	I0401 19:35:16.996929   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.996940   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:16.996947   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:16.997004   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:17.038447   71168 cri.go:89] found id: ""
	I0401 19:35:17.038473   71168 logs.go:276] 0 containers: []
	W0401 19:35:17.038481   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:17.038489   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:17.038500   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:17.079979   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:17.080013   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:17.136973   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:17.137010   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:17.153083   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:17.153108   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:17.232055   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:17.232078   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:17.232096   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
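Each iteration also collects the kubelet, dmesg and CRI-O journals, which is where the reason the control-plane static pods never start would actually appear (the describe-nodes step is empty because the API server itself is down). A manual equivalent, assembled from the journalctl and dmesg invocations visible above with the standard --no-pager flag added as an assumption for interactive use:

    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u crio -n 400 --no-pager
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400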
	I0401 19:35:19.813327   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:19.830168   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:19.830229   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:19.875502   71168 cri.go:89] found id: ""
	I0401 19:35:19.875524   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.875532   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:19.875537   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:19.875591   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:19.916084   71168 cri.go:89] found id: ""
	I0401 19:35:19.916107   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.916117   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:19.916125   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:19.916188   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:19.960673   71168 cri.go:89] found id: ""
	I0401 19:35:19.960699   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.960710   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:19.960717   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:19.960796   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:19.998736   71168 cri.go:89] found id: ""
	I0401 19:35:19.998760   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.998768   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:19.998776   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:19.998840   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:20.043382   71168 cri.go:89] found id: ""
	I0401 19:35:20.043408   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.043418   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:20.043425   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:20.043492   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:20.086132   71168 cri.go:89] found id: ""
	I0401 19:35:20.086158   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.086171   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:20.086178   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:20.086239   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:20.131052   71168 cri.go:89] found id: ""
	I0401 19:35:20.131074   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.131081   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:20.131091   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:20.131151   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:20.174668   71168 cri.go:89] found id: ""
	I0401 19:35:20.174693   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.174699   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:20.174707   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:20.174718   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:20.266503   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:20.266521   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:20.266534   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:20.351555   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:20.351586   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:20.400261   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:20.400289   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:20.455149   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:20.455183   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:22.972675   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:22.987481   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:22.987555   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:23.032429   71168 cri.go:89] found id: ""
	I0401 19:35:23.032453   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.032461   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:23.032467   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:23.032522   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:23.073286   71168 cri.go:89] found id: ""
	I0401 19:35:23.073313   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.073322   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:23.073330   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:23.073397   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:23.115424   71168 cri.go:89] found id: ""
	I0401 19:35:23.115447   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.115454   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:23.115459   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:23.115506   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:23.164883   71168 cri.go:89] found id: ""
	I0401 19:35:23.164908   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.164918   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:23.164925   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:23.164985   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:23.213617   71168 cri.go:89] found id: ""
	I0401 19:35:23.213656   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.213668   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:23.213675   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:23.213787   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:23.264846   71168 cri.go:89] found id: ""
	I0401 19:35:23.264874   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.264886   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:23.264893   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:23.264958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:23.306467   71168 cri.go:89] found id: ""
	I0401 19:35:23.306495   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.306506   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:23.306514   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:23.306566   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:23.358574   71168 cri.go:89] found id: ""
	I0401 19:35:23.358597   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.358608   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:23.358619   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:23.358634   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:23.437486   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:23.437510   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:23.437525   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:23.555307   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:23.555350   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:23.601776   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:23.601808   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:23.666654   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:23.666688   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:26.184503   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:26.199924   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:26.199997   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:26.252151   71168 cri.go:89] found id: ""
	I0401 19:35:26.252181   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.252192   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:26.252199   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:26.252266   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:26.299094   71168 cri.go:89] found id: ""
	I0401 19:35:26.299126   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.299134   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:26.299139   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:26.299194   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:26.340483   71168 cri.go:89] found id: ""
	I0401 19:35:26.340516   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.340533   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:26.340540   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:26.340599   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:26.387153   71168 cri.go:89] found id: ""
	I0401 19:35:26.387180   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.387188   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:26.387194   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:26.387261   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:26.430746   71168 cri.go:89] found id: ""
	I0401 19:35:26.430773   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.430781   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:26.430787   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:26.430854   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:26.478412   71168 cri.go:89] found id: ""
	I0401 19:35:26.478440   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.478451   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:26.478458   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:26.478523   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:26.521120   71168 cri.go:89] found id: ""
	I0401 19:35:26.521150   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.521161   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:26.521168   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:26.521229   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:26.564678   71168 cri.go:89] found id: ""
	I0401 19:35:26.564721   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.564731   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:26.564742   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:26.564757   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:26.625271   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:26.625308   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:26.640505   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:26.640529   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:26.722753   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:26.722777   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:26.722795   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:26.830507   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:26.830551   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:29.386655   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:29.401232   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:29.401308   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:29.440479   71168 cri.go:89] found id: ""
	I0401 19:35:29.440511   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.440522   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:29.440530   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:29.440590   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:29.479022   71168 cri.go:89] found id: ""
	I0401 19:35:29.479049   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.479057   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:29.479062   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:29.479119   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:29.518179   71168 cri.go:89] found id: ""
	I0401 19:35:29.518208   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.518216   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:29.518222   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:29.518281   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:29.556654   71168 cri.go:89] found id: ""
	I0401 19:35:29.556682   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.556692   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:29.556712   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:29.556772   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:29.593258   71168 cri.go:89] found id: ""
	I0401 19:35:29.593287   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.593295   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:29.593301   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:29.593349   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:29.637215   71168 cri.go:89] found id: ""
	I0401 19:35:29.637243   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.637253   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:29.637261   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:29.637321   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:29.683052   71168 cri.go:89] found id: ""
	I0401 19:35:29.683090   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.683100   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:29.683108   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:29.683164   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:29.730948   71168 cri.go:89] found id: ""
	I0401 19:35:29.730979   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.730991   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:29.731001   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:29.731014   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:29.781969   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:29.782001   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:29.800700   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:29.800729   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:29.877200   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:29.877225   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:29.877244   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:29.958110   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:29.958144   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:32.501060   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:32.519551   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:32.519619   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:32.579776   71168 cri.go:89] found id: ""
	I0401 19:35:32.579802   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.579813   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:32.579824   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:32.579886   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:32.643271   71168 cri.go:89] found id: ""
	I0401 19:35:32.643300   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.643312   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:32.643322   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:32.643387   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:32.688576   71168 cri.go:89] found id: ""
	I0401 19:35:32.688605   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.688614   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:32.688619   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:32.688678   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:32.729867   71168 cri.go:89] found id: ""
	I0401 19:35:32.729890   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.729898   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:32.729906   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:32.729962   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:32.771485   71168 cri.go:89] found id: ""
	I0401 19:35:32.771508   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.771515   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:32.771521   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:32.771574   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:32.809362   71168 cri.go:89] found id: ""
	I0401 19:35:32.809385   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.809393   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:32.809398   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:32.809458   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:32.844916   71168 cri.go:89] found id: ""
	I0401 19:35:32.844941   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.844950   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:32.844955   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:32.845000   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:32.884638   71168 cri.go:89] found id: ""
	I0401 19:35:32.884660   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.884670   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:32.884680   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:32.884695   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:32.937462   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:32.937489   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:32.952842   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:32.952871   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:33.035254   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:33.035278   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:33.035294   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:33.114963   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:33.114994   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:35.662190   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:35.675960   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:35.676016   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:35.717300   71168 cri.go:89] found id: ""
	I0401 19:35:35.717329   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.717340   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:35.717347   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:35.717409   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:35.756687   71168 cri.go:89] found id: ""
	I0401 19:35:35.756713   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.756723   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:35.756730   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:35.756788   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:35.796995   71168 cri.go:89] found id: ""
	I0401 19:35:35.797017   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.797025   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:35.797030   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:35.797083   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:35.840419   71168 cri.go:89] found id: ""
	I0401 19:35:35.840444   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.840455   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:35.840462   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:35.840523   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:35.880059   71168 cri.go:89] found id: ""
	I0401 19:35:35.880093   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.880107   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:35.880113   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:35.880171   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:35.929491   71168 cri.go:89] found id: ""
	I0401 19:35:35.929515   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.929523   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:35.929530   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:35.929584   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:35.968745   71168 cri.go:89] found id: ""
	I0401 19:35:35.968771   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.968778   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:35.968784   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:35.968833   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:36.014294   71168 cri.go:89] found id: ""
	I0401 19:35:36.014318   71168 logs.go:276] 0 containers: []
	W0401 19:35:36.014328   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:36.014338   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:36.014359   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:36.068418   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:36.068450   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:36.086343   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:36.086367   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:36.172027   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:36.172053   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:36.172067   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:36.250046   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:36.250080   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:38.794261   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:38.809535   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:38.809597   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:38.849139   71168 cri.go:89] found id: ""
	I0401 19:35:38.849167   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.849176   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:38.849181   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:38.849238   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:38.886787   71168 cri.go:89] found id: ""
	I0401 19:35:38.886811   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.886821   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:38.886828   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:38.886891   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:38.923388   71168 cri.go:89] found id: ""
	I0401 19:35:38.923419   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.923431   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:38.923438   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:38.923497   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:38.959583   71168 cri.go:89] found id: ""
	I0401 19:35:38.959608   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.959619   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:38.959626   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:38.959682   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:38.998201   71168 cri.go:89] found id: ""
	I0401 19:35:38.998226   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.998233   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:38.998238   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:38.998294   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:39.039669   71168 cri.go:89] found id: ""
	I0401 19:35:39.039692   71168 logs.go:276] 0 containers: []
	W0401 19:35:39.039703   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:39.039710   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:39.039767   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:39.077331   71168 cri.go:89] found id: ""
	I0401 19:35:39.077358   71168 logs.go:276] 0 containers: []
	W0401 19:35:39.077366   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:39.077371   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:39.077423   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:39.125999   71168 cri.go:89] found id: ""
	I0401 19:35:39.126021   71168 logs.go:276] 0 containers: []
	W0401 19:35:39.126031   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:39.126041   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:39.126054   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:39.183579   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:39.183612   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:39.201200   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:39.201227   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:39.282262   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:39.282280   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:39.282291   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:39.365340   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:39.365370   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:41.914909   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:41.929243   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:41.929317   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:41.975594   71168 cri.go:89] found id: ""
	I0401 19:35:41.975622   71168 logs.go:276] 0 containers: []
	W0401 19:35:41.975632   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:41.975639   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:41.975701   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:42.023558   71168 cri.go:89] found id: ""
	I0401 19:35:42.023585   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.023596   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:42.023602   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:42.023662   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:42.074242   71168 cri.go:89] found id: ""
	I0401 19:35:42.074266   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.074276   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:42.074283   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:42.074340   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:42.123327   71168 cri.go:89] found id: ""
	I0401 19:35:42.123358   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.123370   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:42.123378   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:42.123452   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:42.168931   71168 cri.go:89] found id: ""
	I0401 19:35:42.168961   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.168972   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:42.168980   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:42.169037   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:42.211747   71168 cri.go:89] found id: ""
	I0401 19:35:42.211774   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.211784   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:42.211793   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:42.211849   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:42.251809   71168 cri.go:89] found id: ""
	I0401 19:35:42.251830   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.251841   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:42.251849   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:42.251908   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:42.293266   71168 cri.go:89] found id: ""
	I0401 19:35:42.293361   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.293377   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:42.293388   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:42.293405   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:42.364502   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:42.364553   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:42.381147   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:42.381180   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:42.464219   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:42.464238   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:42.464249   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:42.544564   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:42.544594   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:45.105777   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:45.119911   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:45.119976   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:45.161871   71168 cri.go:89] found id: ""
	I0401 19:35:45.161890   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.161897   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:45.161902   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:45.161949   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:45.198677   71168 cri.go:89] found id: ""
	I0401 19:35:45.198702   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.198710   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:45.198715   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:45.198776   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:45.236938   71168 cri.go:89] found id: ""
	I0401 19:35:45.236972   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.236983   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:45.236990   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:45.237052   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:45.280621   71168 cri.go:89] found id: ""
	I0401 19:35:45.280650   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.280661   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:45.280668   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:45.280727   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:45.326794   71168 cri.go:89] found id: ""
	I0401 19:35:45.326818   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.326827   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:45.326834   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:45.326892   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:45.369405   71168 cri.go:89] found id: ""
	I0401 19:35:45.369431   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.369441   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:45.369446   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:45.369501   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:45.407609   71168 cri.go:89] found id: ""
	I0401 19:35:45.407635   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.407643   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:45.407648   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:45.407720   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:45.444848   71168 cri.go:89] found id: ""
	I0401 19:35:45.444871   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.444881   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:45.444891   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:45.444911   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:45.531938   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:45.531957   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:45.531972   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:45.617109   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:45.617141   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:45.663559   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:45.663591   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:45.717622   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:45.717670   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:48.234834   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:48.250543   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:48.250606   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:48.294396   71168 cri.go:89] found id: ""
	I0401 19:35:48.294423   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.294432   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:48.294439   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:48.294504   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:48.336866   71168 cri.go:89] found id: ""
	I0401 19:35:48.336892   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.336902   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:48.336908   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:48.336965   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:48.376031   71168 cri.go:89] found id: ""
	I0401 19:35:48.376065   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.376076   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:48.376084   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:48.376142   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:48.414975   71168 cri.go:89] found id: ""
	I0401 19:35:48.414995   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.415003   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:48.415008   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:48.415058   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:48.453484   71168 cri.go:89] found id: ""
	I0401 19:35:48.453513   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.453524   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:48.453532   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:48.453593   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:48.487712   71168 cri.go:89] found id: ""
	I0401 19:35:48.487739   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.487749   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:48.487757   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:48.487815   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:48.533331   71168 cri.go:89] found id: ""
	I0401 19:35:48.533364   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.533375   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:48.533383   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:48.533442   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:48.574103   71168 cri.go:89] found id: ""
	I0401 19:35:48.574131   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.574139   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:48.574147   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:48.574160   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:48.632068   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:48.632098   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:48.649342   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:48.649369   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:48.721799   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:48.721822   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:48.721836   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:48.821549   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:48.821584   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:51.364852   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:51.380281   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:51.380362   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:51.423383   71168 cri.go:89] found id: ""
	I0401 19:35:51.423412   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.423422   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:51.423430   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:51.423490   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:51.470331   71168 cri.go:89] found id: ""
	I0401 19:35:51.470359   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.470370   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:51.470378   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:51.470441   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:51.520310   71168 cri.go:89] found id: ""
	I0401 19:35:51.520339   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.520350   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:51.520358   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:51.520414   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:51.568681   71168 cri.go:89] found id: ""
	I0401 19:35:51.568706   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.568716   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:51.568724   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:51.568843   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:51.615146   71168 cri.go:89] found id: ""
	I0401 19:35:51.615174   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.615185   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:51.615193   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:51.615256   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:51.658678   71168 cri.go:89] found id: ""
	I0401 19:35:51.658703   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.658712   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:51.658720   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:51.658791   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:51.700071   71168 cri.go:89] found id: ""
	I0401 19:35:51.700097   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.700108   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:51.700114   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:51.700177   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:51.746772   71168 cri.go:89] found id: ""
	I0401 19:35:51.746798   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.746809   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:51.746826   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:51.746849   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:51.762321   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:51.762350   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:51.843300   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:51.843322   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:51.843337   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:51.919059   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:51.919090   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:51.965899   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:51.965925   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:54.523484   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:54.542004   71168 kubeadm.go:591] duration metric: took 4m4.024054342s to restartPrimaryControlPlane
	W0401 19:35:54.542067   71168 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:35:54.542088   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
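[editor's note] At this point the ~4-minute wait has expired, so minikube gives up on restarting the existing control plane and falls back to a full reset followed by a fresh kubeadm init, which is what the next lines record. The sketch below is a rough reading of that control flow based on this log, not minikube's actual kubeadm.go; the sh helper and the locally executed commands are illustrative assumptions (minikube runs them on the guest over SSH), and the long --ignore-preflight-errors list from the log is elided here.

// reset_and_reinit_sketch.go - illustrative only, not minikube source.
package main

import (
	"fmt"
	"os/exec"
)

// sh runs a command line; a local stand-in for minikube's ssh_runner.
func sh(cmdline string) error {
	return exec.Command("/bin/bash", "-c", cmdline).Run()
}

func main() {
	kubeadm := `sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm`

	// Step 1: discard the half-restarted control plane.
	if err := sh(kubeadm + " reset --cri-socket /var/run/crio/crio.sock --force"); err != nil {
		fmt.Println("reset failed:", err)
	}

	// Step 2: re-initialise from the prepared config; preflight errors about
	// pre-existing manifests, data directories and ports are expected and ignored.
	if err := sh(kubeadm + " init --config /var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println("init failed; the log below shows minikube retrying once more:", err)
	}
}

In this run the re-init also fails (the kubelet never answers its healthz probe), which is the kubeadm error reproduced further down.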
	I0401 19:35:55.179619   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:35:55.196424   71168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:35:55.209517   71168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:35:55.222643   71168 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:35:55.222664   71168 kubeadm.go:156] found existing configuration files:
	
	I0401 19:35:55.222714   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:35:55.234756   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:35:55.234813   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:35:55.246725   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:35:55.258440   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:35:55.258499   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:35:55.270106   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:35:55.280724   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:35:55.280776   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:35:55.293630   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:35:55.305588   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:35:55.305660   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:35:55.318308   71168 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:35:55.574896   71168 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:37:51.561231   71168 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 19:37:51.561356   71168 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0401 19:37:51.563350   71168 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0401 19:37:51.563417   71168 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:37:51.563497   71168 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:37:51.563596   71168 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:37:51.563711   71168 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:37:51.563797   71168 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:37:51.565710   71168 out.go:204]   - Generating certificates and keys ...
	I0401 19:37:51.565809   71168 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:37:51.565908   71168 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:37:51.566051   71168 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:37:51.566136   71168 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:37:51.566230   71168 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:37:51.566325   71168 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:37:51.566402   71168 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:37:51.566464   71168 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:37:51.566580   71168 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:37:51.566688   71168 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:37:51.566727   71168 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:37:51.566774   71168 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:37:51.566822   71168 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:37:51.566917   71168 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:37:51.567001   71168 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:37:51.567068   71168 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:37:51.567210   71168 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:37:51.567314   71168 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:37:51.567371   71168 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:37:51.567473   71168 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:37:51.569285   71168 out.go:204]   - Booting up control plane ...
	I0401 19:37:51.569394   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:37:51.569498   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:37:51.569568   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:37:51.569661   71168 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:37:51.569802   71168 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:37:51.569866   71168 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0401 19:37:51.569957   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.570195   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.570287   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.570514   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.570589   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.570769   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.570859   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.571033   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.571134   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.571342   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.571351   71168 kubeadm.go:309] 
	I0401 19:37:51.571394   71168 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0401 19:37:51.571453   71168 kubeadm.go:309] 		timed out waiting for the condition
	I0401 19:37:51.571475   71168 kubeadm.go:309] 
	I0401 19:37:51.571521   71168 kubeadm.go:309] 	This error is likely caused by:
	I0401 19:37:51.571558   71168 kubeadm.go:309] 		- The kubelet is not running
	I0401 19:37:51.571676   71168 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 19:37:51.571687   71168 kubeadm.go:309] 
	I0401 19:37:51.571824   71168 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 19:37:51.571880   71168 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0401 19:37:51.571921   71168 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0401 19:37:51.571931   71168 kubeadm.go:309] 
	I0401 19:37:51.572077   71168 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 19:37:51.572198   71168 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 19:37:51.572209   71168 kubeadm.go:309] 
	I0401 19:37:51.572359   71168 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 19:37:51.572477   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 19:37:51.572576   71168 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0401 19:37:51.572676   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 19:37:51.572731   71168 kubeadm.go:309] 
	W0401 19:37:51.572793   71168 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0401 19:37:51.572851   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:37:52.428554   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:37:52.445151   71168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:37:52.456989   71168 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:37:52.457010   71168 kubeadm.go:156] found existing configuration files:
	
	I0401 19:37:52.457053   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:37:52.468305   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:37:52.468375   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:37:52.479305   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:37:52.489703   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:37:52.489753   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:37:52.501023   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:37:52.512418   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:37:52.512480   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:37:52.523850   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:37:52.534358   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:37:52.534425   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:37:52.546135   71168 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:37:52.779427   71168 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:39:48.856665   71168 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 19:39:48.856779   71168 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0401 19:39:48.858840   71168 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0401 19:39:48.858896   71168 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:39:48.858987   71168 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:39:48.859122   71168 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:39:48.859222   71168 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:39:48.859314   71168 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:39:48.861104   71168 out.go:204]   - Generating certificates and keys ...
	I0401 19:39:48.861202   71168 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:39:48.861277   71168 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:39:48.861381   71168 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:39:48.861492   71168 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:39:48.861596   71168 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:39:48.861699   71168 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:39:48.861791   71168 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:39:48.861897   71168 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:39:48.862009   71168 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:39:48.862118   71168 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:39:48.862176   71168 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:39:48.862260   71168 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:39:48.862338   71168 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:39:48.862420   71168 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:39:48.862480   71168 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:39:48.862527   71168 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:39:48.862618   71168 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:39:48.862693   71168 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:39:48.862734   71168 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:39:48.862804   71168 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:39:48.864199   71168 out.go:204]   - Booting up control plane ...
	I0401 19:39:48.864291   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:39:48.864359   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:39:48.864420   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:39:48.864504   71168 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:39:48.864712   71168 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:39:48.864788   71168 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0401 19:39:48.864871   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865069   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.865153   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865344   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.865453   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865674   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.865755   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865989   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.866095   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.866269   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.866285   71168 kubeadm.go:309] 
	I0401 19:39:48.866343   71168 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0401 19:39:48.866402   71168 kubeadm.go:309] 		timed out waiting for the condition
	I0401 19:39:48.866414   71168 kubeadm.go:309] 
	I0401 19:39:48.866458   71168 kubeadm.go:309] 	This error is likely caused by:
	I0401 19:39:48.866506   71168 kubeadm.go:309] 		- The kubelet is not running
	I0401 19:39:48.866651   71168 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 19:39:48.866665   71168 kubeadm.go:309] 
	I0401 19:39:48.866816   71168 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 19:39:48.866865   71168 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0401 19:39:48.866895   71168 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0401 19:39:48.866901   71168 kubeadm.go:309] 
	I0401 19:39:48.866989   71168 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 19:39:48.867061   71168 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 19:39:48.867070   71168 kubeadm.go:309] 
	I0401 19:39:48.867194   71168 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 19:39:48.867327   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 19:39:48.867417   71168 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0401 19:39:48.867526   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 19:39:48.867555   71168 kubeadm.go:309] 
	I0401 19:39:48.867633   71168 kubeadm.go:393] duration metric: took 7m58.404831893s to StartCluster
	I0401 19:39:48.867702   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:39:48.867764   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:39:48.922329   71168 cri.go:89] found id: ""
	I0401 19:39:48.922359   71168 logs.go:276] 0 containers: []
	W0401 19:39:48.922369   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:39:48.922377   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:39:48.922435   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:39:48.966212   71168 cri.go:89] found id: ""
	I0401 19:39:48.966235   71168 logs.go:276] 0 containers: []
	W0401 19:39:48.966243   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:39:48.966248   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:39:48.966309   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:39:49.015141   71168 cri.go:89] found id: ""
	I0401 19:39:49.015171   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.015182   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:39:49.015189   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:39:49.015249   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:39:49.053042   71168 cri.go:89] found id: ""
	I0401 19:39:49.053067   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.053077   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:39:49.053085   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:39:49.053144   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:39:49.093880   71168 cri.go:89] found id: ""
	I0401 19:39:49.093906   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.093914   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:39:49.093923   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:39:49.093976   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:39:49.129730   71168 cri.go:89] found id: ""
	I0401 19:39:49.129752   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.129760   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:39:49.129766   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:39:49.129818   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:39:49.171075   71168 cri.go:89] found id: ""
	I0401 19:39:49.171107   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.171118   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:39:49.171125   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:39:49.171204   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:39:49.208279   71168 cri.go:89] found id: ""
	I0401 19:39:49.208308   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.208319   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:39:49.208330   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:39:49.208345   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:39:49.294128   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:39:49.294148   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:39:49.294162   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:39:49.400930   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:39:49.400963   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:39:49.443111   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:39:49.443140   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:39:49.501382   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:39:49.501417   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0401 19:39:49.516418   71168 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0401 19:39:49.516461   71168 out.go:239] * 
	* 
	W0401 19:39:49.516521   71168 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 19:39:49.516591   71168 out.go:239] * 
	* 
	W0401 19:39:49.517377   71168 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 19:39:49.520389   71168 out.go:177] 
	W0401 19:39:49.521593   71168 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 19:39:49.521639   71168 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0401 19:39:49.521686   71168 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0401 19:39:49.523181   71168 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-163608 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
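The failure above (the kubelet never answering on http://localhost:10248/healthz while kubeadm waits for the control plane, ending in K8S_KUBELET_NOT_RUNNING) is normally triaged directly on the node using the commands the log output itself recommends. A minimal sketch, assuming shell access to the old-k8s-version-163608 VM via `minikube ssh`; CONTAINERID is a placeholder for whichever control-plane container turns out to be failing:

	# open a shell on the affected node
	minikube ssh -p old-k8s-version-163608

	# check whether the kubelet is running and why it may have exited
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet

	# list control-plane containers under cri-o and inspect the failing one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# if the kubelet logs point at a cgroup-driver mismatch, retry the start
	# with the flag suggested in the log output above
	minikube start -p old-k8s-version-163608 --extra-config=kubelet.cgroup-driver=systemd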
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163608 -n old-k8s-version-163608
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163608 -n old-k8s-version-163608: exit status 2 (277.143436ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-163608 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-163608 logs -n 25: (1.587695265s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p bridge-408543 sudo cat                              | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo                                  | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | containerd config dump                                 |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo                                  | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | systemctl status crio --all                            |                              |         |                |                     |                     |
	|         | --full --no-pager                                      |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo                                  | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo find                             | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo crio                             | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p bridge-408543                                       | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	| delete  | -p                                                     | disable-driver-mounts-580301 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | disable-driver-mounts-580301                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:24 UTC |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-472858             | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-472858                                   | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-882095            | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:24 UTC | 01 Apr 24 19:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-882095                                  | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:24 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-734648  | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC | 01 Apr 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC |                     |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-472858                  | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-472858                                   | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC | 01 Apr 24 19:38 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-163608        | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-882095                 | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-882095                                  | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC | 01 Apr 24 19:36 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-734648       | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:36 UTC |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-163608                              | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-163608             | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-163608                              | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 19:27:52
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 19:27:52.967684   71168 out.go:291] Setting OutFile to fd 1 ...
	I0401 19:27:52.967904   71168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:27:52.967912   71168 out.go:304] Setting ErrFile to fd 2...
	I0401 19:27:52.967916   71168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:27:52.968071   71168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 19:27:52.968601   71168 out.go:298] Setting JSON to false
	I0401 19:27:52.969458   71168 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7825,"bootTime":1711991848,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:27:52.969511   71168 start.go:139] virtualization: kvm guest
	I0401 19:27:52.972337   71168 out.go:177] * [old-k8s-version-163608] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 19:27:52.973728   71168 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 19:27:52.973774   71168 notify.go:220] Checking for updates...
	I0401 19:27:52.975050   71168 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:27:52.976498   71168 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:27:52.977880   71168 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:27:52.979140   71168 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 19:27:52.980397   71168 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 19:27:52.982116   71168 config.go:182] Loaded profile config "old-k8s-version-163608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 19:27:52.982478   71168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:27:52.982569   71168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:27:52.996903   71168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44083
	I0401 19:27:52.997230   71168 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:27:52.997702   71168 main.go:141] libmachine: Using API Version  1
	I0401 19:27:52.997724   71168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:27:52.998082   71168 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:27:52.998286   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:27:53.000287   71168 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0401 19:27:53.001714   71168 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 19:27:53.001993   71168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:27:53.002030   71168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:27:53.016155   71168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0401 19:27:53.016524   71168 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:27:53.016981   71168 main.go:141] libmachine: Using API Version  1
	I0401 19:27:53.017003   71168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:27:53.017352   71168 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:27:53.017550   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:27:53.051163   71168 out.go:177] * Using the kvm2 driver based on existing profile
	I0401 19:27:53.052475   71168 start.go:297] selected driver: kvm2
	I0401 19:27:53.052488   71168 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:27:53.052621   71168 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 19:27:53.053266   71168 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:27:53.053349   71168 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 19:27:53.067629   71168 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 19:27:53.067994   71168 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:27:53.068065   71168 cni.go:84] Creating CNI manager for ""
	I0401 19:27:53.068083   71168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:27:53.068130   71168 start.go:340] cluster config:
	{Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:27:53.068640   71168 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:27:53.070506   71168 out.go:177] * Starting "old-k8s-version-163608" primary control-plane node in "old-k8s-version-163608" cluster
	I0401 19:27:53.071686   71168 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 19:27:53.071716   71168 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0401 19:27:53.071726   71168 cache.go:56] Caching tarball of preloaded images
	I0401 19:27:53.071807   71168 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 19:27:53.071818   71168 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0401 19:27:53.071904   71168 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/config.json ...
	I0401 19:27:53.072076   71168 start.go:360] acquireMachinesLock for old-k8s-version-163608: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 19:27:57.821850   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:00.893934   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:06.973950   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:10.045903   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:16.125969   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:19.197902   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:25.277903   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:28.349963   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:34.429888   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:37.501886   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:43.581910   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:46.653871   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:52.733856   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:55.805957   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:01.885878   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:04.957919   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:11.037896   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:14.109854   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:20.189885   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:23.261848   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:29.341931   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:32.414013   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:38.493870   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:41.565912   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:47.645887   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:50.717882   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:56.797886   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:59.869824   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:05.949894   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:09.021905   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:15.101943   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:18.173911   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:24.253875   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:27.325874   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:33.405945   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:36.477889   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:39.482773   70687 start.go:364] duration metric: took 3m52.901392005s to acquireMachinesLock for "embed-certs-882095"
	I0401 19:30:39.482825   70687 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:30:39.482831   70687 fix.go:54] fixHost starting: 
	I0401 19:30:39.483206   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:30:39.483272   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:30:39.498155   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0401 19:30:39.498587   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:30:39.499013   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:30:39.499032   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:30:39.499400   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:30:39.499572   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:30:39.499760   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:30:39.501361   70687 fix.go:112] recreateIfNeeded on embed-certs-882095: state=Stopped err=<nil>
	I0401 19:30:39.501398   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	W0401 19:30:39.501552   70687 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:30:39.504183   70687 out.go:177] * Restarting existing kvm2 VM for "embed-certs-882095" ...
	I0401 19:30:39.505410   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Start
	I0401 19:30:39.505549   70687 main.go:141] libmachine: (embed-certs-882095) Ensuring networks are active...
	I0401 19:30:39.506257   70687 main.go:141] libmachine: (embed-certs-882095) Ensuring network default is active
	I0401 19:30:39.506533   70687 main.go:141] libmachine: (embed-certs-882095) Ensuring network mk-embed-certs-882095 is active
	I0401 19:30:39.506892   70687 main.go:141] libmachine: (embed-certs-882095) Getting domain xml...
	I0401 19:30:39.507632   70687 main.go:141] libmachine: (embed-certs-882095) Creating domain...
	I0401 19:30:40.693316   70687 main.go:141] libmachine: (embed-certs-882095) Waiting to get IP...
	I0401 19:30:40.694095   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:40.694551   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:40.694597   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:40.694519   71595 retry.go:31] will retry after 283.185096ms: waiting for machine to come up
	I0401 19:30:40.979028   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:40.979500   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:40.979523   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:40.979452   71595 retry.go:31] will retry after 297.637907ms: waiting for machine to come up
	I0401 19:30:41.279111   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:41.279457   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:41.279479   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:41.279411   71595 retry.go:31] will retry after 366.625363ms: waiting for machine to come up
	I0401 19:30:39.480214   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:30:39.480252   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:30:39.480557   70284 buildroot.go:166] provisioning hostname "no-preload-472858"
	I0401 19:30:39.480583   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:30:39.480787   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:30:39.482626   70284 machine.go:97] duration metric: took 4m37.415031648s to provisionDockerMachine
	I0401 19:30:39.482666   70284 fix.go:56] duration metric: took 4m37.43830515s for fixHost
	I0401 19:30:39.482676   70284 start.go:83] releasing machines lock for "no-preload-472858", held for 4m37.438344965s
	W0401 19:30:39.482704   70284 start.go:713] error starting host: provision: host is not running
	W0401 19:30:39.482794   70284 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0401 19:30:39.482805   70284 start.go:728] Will try again in 5 seconds ...
	I0401 19:30:41.647682   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:41.648045   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:41.648097   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:41.648026   71595 retry.go:31] will retry after 373.762437ms: waiting for machine to come up
	I0401 19:30:42.023500   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:42.023868   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:42.023904   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:42.023836   71595 retry.go:31] will retry after 461.430639ms: waiting for machine to come up
	I0401 19:30:42.486384   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:42.486836   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:42.486863   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:42.486784   71595 retry.go:31] will retry after 718.511667ms: waiting for machine to come up
	I0401 19:30:43.206555   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:43.206983   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:43.207006   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:43.206939   71595 retry.go:31] will retry after 907.934415ms: waiting for machine to come up
	I0401 19:30:44.115840   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:44.116223   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:44.116259   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:44.116173   71595 retry.go:31] will retry after 1.178492069s: waiting for machine to come up
	I0401 19:30:45.295704   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:45.296117   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:45.296146   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:45.296071   71595 retry.go:31] will retry after 1.188920707s: waiting for machine to come up
	I0401 19:30:44.484802   70284 start.go:360] acquireMachinesLock for no-preload-472858: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 19:30:46.486217   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:46.486777   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:46.486816   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:46.486740   71595 retry.go:31] will retry after 2.12728618s: waiting for machine to come up
	I0401 19:30:48.617124   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:48.617521   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:48.617553   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:48.617468   71595 retry.go:31] will retry after 2.867613028s: waiting for machine to come up
	I0401 19:30:51.488009   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:51.491502   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:51.491533   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:51.488532   71595 retry.go:31] will retry after 3.42206094s: waiting for machine to come up
	I0401 19:30:54.911723   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:54.912098   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:54.912127   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:54.912059   71595 retry.go:31] will retry after 4.263880792s: waiting for machine to come up
	I0401 19:31:00.450770   70962 start.go:364] duration metric: took 3m22.921307899s to acquireMachinesLock for "default-k8s-diff-port-734648"
	I0401 19:31:00.450836   70962 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:31:00.450854   70962 fix.go:54] fixHost starting: 
	I0401 19:31:00.451364   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:31:00.451401   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:31:00.467219   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45255
	I0401 19:31:00.467579   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:31:00.467998   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:31:00.468021   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:31:00.468368   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:31:00.468567   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:00.468740   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:31:00.470224   70962 fix.go:112] recreateIfNeeded on default-k8s-diff-port-734648: state=Stopped err=<nil>
	I0401 19:31:00.470251   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	W0401 19:31:00.470396   70962 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:31:00.472906   70962 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-734648" ...
	I0401 19:30:59.180302   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.180756   70687 main.go:141] libmachine: (embed-certs-882095) Found IP for machine: 192.168.39.190
	I0401 19:30:59.180778   70687 main.go:141] libmachine: (embed-certs-882095) Reserving static IP address...
	I0401 19:30:59.180794   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has current primary IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.181269   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "embed-certs-882095", mac: "52:54:00:8c:f1:a7", ip: "192.168.39.190"} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.181300   70687 main.go:141] libmachine: (embed-certs-882095) DBG | skip adding static IP to network mk-embed-certs-882095 - found existing host DHCP lease matching {name: "embed-certs-882095", mac: "52:54:00:8c:f1:a7", ip: "192.168.39.190"}
	I0401 19:30:59.181311   70687 main.go:141] libmachine: (embed-certs-882095) Reserved static IP address: 192.168.39.190
	I0401 19:30:59.181324   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Getting to WaitForSSH function...
	I0401 19:30:59.181331   70687 main.go:141] libmachine: (embed-certs-882095) Waiting for SSH to be available...
	I0401 19:30:59.183293   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.183599   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.183630   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.183756   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Using SSH client type: external
	I0401 19:30:59.183784   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa (-rw-------)
	I0401 19:30:59.183837   70687 main.go:141] libmachine: (embed-certs-882095) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:30:59.183863   70687 main.go:141] libmachine: (embed-certs-882095) DBG | About to run SSH command:
	I0401 19:30:59.183924   70687 main.go:141] libmachine: (embed-certs-882095) DBG | exit 0
	I0401 19:30:59.305707   70687 main.go:141] libmachine: (embed-certs-882095) DBG | SSH cmd err, output: <nil>: 
	I0401 19:30:59.306036   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetConfigRaw
	I0401 19:30:59.306679   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetIP
	I0401 19:30:59.309266   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.309680   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.309711   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.309938   70687 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/config.json ...
	I0401 19:30:59.310193   70687 machine.go:94] provisionDockerMachine start ...
	I0401 19:30:59.310219   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:30:59.310435   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.312549   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.312908   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.312930   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.313088   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.313247   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.313385   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.313502   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.313721   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:30:59.313894   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:30:59.313904   70687 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:30:59.418216   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:30:59.418244   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetMachineName
	I0401 19:30:59.418506   70687 buildroot.go:166] provisioning hostname "embed-certs-882095"
	I0401 19:30:59.418537   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetMachineName
	I0401 19:30:59.418703   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.421075   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.421411   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.421453   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.421534   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.421721   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.421867   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.421978   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.422122   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:30:59.422317   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:30:59.422332   70687 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-882095 && echo "embed-certs-882095" | sudo tee /etc/hostname
	I0401 19:30:59.541974   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-882095
	
	I0401 19:30:59.542006   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.544628   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.544992   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.545025   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.545193   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.545403   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.545566   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.545720   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.545906   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:30:59.546060   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:30:59.546077   70687 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-882095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-882095/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-882095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:30:59.660103   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:30:59.660134   70687 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:30:59.660161   70687 buildroot.go:174] setting up certificates
	I0401 19:30:59.660172   70687 provision.go:84] configureAuth start
	I0401 19:30:59.660193   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetMachineName
	I0401 19:30:59.660465   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetIP
	I0401 19:30:59.662943   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.663260   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.663302   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.663413   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.665390   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.665688   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.665719   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.665821   70687 provision.go:143] copyHostCerts
	I0401 19:30:59.665879   70687 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:30:59.665892   70687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:30:59.665956   70687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:30:59.666041   70687 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:30:59.666048   70687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:30:59.666071   70687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:30:59.666121   70687 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:30:59.666128   70687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:30:59.666148   70687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:30:59.666193   70687 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.embed-certs-882095 san=[127.0.0.1 192.168.39.190 embed-certs-882095 localhost minikube]
	I0401 19:30:59.761975   70687 provision.go:177] copyRemoteCerts
	I0401 19:30:59.762033   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:30:59.762058   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.764277   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.764601   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.764626   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.764832   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.765006   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.765155   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.765250   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:30:59.848158   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 19:30:59.875879   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:30:59.902573   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 19:30:59.928757   70687 provision.go:87] duration metric: took 268.570153ms to configureAuth
	I0401 19:30:59.928781   70687 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:30:59.928924   70687 config.go:182] Loaded profile config "embed-certs-882095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:30:59.928988   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.931187   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.931571   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.931600   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.931755   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.931914   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.932067   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.932176   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.932325   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:30:59.932506   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:30:59.932530   70687 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:31:00.214527   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:31:00.214552   70687 machine.go:97] duration metric: took 904.342981ms to provisionDockerMachine
	I0401 19:31:00.214563   70687 start.go:293] postStartSetup for "embed-certs-882095" (driver="kvm2")
	I0401 19:31:00.214574   70687 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:31:00.214587   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.214892   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:31:00.214920   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:31:00.217289   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.217580   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.217608   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.217828   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:31:00.218014   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.218137   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:31:00.218267   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:31:00.301379   70687 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:31:00.306211   70687 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:31:00.306231   70687 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:31:00.306284   70687 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:31:00.306377   70687 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:31:00.306459   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:31:00.316524   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:00.342848   70687 start.go:296] duration metric: took 128.272743ms for postStartSetup
	I0401 19:31:00.342887   70687 fix.go:56] duration metric: took 20.860054972s for fixHost
	I0401 19:31:00.342910   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:31:00.345429   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.345883   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.345915   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.346060   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:31:00.346288   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.346504   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.346656   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:31:00.346806   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:00.346961   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:31:00.346972   70687 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 19:31:00.450606   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999860.420567604
	
	I0401 19:31:00.450627   70687 fix.go:216] guest clock: 1711999860.420567604
	I0401 19:31:00.450635   70687 fix.go:229] Guest: 2024-04-01 19:31:00.420567604 +0000 UTC Remote: 2024-04-01 19:31:00.34289204 +0000 UTC m=+253.905703085 (delta=77.675564ms)
	I0401 19:31:00.450683   70687 fix.go:200] guest clock delta is within tolerance: 77.675564ms
	I0401 19:31:00.450693   70687 start.go:83] releasing machines lock for "embed-certs-882095", held for 20.967887876s
	I0401 19:31:00.450725   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.451011   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetIP
	I0401 19:31:00.453581   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.453959   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.453990   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.454112   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.454613   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.454788   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.454844   70687 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:31:00.454886   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:31:00.454997   70687 ssh_runner.go:195] Run: cat /version.json
	I0401 19:31:00.455019   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:31:00.457540   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.457811   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.457846   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.457878   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.458053   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:31:00.458141   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.458173   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.458217   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.458295   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:31:00.458387   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:31:00.458471   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.458556   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:31:00.458602   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:31:00.458741   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:31:00.569039   70687 ssh_runner.go:195] Run: systemctl --version
	I0401 19:31:00.575452   70687 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:31:00.728549   70687 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:31:00.735559   70687 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:31:00.735642   70687 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:31:00.756640   70687 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:31:00.756669   70687 start.go:494] detecting cgroup driver to use...
	I0401 19:31:00.756743   70687 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:31:00.776638   70687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:31:00.793006   70687 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:31:00.793063   70687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:31:00.809240   70687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:31:00.825245   70687 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:31:00.952595   70687 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:31:01.109771   70687 docker.go:233] disabling docker service ...
	I0401 19:31:01.109841   70687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:31:01.126814   70687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:31:01.141976   70687 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:31:01.301634   70687 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:31:01.440350   70687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:31:01.458083   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:31:01.479653   70687 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:31:01.479730   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.492598   70687 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:31:01.492677   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.506469   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.521981   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.534406   70687 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:31:01.546817   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.558857   70687 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.578922   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.593381   70687 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:31:01.605265   70687 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:31:01.605341   70687 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:31:01.621681   70687 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:31:01.633336   70687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:01.770373   70687 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:31:01.927892   70687 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:31:01.927952   70687 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:31:01.935046   70687 start.go:562] Will wait 60s for crictl version
	I0401 19:31:01.935101   70687 ssh_runner.go:195] Run: which crictl
	I0401 19:31:01.940563   70687 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:31:01.986956   70687 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:31:01.987030   70687 ssh_runner.go:195] Run: crio --version
	I0401 19:31:02.018567   70687 ssh_runner.go:195] Run: crio --version
	I0401 19:31:02.059077   70687 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 19:31:00.474118   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Start
	I0401 19:31:00.474275   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Ensuring networks are active...
	I0401 19:31:00.474896   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Ensuring network default is active
	I0401 19:31:00.475289   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Ensuring network mk-default-k8s-diff-port-734648 is active
	I0401 19:31:00.475650   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Getting domain xml...
	I0401 19:31:00.476263   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Creating domain...
	I0401 19:31:01.736646   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting to get IP...
	I0401 19:31:01.737490   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:01.737889   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:01.737939   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:01.737867   71724 retry.go:31] will retry after 198.445345ms: waiting for machine to come up
	I0401 19:31:01.938446   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:01.938981   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:01.939012   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:01.938936   71724 retry.go:31] will retry after 320.128802ms: waiting for machine to come up
	I0401 19:31:02.260257   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:02.260673   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:02.260703   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:02.260633   71724 retry.go:31] will retry after 357.316906ms: waiting for machine to come up
	I0401 19:31:02.060343   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetIP
	I0401 19:31:02.063382   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:02.063775   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:02.063808   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:02.064047   70687 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 19:31:02.069227   70687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:02.085344   70687 kubeadm.go:877] updating cluster {Name:embed-certs-882095 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-882095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:31:02.085451   70687 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 19:31:02.085490   70687 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:02.139383   70687 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0401 19:31:02.139454   70687 ssh_runner.go:195] Run: which lz4
	I0401 19:31:02.144331   70687 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 19:31:02.149534   70687 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:31:02.149561   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0401 19:31:03.954448   70687 crio.go:462] duration metric: took 1.810143668s to copy over tarball
	I0401 19:31:03.954523   70687 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 19:31:06.445735   70687 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.491184732s)
	I0401 19:31:06.445759   70687 crio.go:469] duration metric: took 2.491285648s to extract the tarball
	I0401 19:31:06.445765   70687 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 19:31:02.620250   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:02.620729   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:02.620760   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:02.620666   71724 retry.go:31] will retry after 520.509423ms: waiting for machine to come up
	I0401 19:31:03.142471   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:03.142902   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:03.142930   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:03.142864   71724 retry.go:31] will retry after 714.309176ms: waiting for machine to come up
	I0401 19:31:03.858594   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:03.859071   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:03.859104   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:03.859035   71724 retry.go:31] will retry after 620.601084ms: waiting for machine to come up
	I0401 19:31:04.480923   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:04.481350   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:04.481381   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:04.481313   71724 retry.go:31] will retry after 1.00716549s: waiting for machine to come up
	I0401 19:31:05.489788   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:05.490243   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:05.490273   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:05.490186   71724 retry.go:31] will retry after 1.158564029s: waiting for machine to come up
	I0401 19:31:06.650440   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:06.650969   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:06.650997   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:06.650915   71724 retry.go:31] will retry after 1.172294728s: waiting for machine to come up
	I0401 19:31:06.485475   70687 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:06.532426   70687 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:31:06.532448   70687 cache_images.go:84] Images are preloaded, skipping loading
	I0401 19:31:06.532455   70687 kubeadm.go:928] updating node { 192.168.39.190 8443 v1.29.3 crio true true} ...
	I0401 19:31:06.532544   70687 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-882095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-882095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:31:06.532611   70687 ssh_runner.go:195] Run: crio config
	I0401 19:31:06.585119   70687 cni.go:84] Creating CNI manager for ""
	I0401 19:31:06.585144   70687 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:06.585158   70687 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:31:06.585185   70687 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.190 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-882095 NodeName:embed-certs-882095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:31:06.585374   70687 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.190
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-882095"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.190
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.190"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:31:06.585473   70687 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 19:31:06.596747   70687 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:31:06.596818   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:31:06.606959   70687 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0401 19:31:06.628202   70687 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:31:06.649043   70687 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0401 19:31:06.668400   70687 ssh_runner.go:195] Run: grep 192.168.39.190	control-plane.minikube.internal$ /etc/hosts
	I0401 19:31:06.672469   70687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:06.685666   70687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:06.806186   70687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:31:06.823315   70687 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095 for IP: 192.168.39.190
	I0401 19:31:06.823355   70687 certs.go:194] generating shared ca certs ...
	I0401 19:31:06.823376   70687 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:06.823569   70687 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:31:06.823645   70687 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:31:06.823659   70687 certs.go:256] generating profile certs ...
	I0401 19:31:06.823764   70687 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/client.key
	I0401 19:31:06.823872   70687 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/apiserver.key.c07921ce
	I0401 19:31:06.823945   70687 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/proxy-client.key
	I0401 19:31:06.824092   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:31:06.824132   70687 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:31:06.824145   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:31:06.824183   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:31:06.824223   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:31:06.824254   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:31:06.824309   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:06.824942   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:31:06.867274   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:31:06.907288   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:31:06.948328   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:31:06.975058   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 19:31:07.003183   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 19:31:07.032030   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:31:07.061612   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:31:07.090149   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:31:07.116885   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:31:07.143296   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:31:07.169420   70687 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:31:07.188908   70687 ssh_runner.go:195] Run: openssl version
	I0401 19:31:07.195591   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:31:07.211583   70687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:31:07.217049   70687 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:31:07.217110   70687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:31:07.223751   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:31:07.237393   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:31:07.250523   70687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:07.255928   70687 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:07.255981   70687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:07.262373   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:31:07.275174   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:31:07.288039   70687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:31:07.293339   70687 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:31:07.293392   70687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:31:07.299983   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:31:07.313120   70687 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:31:07.318425   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:31:07.325172   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:31:07.331674   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:31:07.338299   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:31:07.344896   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:31:07.351424   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 19:31:07.357898   70687 kubeadm.go:391] StartCluster: {Name:embed-certs-882095 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:embed-certs-882095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:31:07.357995   70687 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:31:07.358047   70687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:07.401268   70687 cri.go:89] found id: ""
	I0401 19:31:07.401326   70687 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:31:07.414232   70687 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:31:07.414255   70687 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:31:07.414262   70687 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:31:07.414308   70687 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:31:07.425972   70687 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:31:07.426977   70687 kubeconfig.go:125] found "embed-certs-882095" server: "https://192.168.39.190:8443"
	I0401 19:31:07.428767   70687 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:31:07.440164   70687 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.190
	I0401 19:31:07.440191   70687 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:31:07.440201   70687 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:31:07.440244   70687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:07.484303   70687 cri.go:89] found id: ""
	I0401 19:31:07.484407   70687 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:31:07.505186   70687 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:31:07.518316   70687 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:31:07.518342   70687 kubeadm.go:156] found existing configuration files:
	
	I0401 19:31:07.518393   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:31:07.530759   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:31:07.530832   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:31:07.542799   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:31:07.553972   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:31:07.554031   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:31:07.565324   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:31:07.576244   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:31:07.576318   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:31:07.588874   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:31:07.600440   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:31:07.600526   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:31:07.611963   70687 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:31:07.623225   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:07.740800   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:09.050887   70687 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.310046744s)
	I0401 19:31:09.050920   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:09.266170   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:09.336585   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:09.422513   70687 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:31:09.422594   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:09.923709   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:10.422822   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:10.922892   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:10.946590   70687 api_server.go:72] duration metric: took 1.524076694s to wait for apiserver process to appear ...
	I0401 19:31:10.946627   70687 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:31:10.946650   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:07.825239   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:07.825629   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:07.825676   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:07.825586   71724 retry.go:31] will retry after 1.412332675s: waiting for machine to come up
	I0401 19:31:09.240010   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:09.240385   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:09.240416   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:09.240327   71724 retry.go:31] will retry after 2.601344034s: waiting for machine to come up
	I0401 19:31:11.843464   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:11.843948   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:11.843976   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:11.843900   71724 retry.go:31] will retry after 3.297720076s: waiting for machine to come up
	I0401 19:31:13.350274   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:31:13.350309   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:31:13.350325   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:13.383494   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:13.383543   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:13.447744   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:13.452796   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:13.452852   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:13.946971   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:13.951522   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:13.951554   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:14.447104   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:14.455165   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:14.455204   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:14.947278   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:14.951487   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I0401 19:31:14.958647   70687 api_server.go:141] control plane version: v1.29.3
	I0401 19:31:14.958670   70687 api_server.go:131] duration metric: took 4.012036456s to wait for apiserver health ...
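[editor's note] The 403 / 500 / 200 progression above is minikube polling the apiserver's /healthz endpoint anonymously until every post-start hook reports ok. The same probe can be reproduced by hand against the embed-certs node (IP taken from the log above; commands are standard curl, not part of the original log):

	# Anonymous probe; -k skips certificate verification. A 403 body still proves the
	# apiserver is answering; a 500 lists the post-start hooks that have not finished;
	# a plain "ok" with HTTP 200 is what minikube waits for.
	curl -sk https://192.168.39.190:8443/healthz
	# Per-check detail even on success (verbose mode of the same endpoint):
	curl -sk "https://192.168.39.190:8443/healthz?verbose"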
	I0401 19:31:14.958687   70687 cni.go:84] Creating CNI manager for ""
	I0401 19:31:14.958693   70687 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:14.960494   70687 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:31:14.961899   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:31:14.973709   70687 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
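[editor's note] The 457-byte 1-k8s.conflist copied above is minikube's bridge CNI configuration; its exact contents are not shown in the log. Purely as an illustrative sketch (field values assumed, not the real file), a bridge + host-local conflist of the same general shape could be written like this:

	# Illustrative only: NOT the exact file minikube generates.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF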
	I0401 19:31:14.998105   70687 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:31:15.008481   70687 system_pods.go:59] 8 kube-system pods found
	I0401 19:31:15.008525   70687 system_pods.go:61] "coredns-76f75df574-nvcq4" [663bd69b-6da8-4a66-b20f-ea1eb507096a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:31:15.008536   70687 system_pods.go:61] "etcd-embed-certs-882095" [2b56dddc-b309-4965-811e-459c59b86dac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0401 19:31:15.008551   70687 system_pods.go:61] "kube-apiserver-embed-certs-882095" [2e376ce4-504c-441a-baf8-0184a17e5bf4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0401 19:31:15.008561   70687 system_pods.go:61] "kube-controller-manager-embed-certs-882095" [e6bf3b2f-289b-4719-86f7-43e873fe8d85] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0401 19:31:15.008571   70687 system_pods.go:61] "kube-proxy-td6jk" [275536ff-4ec0-4d2c-8658-57aadda367b2] Running
	I0401 19:31:15.008580   70687 system_pods.go:61] "kube-scheduler-embed-certs-882095" [4551eb2a-9560-4d4f-aac0-9cfe6c790649] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0401 19:31:15.008591   70687 system_pods.go:61] "metrics-server-57f55c9bc5-g6z6c" [dc8aee6a-f101-4109-a259-351fddbddd44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:31:15.008599   70687 system_pods.go:61] "storage-provisioner" [82a76833-c874-45d8-8ba7-1a483c15a997] Running
	I0401 19:31:15.008609   70687 system_pods.go:74] duration metric: took 10.480741ms to wait for pod list to return data ...
	I0401 19:31:15.008622   70687 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:31:15.012256   70687 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:31:15.012289   70687 node_conditions.go:123] node cpu capacity is 2
	I0401 19:31:15.012303   70687 node_conditions.go:105] duration metric: took 3.672159ms to run NodePressure ...
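[editor's note] The pod listing and NodePressure verification above correspond to checks that can be run manually against the same cluster; the kubectl context name below is assumed to match the minikube profile (embed-certs-882095) and these commands are not part of the original log:

	# System pods minikube is waiting on.
	kubectl --context embed-certs-882095 get pods -n kube-system
	# Node conditions and capacity that node_conditions.go inspects (CPU and ephemeral storage).
	kubectl --context embed-certs-882095 describe node embed-certs-882095 | sed -n '/Conditions:/,/Addresses:/p'
	kubectl --context embed-certs-882095 get node embed-certs-882095 -o jsonpath='{.status.capacity}'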
	I0401 19:31:15.012327   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:15.288861   70687 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0401 19:31:15.293731   70687 kubeadm.go:733] kubelet initialised
	I0401 19:31:15.293750   70687 kubeadm.go:734] duration metric: took 4.868595ms waiting for restarted kubelet to initialise ...
	I0401 19:31:15.293758   70687 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:31:15.298657   70687 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-nvcq4" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.304795   70687 pod_ready.go:97] node "embed-certs-882095" hosting pod "coredns-76f75df574-nvcq4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.304813   70687 pod_ready.go:81] duration metric: took 6.134849ms for pod "coredns-76f75df574-nvcq4" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:15.304822   70687 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-882095" hosting pod "coredns-76f75df574-nvcq4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.304827   70687 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.309184   70687 pod_ready.go:97] node "embed-certs-882095" hosting pod "etcd-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.309204   70687 pod_ready.go:81] duration metric: took 4.369325ms for pod "etcd-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:15.309213   70687 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-882095" hosting pod "etcd-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.309221   70687 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.313737   70687 pod_ready.go:97] node "embed-certs-882095" hosting pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.313755   70687 pod_ready.go:81] duration metric: took 4.525801ms for pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:15.313764   70687 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-882095" hosting pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.313771   70687 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.401827   70687 pod_ready.go:97] node "embed-certs-882095" hosting pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.401857   70687 pod_ready.go:81] duration metric: took 88.077915ms for pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:15.401871   70687 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-882095" hosting pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.401878   70687 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-td6jk" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.802462   70687 pod_ready.go:92] pod "kube-proxy-td6jk" in "kube-system" namespace has status "Ready":"True"
	I0401 19:31:15.802484   70687 pod_ready.go:81] duration metric: took 400.599194ms for pod "kube-proxy-td6jk" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.802494   70687 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.142653   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:15.143000   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:15.143062   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:15.142972   71724 retry.go:31] will retry after 3.764823961s: waiting for machine to come up
	I0401 19:31:20.350903   71168 start.go:364] duration metric: took 3m27.278785625s to acquireMachinesLock for "old-k8s-version-163608"
	I0401 19:31:20.350993   71168 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:31:20.351010   71168 fix.go:54] fixHost starting: 
	I0401 19:31:20.351490   71168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:31:20.351571   71168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:31:20.368575   71168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38247
	I0401 19:31:20.368936   71168 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:31:20.369448   71168 main.go:141] libmachine: Using API Version  1
	I0401 19:31:20.369469   71168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:31:20.369822   71168 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:31:20.370033   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:20.370195   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetState
	I0401 19:31:20.371625   71168 fix.go:112] recreateIfNeeded on old-k8s-version-163608: state=Stopped err=<nil>
	I0401 19:31:20.371681   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	W0401 19:31:20.371842   71168 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:31:20.374328   71168 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-163608" ...
	I0401 19:31:17.809256   70687 pod_ready.go:102] pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:19.809947   70687 pod_ready.go:102] pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:20.818455   70687 pod_ready.go:92] pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:31:20.818481   70687 pod_ready.go:81] duration metric: took 5.015979611s for pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:20.818493   70687 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:18.910798   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:18.911231   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Found IP for machine: 192.168.61.145
	I0401 19:31:18.911266   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has current primary IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:18.911277   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Reserving static IP address...
	I0401 19:31:18.911761   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-734648", mac: "52:54:00:49:dc:50", ip: "192.168.61.145"} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:18.911795   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | skip adding static IP to network mk-default-k8s-diff-port-734648 - found existing host DHCP lease matching {name: "default-k8s-diff-port-734648", mac: "52:54:00:49:dc:50", ip: "192.168.61.145"}
	I0401 19:31:18.911819   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Reserved static IP address: 192.168.61.145
	I0401 19:31:18.911835   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for SSH to be available...
	I0401 19:31:18.911869   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Getting to WaitForSSH function...
	I0401 19:31:18.913767   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:18.914054   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:18.914082   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:18.914207   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Using SSH client type: external
	I0401 19:31:18.914236   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa (-rw-------)
	I0401 19:31:18.914278   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.145 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:31:18.914300   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | About to run SSH command:
	I0401 19:31:18.914313   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | exit 0
	I0401 19:31:19.037713   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | SSH cmd err, output: <nil>: 
	I0401 19:31:19.038080   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetConfigRaw
	I0401 19:31:19.038767   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetIP
	I0401 19:31:19.042390   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.043249   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.043311   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.043949   70962 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/config.json ...
	I0401 19:31:19.044504   70962 machine.go:94] provisionDockerMachine start ...
	I0401 19:31:19.044554   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:19.044916   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.047637   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.047908   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.047941   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.048088   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.048265   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.048408   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.048522   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.048636   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:19.048790   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:19.048800   70962 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:31:19.154415   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:31:19.154444   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetMachineName
	I0401 19:31:19.154683   70962 buildroot.go:166] provisioning hostname "default-k8s-diff-port-734648"
	I0401 19:31:19.154713   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetMachineName
	I0401 19:31:19.154887   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.157442   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.157867   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.157896   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.158041   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.158237   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.158402   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.158540   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.158713   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:19.158905   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:19.158920   70962 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-734648 && echo "default-k8s-diff-port-734648" | sudo tee /etc/hostname
	I0401 19:31:19.276129   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-734648
	
	I0401 19:31:19.276160   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.278657   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.278918   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.278940   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.279158   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.279353   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.279523   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.279671   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.279831   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:19.280057   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:19.280082   70962 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-734648' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-734648/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-734648' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:31:19.395730   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:31:19.395755   70962 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:31:19.395779   70962 buildroot.go:174] setting up certificates
	I0401 19:31:19.395788   70962 provision.go:84] configureAuth start
	I0401 19:31:19.395798   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetMachineName
	I0401 19:31:19.396046   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetIP
	I0401 19:31:19.398668   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.399036   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.399065   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.399219   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.401309   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.401611   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.401656   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.401750   70962 provision.go:143] copyHostCerts
	I0401 19:31:19.401812   70962 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:31:19.401822   70962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:31:19.401876   70962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:31:19.401978   70962 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:31:19.401988   70962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:31:19.402015   70962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:31:19.402121   70962 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:31:19.402129   70962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:31:19.402147   70962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:31:19.402205   70962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-734648 san=[127.0.0.1 192.168.61.145 default-k8s-diff-port-734648 localhost minikube]
	I0401 19:31:19.655203   70962 provision.go:177] copyRemoteCerts
	I0401 19:31:19.655256   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:31:19.655281   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.658194   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.658512   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.658540   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.658693   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.658896   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.659039   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.659187   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:31:19.743131   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:31:19.771327   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0401 19:31:19.797350   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 19:31:19.824244   70962 provision.go:87] duration metric: took 428.444366ms to configureAuth
	I0401 19:31:19.824274   70962 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:31:19.824473   70962 config.go:182] Loaded profile config "default-k8s-diff-port-734648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:31:19.824563   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.827376   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.827798   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.827838   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.827984   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.828184   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.828352   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.828496   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.828653   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:19.828827   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:19.828865   70962 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:31:20.107291   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:31:20.107320   70962 machine.go:97] duration metric: took 1.062788118s to provisionDockerMachine
	I0401 19:31:20.107333   70962 start.go:293] postStartSetup for "default-k8s-diff-port-734648" (driver="kvm2")
	I0401 19:31:20.107347   70962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:31:20.107369   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.107671   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:31:20.107693   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:20.110380   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.110739   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.110780   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.110895   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:20.111075   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.111218   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:20.111353   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:31:20.193908   70962 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:31:20.198544   70962 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:31:20.198572   70962 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:31:20.198639   70962 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:31:20.198704   70962 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:31:20.198788   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:31:20.209866   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:20.240362   70962 start.go:296] duration metric: took 133.016405ms for postStartSetup
	I0401 19:31:20.240399   70962 fix.go:56] duration metric: took 19.789546756s for fixHost
	I0401 19:31:20.240418   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:20.243069   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.243448   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.243479   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.243657   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:20.243865   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.244061   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.244209   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:20.244399   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:20.244600   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:20.244616   70962 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 19:31:20.350752   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999880.326440079
	
	I0401 19:31:20.350779   70962 fix.go:216] guest clock: 1711999880.326440079
	I0401 19:31:20.350789   70962 fix.go:229] Guest: 2024-04-01 19:31:20.326440079 +0000 UTC Remote: 2024-04-01 19:31:20.240403038 +0000 UTC m=+222.858311555 (delta=86.037041ms)
	I0401 19:31:20.350808   70962 fix.go:200] guest clock delta is within tolerance: 86.037041ms
	I0401 19:31:20.350812   70962 start.go:83] releasing machines lock for "default-k8s-diff-port-734648", held for 19.899997669s
	I0401 19:31:20.350838   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.351118   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetIP
	I0401 19:31:20.354040   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.354395   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.354413   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.354595   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.355068   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.355238   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.355317   70962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:31:20.355356   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:20.355530   70962 ssh_runner.go:195] Run: cat /version.json
	I0401 19:31:20.355557   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:20.357970   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.358372   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.358405   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.358430   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.358585   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:20.358766   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.358807   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.358834   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.358957   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:20.359013   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:20.359150   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.359203   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:31:20.359292   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:20.359439   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:31:20.466422   70962 ssh_runner.go:195] Run: systemctl --version
	I0401 19:31:20.472949   70962 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:31:20.626069   70962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:31:20.633425   70962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:31:20.633497   70962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:31:20.658883   70962 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:31:20.658910   70962 start.go:494] detecting cgroup driver to use...
	I0401 19:31:20.658979   70962 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:31:20.686302   70962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:31:20.704507   70962 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:31:20.704583   70962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:31:20.725216   70962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:31:20.740635   70962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:31:20.864184   70962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:31:21.010752   70962 docker.go:233] disabling docker service ...
	I0401 19:31:21.010821   70962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:31:21.030718   70962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:31:21.047787   70962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:31:21.194455   70962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:31:21.337547   70962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:31:21.357144   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:31:21.381709   70962 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:31:21.381782   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.393160   70962 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:31:21.393229   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.405047   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.416810   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.428947   70962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:31:21.440886   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.452872   70962 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.473096   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.484427   70962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:31:21.494121   70962 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:31:21.494190   70962 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:31:21.509859   70962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:31:21.520329   70962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:21.671075   70962 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:31:21.818822   70962 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:31:21.818892   70962 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:31:21.825189   70962 start.go:562] Will wait 60s for crictl version
	I0401 19:31:21.825260   70962 ssh_runner.go:195] Run: which crictl
	I0401 19:31:21.830058   70962 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:31:21.869617   70962 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:31:21.869721   70962 ssh_runner.go:195] Run: crio --version
	I0401 19:31:21.906091   70962 ssh_runner.go:195] Run: crio --version
	I0401 19:31:21.946240   70962 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 19:31:21.947653   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetIP
	I0401 19:31:21.950691   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:21.951156   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:21.951201   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:21.951445   70962 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0401 19:31:21.959376   70962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:21.974226   70962 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-734648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.29.3 ClusterName:default-k8s-diff-port-734648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.145 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:31:21.974348   70962 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 19:31:21.974426   70962 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:22.011856   70962 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0401 19:31:22.011930   70962 ssh_runner.go:195] Run: which lz4
	I0401 19:31:22.016672   70962 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 19:31:22.021864   70962 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:31:22.021893   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0401 19:31:20.375755   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .Start
	I0401 19:31:20.375932   71168 main.go:141] libmachine: (old-k8s-version-163608) Ensuring networks are active...
	I0401 19:31:20.376713   71168 main.go:141] libmachine: (old-k8s-version-163608) Ensuring network default is active
	I0401 19:31:20.377858   71168 main.go:141] libmachine: (old-k8s-version-163608) Ensuring network mk-old-k8s-version-163608 is active
	I0401 19:31:20.378278   71168 main.go:141] libmachine: (old-k8s-version-163608) Getting domain xml...
	I0401 19:31:20.378972   71168 main.go:141] libmachine: (old-k8s-version-163608) Creating domain...
	I0401 19:31:21.643237   71168 main.go:141] libmachine: (old-k8s-version-163608) Waiting to get IP...
	I0401 19:31:21.644082   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:21.644468   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:21.644535   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:21.644446   71902 retry.go:31] will retry after 208.251344ms: waiting for machine to come up
	I0401 19:31:21.854070   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:21.854545   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:21.854593   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:21.854527   71902 retry.go:31] will retry after 240.466964ms: waiting for machine to come up
	I0401 19:31:22.096940   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:22.097447   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:22.097470   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:22.097405   71902 retry.go:31] will retry after 480.217755ms: waiting for machine to come up
	I0401 19:31:22.579111   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:22.579596   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:22.579628   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:22.579518   71902 retry.go:31] will retry after 581.713487ms: waiting for machine to come up
	I0401 19:31:22.826723   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:25.326165   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:23.813558   70962 crio.go:462] duration metric: took 1.796902191s to copy over tarball
	I0401 19:31:23.813619   70962 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 19:31:26.447802   70962 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.634145928s)
	I0401 19:31:26.447840   70962 crio.go:469] duration metric: took 2.634257029s to extract the tarball
	I0401 19:31:26.447849   70962 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 19:31:26.488228   70962 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:26.535741   70962 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:31:26.535770   70962 cache_images.go:84] Images are preloaded, skipping loading
	I0401 19:31:26.535780   70962 kubeadm.go:928] updating node { 192.168.61.145 8444 v1.29.3 crio true true} ...
	I0401 19:31:26.535931   70962 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-734648 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-734648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:31:26.536019   70962 ssh_runner.go:195] Run: crio config
	I0401 19:31:26.590211   70962 cni.go:84] Creating CNI manager for ""
	I0401 19:31:26.590239   70962 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:26.590254   70962 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:31:26.590282   70962 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.145 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-734648 NodeName:default-k8s-diff-port-734648 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:31:26.590459   70962 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.145
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-734648"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:31:26.590533   70962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 19:31:26.602186   70962 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:31:26.602264   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:31:26.616193   70962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0401 19:31:26.636634   70962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:31:26.660339   70962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0401 19:31:26.687935   70962 ssh_runner.go:195] Run: grep 192.168.61.145	control-plane.minikube.internal$ /etc/hosts
	I0401 19:31:26.693966   70962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.145	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:26.709876   70962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:26.854990   70962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:31:26.877303   70962 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648 for IP: 192.168.61.145
	I0401 19:31:26.877327   70962 certs.go:194] generating shared ca certs ...
	I0401 19:31:26.877350   70962 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:26.877578   70962 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:31:26.877621   70962 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:31:26.877637   70962 certs.go:256] generating profile certs ...
	I0401 19:31:26.877777   70962 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/client.key
	I0401 19:31:26.877864   70962 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/apiserver.key.e4671486
	I0401 19:31:26.877909   70962 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/proxy-client.key
	I0401 19:31:26.878007   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:31:26.878049   70962 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:31:26.878062   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:31:26.878094   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:31:26.878128   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:31:26.878153   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:31:26.878203   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:26.879101   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:31:26.917600   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:31:26.968606   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:31:27.012527   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:31:27.078525   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 19:31:27.125195   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:31:27.157190   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:31:27.185434   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:31:27.215215   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:31:27.246938   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:31:27.277210   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:31:27.307099   70962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:31:27.326664   70962 ssh_runner.go:195] Run: openssl version
	I0401 19:31:27.333292   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:31:27.344724   70962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:27.350096   70962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:27.350146   70962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:27.356421   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:31:27.368124   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:31:27.379331   70962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:31:27.384465   70962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:31:27.384518   70962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:31:27.391192   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:31:27.403898   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:31:27.418676   70962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:31:27.424254   70962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:31:27.424308   70962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:31:23.163331   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:23.163803   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:23.163838   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:23.163770   71902 retry.go:31] will retry after 737.12898ms: waiting for machine to come up
	I0401 19:31:23.902739   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:23.903192   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:23.903222   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:23.903139   71902 retry.go:31] will retry after 718.826495ms: waiting for machine to come up
	I0401 19:31:24.624169   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:24.624620   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:24.624648   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:24.624574   71902 retry.go:31] will retry after 1.020701715s: waiting for machine to come up
	I0401 19:31:25.647470   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:25.647957   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:25.647988   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:25.647921   71902 retry.go:31] will retry after 1.318891306s: waiting for machine to come up
	I0401 19:31:26.968134   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:26.968588   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:26.968613   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:26.968535   71902 retry.go:31] will retry after 1.465864517s: waiting for machine to come up
	I0401 19:31:27.752110   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:29.827324   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:27.431798   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:31:27.749367   70962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:31:27.757123   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:31:27.768626   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:31:27.778119   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:31:27.786893   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:31:27.797129   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:31:27.804804   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 19:31:27.813194   70962 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-734648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.3 ClusterName:default-k8s-diff-port-734648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.145 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:31:27.813274   70962 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:31:27.813325   70962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:27.864565   70962 cri.go:89] found id: ""
	I0401 19:31:27.864637   70962 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:31:27.876745   70962 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:31:27.876789   70962 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:31:27.876797   70962 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:31:27.876862   70962 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:31:27.887494   70962 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:31:27.888632   70962 kubeconfig.go:125] found "default-k8s-diff-port-734648" server: "https://192.168.61.145:8444"
	I0401 19:31:27.890729   70962 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:31:27.900847   70962 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.145
	I0401 19:31:27.900877   70962 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:31:27.900889   70962 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:31:27.900936   70962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:27.952874   70962 cri.go:89] found id: ""
	I0401 19:31:27.952954   70962 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:31:27.971647   70962 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:31:27.982541   70962 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:31:27.982576   70962 kubeadm.go:156] found existing configuration files:
	
	I0401 19:31:27.982612   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0401 19:31:27.992341   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:31:27.992414   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:31:28.002685   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0401 19:31:28.012599   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:31:28.012658   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:31:28.022731   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0401 19:31:28.033584   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:31:28.033661   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:31:28.044940   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0401 19:31:28.055832   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:31:28.055886   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:31:28.066919   70962 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:31:28.078715   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:28.212251   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:29.214190   70962 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.001904972s)
	I0401 19:31:29.214224   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:29.444484   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:29.536112   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:29.664087   70962 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:31:29.664201   70962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:30.165117   70962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:30.664872   70962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:30.707251   70962 api_server.go:72] duration metric: took 1.04316448s to wait for apiserver process to appear ...
	I0401 19:31:30.707280   70962 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:31:30.707297   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:30.707881   70962 api_server.go:269] stopped: https://192.168.61.145:8444/healthz: Get "https://192.168.61.145:8444/healthz": dial tcp 192.168.61.145:8444: connect: connection refused
	I0401 19:31:31.207434   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:28.435890   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:28.436304   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:28.436334   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:28.436255   71902 retry.go:31] will retry after 2.062597688s: waiting for machine to come up
	I0401 19:31:30.500523   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:30.500999   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:30.501027   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:30.500954   71902 retry.go:31] will retry after 2.068480339s: waiting for machine to come up
	I0401 19:31:32.571229   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:32.571603   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:32.571635   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:32.571550   71902 retry.go:31] will retry after 3.355965883s: waiting for machine to come up
	I0401 19:31:33.707613   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:31:33.707647   70962 api_server.go:103] status: https://192.168.61.145:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:31:33.707663   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:33.728509   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:31:33.728582   70962 api_server.go:103] status: https://192.168.61.145:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:31:34.208163   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:34.212754   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:34.212784   70962 api_server.go:103] status: https://192.168.61.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:34.708282   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:34.715268   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:34.715294   70962 api_server.go:103] status: https://192.168.61.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:35.207460   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:35.212542   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 200:
	ok
	I0401 19:31:35.219264   70962 api_server.go:141] control plane version: v1.29.3
	I0401 19:31:35.219287   70962 api_server.go:131] duration metric: took 4.512000334s to wait for apiserver health ...
	I0401 19:31:35.219294   70962 cni.go:84] Creating CNI manager for ""
	I0401 19:31:35.219309   70962 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:35.221080   70962 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:31:31.828694   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:34.325740   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:35.222800   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:31:35.238787   70962 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0401 19:31:35.286002   70962 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:31:35.302379   70962 system_pods.go:59] 8 kube-system pods found
	I0401 19:31:35.302420   70962 system_pods.go:61] "coredns-76f75df574-tdwrh" [c1d3b591-fa81-46dd-847c-ffdfc22937fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:31:35.302437   70962 system_pods.go:61] "etcd-default-k8s-diff-port-734648" [e977793d-ec92-40b8-a0fe-1b2400fb1af6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0401 19:31:35.302447   70962 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-734648" [2d0eae31-35c3-40aa-9d28-a2f51849c15d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0401 19:31:35.302469   70962 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-734648" [cded1171-2e1b-4d70-9f26-d1d3a6558da1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0401 19:31:35.302483   70962 system_pods.go:61] "kube-proxy-mn546" [f9b6366f-7095-418c-ba24-529c0555f438] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:31:35.302493   70962 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-734648" [c1518ece-8cbf-49fe-9091-15b38dc1bd62] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0401 19:31:35.302504   70962 system_pods.go:61] "metrics-server-57f55c9bc5-g7mg2" [d1ede79a-a7e6-42bd-a799-197ffc7c7939] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:31:35.302519   70962 system_pods.go:61] "storage-provisioner" [bd55f9c8-580c-4eb1-adbc-020d5bbedce9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:31:35.302532   70962 system_pods.go:74] duration metric: took 16.508651ms to wait for pod list to return data ...
	I0401 19:31:35.302545   70962 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:31:35.305826   70962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:31:35.305862   70962 node_conditions.go:123] node cpu capacity is 2
	I0401 19:31:35.305876   70962 node_conditions.go:105] duration metric: took 3.322577ms to run NodePressure ...
	I0401 19:31:35.305895   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:35.603225   70962 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0401 19:31:35.608584   70962 kubeadm.go:733] kubelet initialised
	I0401 19:31:35.608611   70962 kubeadm.go:734] duration metric: took 5.361549ms waiting for restarted kubelet to initialise ...
	I0401 19:31:35.608620   70962 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:31:35.615252   70962 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-tdwrh" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.620605   70962 pod_ready.go:97] node "default-k8s-diff-port-734648" hosting pod "coredns-76f75df574-tdwrh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.620627   70962 pod_ready.go:81] duration metric: took 5.353257ms for pod "coredns-76f75df574-tdwrh" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:35.620634   70962 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-734648" hosting pod "coredns-76f75df574-tdwrh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.620641   70962 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.625280   70962 pod_ready.go:97] node "default-k8s-diff-port-734648" hosting pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.625297   70962 pod_ready.go:81] duration metric: took 4.646748ms for pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:35.625311   70962 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-734648" hosting pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.625325   70962 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.630150   70962 pod_ready.go:97] node "default-k8s-diff-port-734648" hosting pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.630170   70962 pod_ready.go:81] duration metric: took 4.83409ms for pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:35.630178   70962 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-734648" hosting pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.630184   70962 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.693865   70962 pod_ready.go:97] node "default-k8s-diff-port-734648" hosting pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.693890   70962 pod_ready.go:81] duration metric: took 63.697397ms for pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:35.693901   70962 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-734648" hosting pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.693908   70962 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mn546" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:36.090904   70962 pod_ready.go:92] pod "kube-proxy-mn546" in "kube-system" namespace has status "Ready":"True"
	I0401 19:31:36.090928   70962 pod_ready.go:81] duration metric: took 397.013717ms for pod "kube-proxy-mn546" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:36.090938   70962 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.929498   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:35.930010   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:35.930042   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:35.929963   71902 retry.go:31] will retry after 3.806123644s: waiting for machine to come up
	I0401 19:31:41.203538   70284 start.go:364] duration metric: took 56.718693538s to acquireMachinesLock for "no-preload-472858"
	I0401 19:31:41.203592   70284 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:31:41.203607   70284 fix.go:54] fixHost starting: 
	I0401 19:31:41.204096   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:31:41.204143   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:31:41.221574   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42471
	I0401 19:31:41.222045   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:31:41.222527   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:31:41.222547   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:31:41.222856   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:31:41.223051   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:31:41.223209   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:31:41.224801   70284 fix.go:112] recreateIfNeeded on no-preload-472858: state=Stopped err=<nil>
	I0401 19:31:41.224827   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	W0401 19:31:41.224979   70284 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:31:41.226937   70284 out.go:177] * Restarting existing kvm2 VM for "no-preload-472858" ...
	I0401 19:31:36.824790   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:38.824976   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:40.827269   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:41.228315   70284 main.go:141] libmachine: (no-preload-472858) Calling .Start
	I0401 19:31:41.228509   70284 main.go:141] libmachine: (no-preload-472858) Ensuring networks are active...
	I0401 19:31:41.229206   70284 main.go:141] libmachine: (no-preload-472858) Ensuring network default is active
	I0401 19:31:41.229603   70284 main.go:141] libmachine: (no-preload-472858) Ensuring network mk-no-preload-472858 is active
	I0401 19:31:41.229999   70284 main.go:141] libmachine: (no-preload-472858) Getting domain xml...
	I0401 19:31:41.230682   70284 main.go:141] libmachine: (no-preload-472858) Creating domain...
	I0401 19:31:38.097417   70962 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:40.098187   70962 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:42.099891   70962 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:39.739700   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.740313   71168 main.go:141] libmachine: (old-k8s-version-163608) Found IP for machine: 192.168.50.106
	I0401 19:31:39.740369   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has current primary IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.740386   71168 main.go:141] libmachine: (old-k8s-version-163608) Reserving static IP address...
	I0401 19:31:39.740767   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "old-k8s-version-163608", mac: "52:54:00:fe:1b:e7", ip: "192.168.50.106"} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.740798   71168 main.go:141] libmachine: (old-k8s-version-163608) Reserved static IP address: 192.168.50.106
	I0401 19:31:39.740818   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | skip adding static IP to network mk-old-k8s-version-163608 - found existing host DHCP lease matching {name: "old-k8s-version-163608", mac: "52:54:00:fe:1b:e7", ip: "192.168.50.106"}
	I0401 19:31:39.740839   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | Getting to WaitForSSH function...
	I0401 19:31:39.740857   71168 main.go:141] libmachine: (old-k8s-version-163608) Waiting for SSH to be available...
	I0401 19:31:39.743023   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.743417   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.743447   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.743589   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | Using SSH client type: external
	I0401 19:31:39.743614   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa (-rw-------)
	I0401 19:31:39.743648   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:31:39.743662   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | About to run SSH command:
	I0401 19:31:39.743676   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | exit 0
	I0401 19:31:39.877699   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | SSH cmd err, output: <nil>: 
	I0401 19:31:39.878044   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetConfigRaw
	I0401 19:31:39.878611   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:39.880733   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.881074   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.881107   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.881352   71168 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/config.json ...
	I0401 19:31:39.881510   71168 machine.go:94] provisionDockerMachine start ...
	I0401 19:31:39.881529   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:39.881766   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:39.883980   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.884318   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.884360   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.884483   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:39.884675   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.884877   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.885029   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:39.885175   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:39.885339   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:39.885349   71168 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:31:39.994935   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:31:39.994971   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:31:39.995213   71168 buildroot.go:166] provisioning hostname "old-k8s-version-163608"
	I0401 19:31:39.995241   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:31:39.995472   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:39.998179   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.998490   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.998525   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.998656   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:39.998805   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.998949   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.999054   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:39.999183   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:39.999372   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:39.999390   71168 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-163608 && echo "old-k8s-version-163608" | sudo tee /etc/hostname
	I0401 19:31:40.128852   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-163608
	
	I0401 19:31:40.128880   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.131508   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.131817   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.131874   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.131987   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.132188   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.132365   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.132503   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.132693   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:40.132890   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:40.132908   71168 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-163608' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-163608/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-163608' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:31:40.252693   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:31:40.252727   71168 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:31:40.252749   71168 buildroot.go:174] setting up certificates
	I0401 19:31:40.252759   71168 provision.go:84] configureAuth start
	I0401 19:31:40.252767   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:31:40.253030   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:40.255827   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.256183   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.256210   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.256418   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.259041   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.259388   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.259418   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.259540   71168 provision.go:143] copyHostCerts
	I0401 19:31:40.259592   71168 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:31:40.259602   71168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:31:40.259654   71168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:31:40.259745   71168 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:31:40.259754   71168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:31:40.259773   71168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:31:40.259822   71168 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:31:40.259830   71168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:31:40.259846   71168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:31:40.259891   71168 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-163608 san=[127.0.0.1 192.168.50.106 localhost minikube old-k8s-version-163608]
	I0401 19:31:40.465177   71168 provision.go:177] copyRemoteCerts
	I0401 19:31:40.465241   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:31:40.465265   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.467676   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.468040   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.468070   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.468272   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.468456   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.468622   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.468767   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:40.557764   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:31:40.585326   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0401 19:31:40.611671   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 19:31:40.639265   71168 provision.go:87] duration metric: took 386.497023ms to configureAuth
	I0401 19:31:40.639296   71168 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:31:40.639521   71168 config.go:182] Loaded profile config "old-k8s-version-163608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 19:31:40.639590   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.642321   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.642733   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.642762   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.642921   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.643122   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.643294   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.643442   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.643647   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:40.643802   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:40.643819   71168 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:31:40.940619   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:31:40.940647   71168 machine.go:97] duration metric: took 1.059122816s to provisionDockerMachine
	I0401 19:31:40.940661   71168 start.go:293] postStartSetup for "old-k8s-version-163608" (driver="kvm2")
	I0401 19:31:40.940672   71168 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:31:40.940687   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:40.940955   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:31:40.940981   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.943787   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.944159   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.944197   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.944347   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.944556   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.944700   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.944834   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:41.035824   71168 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:31:41.040975   71168 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:31:41.041007   71168 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:31:41.041085   71168 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:31:41.041165   71168 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:31:41.041255   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:31:41.052356   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:41.080699   71168 start.go:296] duration metric: took 140.024653ms for postStartSetup
	I0401 19:31:41.080737   71168 fix.go:56] duration metric: took 20.729726297s for fixHost
	I0401 19:31:41.080759   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:41.083664   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.084045   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.084075   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.084202   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:41.084405   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.084599   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.084796   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:41.084971   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:41.085169   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:41.085180   71168 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 19:31:41.203392   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999901.182365994
	
	I0401 19:31:41.203412   71168 fix.go:216] guest clock: 1711999901.182365994
	I0401 19:31:41.203419   71168 fix.go:229] Guest: 2024-04-01 19:31:41.182365994 +0000 UTC Remote: 2024-04-01 19:31:41.080741553 +0000 UTC m=+228.159955492 (delta=101.624441ms)
	I0401 19:31:41.203437   71168 fix.go:200] guest clock delta is within tolerance: 101.624441ms
	I0401 19:31:41.203442   71168 start.go:83] releasing machines lock for "old-k8s-version-163608", held for 20.852486097s
	I0401 19:31:41.203462   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.203744   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:41.206582   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.206952   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.206973   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.207151   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.207701   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.207891   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.207954   71168 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:31:41.207996   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:41.208096   71168 ssh_runner.go:195] Run: cat /version.json
	I0401 19:31:41.208127   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:41.210731   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.210928   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.211107   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.211132   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.211317   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:41.211446   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.211488   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.211491   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.211636   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:41.211692   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:41.211783   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:41.211891   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.212031   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:41.212187   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:41.296330   71168 ssh_runner.go:195] Run: systemctl --version
	I0401 19:31:41.326247   71168 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:31:41.479411   71168 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:31:41.486996   71168 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:31:41.487063   71168 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:31:41.507840   71168 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:31:41.507870   71168 start.go:494] detecting cgroup driver to use...
	I0401 19:31:41.507942   71168 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:31:41.533063   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:31:41.551699   71168 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:31:41.551754   71168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:31:41.568078   71168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:31:41.584278   71168 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:31:41.726884   71168 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:31:41.882514   71168 docker.go:233] disabling docker service ...
	I0401 19:31:41.882587   71168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:31:41.901235   71168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:31:41.919787   71168 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:31:42.082420   71168 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:31:42.248527   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:31:42.266610   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:31:42.295677   71168 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0401 19:31:42.295740   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.313855   71168 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:31:42.313920   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.327176   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.339527   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.351220   71168 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:31:42.363716   71168 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:31:42.379911   71168 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:31:42.379971   71168 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:31:42.395282   71168 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:31:42.407713   71168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:42.579648   71168 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:31:42.764748   71168 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:31:42.764858   71168 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:31:42.771038   71168 start.go:562] Will wait 60s for crictl version
	I0401 19:31:42.771125   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:42.775871   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:31:42.823135   71168 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:31:42.823218   71168 ssh_runner.go:195] Run: crio --version
	I0401 19:31:42.863748   71168 ssh_runner.go:195] Run: crio --version
	I0401 19:31:42.900263   71168 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0401 19:31:42.901631   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:42.904464   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:42.904773   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:42.904812   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:42.905048   71168 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0401 19:31:42.910117   71168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:42.925313   71168 kubeadm.go:877] updating cluster {Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:31:42.925475   71168 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 19:31:42.925542   71168 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:42.828772   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:44.829527   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:42.553437   70284 main.go:141] libmachine: (no-preload-472858) Waiting to get IP...
	I0401 19:31:42.554422   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:42.554810   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:42.554907   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:42.554806   72041 retry.go:31] will retry after 237.823736ms: waiting for machine to come up
	I0401 19:31:42.794546   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:42.795159   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:42.795205   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:42.795117   72041 retry.go:31] will retry after 326.387674ms: waiting for machine to come up
	I0401 19:31:43.123632   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:43.124306   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:43.124342   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:43.124244   72041 retry.go:31] will retry after 455.262949ms: waiting for machine to come up
	I0401 19:31:43.580752   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:43.581420   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:43.581440   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:43.581375   72041 retry.go:31] will retry after 520.307316ms: waiting for machine to come up
	I0401 19:31:44.103924   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:44.104407   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:44.104431   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:44.104361   72041 retry.go:31] will retry after 491.638031ms: waiting for machine to come up
	I0401 19:31:44.598440   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:44.598990   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:44.599015   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:44.598901   72041 retry.go:31] will retry after 652.234963ms: waiting for machine to come up
	I0401 19:31:45.252362   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:45.252901   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:45.252933   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:45.252853   72041 retry.go:31] will retry after 1.047335678s: waiting for machine to come up
	I0401 19:31:46.301894   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:46.302324   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:46.302349   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:46.302281   72041 retry.go:31] will retry after 1.303326069s: waiting for machine to come up
	I0401 19:31:44.101042   70962 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:46.099803   70962 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:31:46.099828   70962 pod_ready.go:81] duration metric: took 10.008882274s for pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:46.099843   70962 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:42.974220   71168 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 19:31:42.974307   71168 ssh_runner.go:195] Run: which lz4
	I0401 19:31:42.979179   71168 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0401 19:31:42.984204   71168 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:31:42.984236   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0401 19:31:45.108131   71168 crio.go:462] duration metric: took 2.128988098s to copy over tarball
	I0401 19:31:45.108232   71168 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 19:31:47.328534   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:49.827306   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:47.606907   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:47.607392   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:47.607419   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:47.607356   72041 retry.go:31] will retry after 1.729010443s: waiting for machine to come up
	I0401 19:31:49.338200   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:49.338722   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:49.338751   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:49.338667   72041 retry.go:31] will retry after 2.069036941s: waiting for machine to come up
	I0401 19:31:51.409458   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:51.409945   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:51.409976   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:51.409894   72041 retry.go:31] will retry after 2.405834741s: waiting for machine to come up
	I0401 19:31:48.108234   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:50.607720   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:48.581824   71168 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.473552916s)
	I0401 19:31:48.581871   71168 crio.go:469] duration metric: took 3.473700991s to extract the tarball
	I0401 19:31:48.581881   71168 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 19:31:48.630609   71168 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:48.673027   71168 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 19:31:48.673048   71168 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 19:31:48.673085   71168 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:31:48.673129   71168 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:48.673155   71168 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:48.673190   71168 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:48.673133   71168 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.673273   71168 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0401 19:31:48.673143   71168 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0401 19:31:48.673336   71168 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:48.675068   71168 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:31:48.675073   71168 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:48.675068   71168 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:48.675093   71168 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0401 19:31:48.675072   71168 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0401 19:31:48.675073   71168 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:48.675115   71168 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:48.675096   71168 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.827947   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.846025   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:48.848769   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:48.858366   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0401 19:31:48.858613   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0401 19:31:48.859241   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:48.862047   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:48.912299   71168 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0401 19:31:48.912346   71168 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.912399   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.030117   71168 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0401 19:31:49.030357   71168 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:49.030122   71168 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0401 19:31:49.030433   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.030460   71168 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:49.030526   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.062211   71168 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0401 19:31:49.062327   71168 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0401 19:31:49.062234   71168 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0401 19:31:49.062415   71168 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0401 19:31:49.062396   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.062461   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.078249   71168 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0401 19:31:49.078308   71168 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:49.078323   71168 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0401 19:31:49.078358   71168 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:49.078379   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:49.078398   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.078426   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:49.078440   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:49.078362   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.078466   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 19:31:49.078494   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 19:31:49.225060   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:49.225137   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0401 19:31:49.225160   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0401 19:31:49.225199   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0401 19:31:49.225250   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0401 19:31:49.225252   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:49.225326   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0401 19:31:49.280782   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0401 19:31:49.281709   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0401 19:31:49.299218   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:31:49.465497   71168 cache_images.go:92] duration metric: took 792.432136ms to LoadCachedImages
	W0401 19:31:49.465595   71168 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0401 19:31:49.465613   71168 kubeadm.go:928] updating node { 192.168.50.106 8443 v1.20.0 crio true true} ...
	I0401 19:31:49.465768   71168 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-163608 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:31:49.465862   71168 ssh_runner.go:195] Run: crio config
	I0401 19:31:49.529730   71168 cni.go:84] Creating CNI manager for ""
	I0401 19:31:49.529757   71168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:49.529771   71168 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:31:49.529799   71168 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.106 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-163608 NodeName:old-k8s-version-163608 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0401 19:31:49.529969   71168 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-163608"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:31:49.530037   71168 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0401 19:31:49.542642   71168 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:31:49.542724   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:31:49.557001   71168 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0401 19:31:49.579568   71168 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:31:49.599692   71168 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0401 19:31:49.619780   71168 ssh_runner.go:195] Run: grep 192.168.50.106	control-plane.minikube.internal$ /etc/hosts
	I0401 19:31:49.625597   71168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
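The bash one-liner above drops any stale control-plane.minikube.internal line from /etc/hosts and appends the node's IP. Below is a minimal local sketch of the same rewrite in Go, under the assumption that a tab separates IP and hostname as in the log; ensureHostsEntry, the hosts.test path, and the IP/hostname values are example names for this sketch, not taken from minikube's code.

// Minimal sketch of the /etc/hosts rewrite performed by the shell one-liner above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var out []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // remove stale entries for the host
		}
		if line != "" {
			out = append(out, line)
		}
	}
	out = append(out, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(out, "\n")+"\n"), 0644)
}

func main() {
	// Written to a local test file rather than /etc/hosts, which needs root.
	if err := ensureHostsEntry("hosts.test", "192.168.50.106", "control-plane.minikube.internal"); err != nil {
		fmt.Println("update failed:", err)
	}
}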
	I0401 19:31:49.643862   71168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:49.791391   71168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:31:49.814470   71168 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608 for IP: 192.168.50.106
	I0401 19:31:49.814497   71168 certs.go:194] generating shared ca certs ...
	I0401 19:31:49.814516   71168 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:49.814680   71168 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:31:49.814736   71168 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:31:49.814745   71168 certs.go:256] generating profile certs ...
	I0401 19:31:49.814852   71168 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/client.key
	I0401 19:31:49.814916   71168 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.key.f2de0982
	I0401 19:31:49.814964   71168 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.key
	I0401 19:31:49.815119   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:31:49.815178   71168 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:31:49.815195   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:31:49.815224   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:31:49.815266   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:31:49.815299   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:31:49.815362   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:49.816196   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:31:49.866842   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:31:49.913788   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:31:49.953223   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:31:50.004313   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 19:31:50.046972   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:31:50.086990   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:31:50.134907   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 19:31:50.163395   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:31:50.191901   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:31:50.221196   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:31:50.253024   71168 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:31:50.275781   71168 ssh_runner.go:195] Run: openssl version
	I0401 19:31:50.282795   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:31:50.296952   71168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:31:50.303868   71168 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:31:50.303950   71168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:31:50.312249   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:31:50.328985   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:31:50.345917   71168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:50.352041   71168 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:50.352103   71168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:50.358752   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:31:50.371702   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:31:50.384633   71168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:31:50.391229   71168 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:31:50.391277   71168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:31:50.397980   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
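The paired "openssl x509 -hash" and "ln -fs … /etc/ssl/certs/<hash>.0" commands above follow the standard OpenSSL convention of exposing each trusted certificate through a subject-hash symlink. A hedged Go sketch of that step, assuming openssl is available on PATH, is shown below; linkCert and the example paths are illustrative only.

// Sketch of the CA hash-link step shown above: compute the certificate's
// subject hash with openssl and create the conventional <hash>.0 symlink
// if it does not already exist.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // link already present
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}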
	I0401 19:31:50.412674   71168 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:31:50.418084   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:31:50.425102   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:31:50.431949   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:31:50.438665   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:31:50.446633   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:31:50.454688   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
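Each "openssl x509 -checkend 86400" run above asks whether the named certificate will still be valid 24 hours from now. A rough Go equivalent using crypto/x509 is sketched below; expiresWithin and the sample path are illustrative names, not part of minikube.

// Rough equivalent of `openssl x509 -checkend 86400`: parse the PEM
// certificate and report whether it expires within the given duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}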
	I0401 19:31:50.462805   71168 kubeadm.go:391] StartCluster: {Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:31:50.462922   71168 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:31:50.462956   71168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:50.505702   71168 cri.go:89] found id: ""
	I0401 19:31:50.505788   71168 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:31:50.517916   71168 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:31:50.517934   71168 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:31:50.517940   71168 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:31:50.517995   71168 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:31:50.529459   71168 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:31:50.530408   71168 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-163608" does not appear in /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:31:50.531055   71168 kubeconfig.go:62] /home/jenkins/minikube-integration/18233-10493/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-163608" cluster setting kubeconfig missing "old-k8s-version-163608" context setting]
	I0401 19:31:50.532369   71168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:50.534578   71168 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:31:50.546275   71168 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.106
	I0401 19:31:50.546309   71168 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:31:50.546328   71168 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:31:50.546371   71168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:50.588826   71168 cri.go:89] found id: ""
	I0401 19:31:50.588881   71168 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:31:50.610933   71168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:31:50.622201   71168 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:31:50.622221   71168 kubeadm.go:156] found existing configuration files:
	
	I0401 19:31:50.622266   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:31:50.634006   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:31:50.634071   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:31:50.647891   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:31:50.662548   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:31:50.662596   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:31:50.674627   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:31:50.686739   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:31:50.686825   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:31:50.700400   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:31:50.712952   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:31:50.713014   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:31:50.725616   71168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:31:50.739130   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:50.874552   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:51.568640   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:51.850288   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:52.009607   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:52.122887   71168 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:31:52.122962   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:52.623084   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:51.827968   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:54.325686   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:56.325892   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:53.817748   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:53.818158   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:53.818184   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:53.818122   72041 retry.go:31] will retry after 2.747390243s: waiting for machine to come up
	I0401 19:31:56.567288   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:56.567711   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:56.567742   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:56.567657   72041 retry.go:31] will retry after 3.904473051s: waiting for machine to come up
	I0401 19:31:53.107786   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:55.108974   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:53.123783   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:53.623248   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:54.124004   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:54.623873   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:55.123458   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:55.623923   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:56.123441   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:56.623192   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:57.123012   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:57.624010   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
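The repeated "pgrep -xnf kube-apiserver.*minikube.*" runs above are minikube polling for the apiserver process every ~500ms after restarting the control plane. The sketch below reproduces that wait loop in plain Go, assuming pgrep behaves as in the log (non-zero exit status when nothing matches); waitForAPIServer and the timeout are made-up names and values for this sketch, and the log actually runs the command via sudo over SSH.

// Illustrative sketch of the apiserver wait loop shown in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForAPIServer(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits non-zero when no process matches, so err != nil means "keep waiting".
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	pid, err := waitForAPIServer(2 * time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}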
	I0401 19:31:58.325934   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:00.825343   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:00.476692   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.477192   70284 main.go:141] libmachine: (no-preload-472858) Found IP for machine: 192.168.72.119
	I0401 19:32:00.477217   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has current primary IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.477223   70284 main.go:141] libmachine: (no-preload-472858) Reserving static IP address...
	I0401 19:32:00.477672   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "no-preload-472858", mac: "52:54:00:0a:2e:03", ip: "192.168.72.119"} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.477708   70284 main.go:141] libmachine: (no-preload-472858) DBG | skip adding static IP to network mk-no-preload-472858 - found existing host DHCP lease matching {name: "no-preload-472858", mac: "52:54:00:0a:2e:03", ip: "192.168.72.119"}
	I0401 19:32:00.477726   70284 main.go:141] libmachine: (no-preload-472858) Reserved static IP address: 192.168.72.119
	I0401 19:32:00.477742   70284 main.go:141] libmachine: (no-preload-472858) Waiting for SSH to be available...
	I0401 19:32:00.477770   70284 main.go:141] libmachine: (no-preload-472858) DBG | Getting to WaitForSSH function...
	I0401 19:32:00.479949   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.480306   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.480334   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.480475   70284 main.go:141] libmachine: (no-preload-472858) DBG | Using SSH client type: external
	I0401 19:32:00.480508   70284 main.go:141] libmachine: (no-preload-472858) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa (-rw-------)
	I0401 19:32:00.480538   70284 main.go:141] libmachine: (no-preload-472858) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:32:00.480554   70284 main.go:141] libmachine: (no-preload-472858) DBG | About to run SSH command:
	I0401 19:32:00.480566   70284 main.go:141] libmachine: (no-preload-472858) DBG | exit 0
	I0401 19:32:00.610108   70284 main.go:141] libmachine: (no-preload-472858) DBG | SSH cmd err, output: <nil>: 
	I0401 19:32:00.610458   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetConfigRaw
	I0401 19:32:00.611059   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetIP
	I0401 19:32:00.613496   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.613872   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.613906   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.614179   70284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/config.json ...
	I0401 19:32:00.614363   70284 machine.go:94] provisionDockerMachine start ...
	I0401 19:32:00.614382   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:00.614593   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:00.617019   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.617404   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.617430   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.617585   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:00.617780   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.617953   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.618098   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:00.618260   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:00.618451   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:00.618462   70284 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:32:00.730438   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:32:00.730473   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:32:00.730725   70284 buildroot.go:166] provisioning hostname "no-preload-472858"
	I0401 19:32:00.730754   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:32:00.730994   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:00.733932   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.734274   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.734308   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.734419   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:00.734591   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.734752   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.734918   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:00.735092   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:00.735296   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:00.735313   70284 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-472858 && echo "no-preload-472858" | sudo tee /etc/hostname
	I0401 19:32:00.865664   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-472858
	
	I0401 19:32:00.865702   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:00.868247   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.868619   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.868649   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.868845   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:00.869037   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.869244   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.869420   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:00.869671   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:00.869840   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:00.869859   70284 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-472858' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-472858/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-472858' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:32:00.991430   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:32:00.991460   70284 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:32:00.991484   70284 buildroot.go:174] setting up certificates
	I0401 19:32:00.991493   70284 provision.go:84] configureAuth start
	I0401 19:32:00.991504   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:32:00.991748   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetIP
	I0401 19:32:00.994239   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.994566   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.994596   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.994722   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:00.996735   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.997064   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.997090   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.997212   70284 provision.go:143] copyHostCerts
	I0401 19:32:00.997265   70284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:32:00.997281   70284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:32:00.997346   70284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:32:00.997493   70284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:32:00.997507   70284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:32:00.997533   70284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:32:00.997619   70284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:32:00.997629   70284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:32:00.997667   70284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:32:00.997733   70284 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.no-preload-472858 san=[127.0.0.1 192.168.72.119 localhost minikube no-preload-472858]
	I0401 19:32:01.212397   70284 provision.go:177] copyRemoteCerts
	I0401 19:32:01.212453   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:32:01.212473   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.214810   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.215170   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.215198   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.215398   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.215603   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.215761   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.215903   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:32:01.303113   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 19:32:01.331807   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 19:32:01.358429   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:32:01.384521   70284 provision.go:87] duration metric: took 393.005717ms to configureAuth
	I0401 19:32:01.384559   70284 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:32:01.384748   70284 config.go:182] Loaded profile config "no-preload-472858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0401 19:32:01.384862   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.387446   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.387828   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.387866   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.387966   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.388168   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.388356   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.388509   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.388663   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:01.388847   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:01.388867   70284 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:32:01.692586   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:32:01.692615   70284 machine.go:97] duration metric: took 1.078237975s to provisionDockerMachine
	I0401 19:32:01.692628   70284 start.go:293] postStartSetup for "no-preload-472858" (driver="kvm2")
	I0401 19:32:01.692644   70284 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:32:01.692668   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.692988   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:32:01.693012   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.696033   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.696405   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.696450   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.696603   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.696763   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.696901   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.697089   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:32:01.786626   70284 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:32:01.791703   70284 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:32:01.791726   70284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:32:01.791802   70284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:32:01.791901   70284 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:32:01.791991   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:32:01.803733   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:32:01.831768   70284 start.go:296] duration metric: took 139.126077ms for postStartSetup
	I0401 19:32:01.831804   70284 fix.go:56] duration metric: took 20.628199635s for fixHost
	I0401 19:32:01.831823   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.834218   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.834548   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.834574   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.834725   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.834901   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.835066   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.835188   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.835327   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:01.835544   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:01.835558   70284 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 19:31:57.607923   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:59.608857   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:02.106942   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:58.123200   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:58.624028   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:59.123026   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:59.623993   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:00.123039   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:00.623632   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:01.123204   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:01.623162   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:02.123264   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:02.623788   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:01.947198   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999921.892647753
	
	I0401 19:32:01.947267   70284 fix.go:216] guest clock: 1711999921.892647753
	I0401 19:32:01.947279   70284 fix.go:229] Guest: 2024-04-01 19:32:01.892647753 +0000 UTC Remote: 2024-04-01 19:32:01.831808507 +0000 UTC m=+359.938807685 (delta=60.839246ms)
	I0401 19:32:01.947305   70284 fix.go:200] guest clock delta is within tolerance: 60.839246ms
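	For context: the fix.go lines above compare the guest's `date +%s.%N` output against the host clock and only resync the VM when the difference exceeds a tolerance. A minimal Go sketch of that comparison using the values from this log; the function name and the 2s tolerance are illustrative assumptions, not minikube's actual code:

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports whether the guest clock is close enough to the host clock.
	func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		// Timestamps taken from the log above (guest clock vs. remote/host clock).
		guest := time.Date(2024, 4, 1, 19, 32, 1, 892647753, time.UTC)
		host := time.Date(2024, 4, 1, 19, 32, 1, 831808507, time.UTC)
		delta, ok := withinTolerance(guest, host, 2*time.Second) // tolerance value is an assumption
		fmt.Printf("delta=%v within tolerance: %v\n", delta, ok) // prints: delta=60.839246ms within tolerance: true
	}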
	I0401 19:32:01.947317   70284 start.go:83] releasing machines lock for "no-preload-472858", held for 20.743748352s
	I0401 19:32:01.947347   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.947621   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetIP
	I0401 19:32:01.950387   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.950719   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.950750   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.950940   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.951438   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.951631   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.951681   70284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:32:01.951737   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.951854   70284 ssh_runner.go:195] Run: cat /version.json
	I0401 19:32:01.951881   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.954468   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.954603   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.954780   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.954815   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.954932   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.954960   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.954984   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.955193   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.955230   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.955341   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.955388   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.955510   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.955501   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:32:01.955670   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:32:02.035332   70284 ssh_runner.go:195] Run: systemctl --version
	I0401 19:32:02.061178   70284 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:32:02.220309   70284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:32:02.227811   70284 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:32:02.227885   70284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:32:02.247605   70284 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:32:02.247634   70284 start.go:494] detecting cgroup driver to use...
	I0401 19:32:02.247690   70284 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:32:02.265463   70284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:32:02.280175   70284 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:32:02.280246   70284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:32:02.295003   70284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:32:02.315072   70284 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:32:02.449108   70284 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:32:02.627772   70284 docker.go:233] disabling docker service ...
	I0401 19:32:02.627850   70284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:32:02.642924   70284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:32:02.657038   70284 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:32:02.787085   70284 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:32:02.918355   70284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:32:02.934828   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:32:02.955495   70284 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:32:02.955548   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:02.966690   70284 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:32:02.966754   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:02.977812   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:02.989329   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:03.000727   70284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:32:03.012341   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:03.023305   70284 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:03.044213   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:03.055614   70284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:32:03.065880   70284 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:32:03.065927   70284 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:32:03.080514   70284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:32:03.090798   70284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:32:03.224199   70284 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:32:03.389414   70284 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:32:03.389482   70284 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:32:03.395493   70284 start.go:562] Will wait 60s for crictl version
	I0401 19:32:03.395539   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.399739   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:32:03.441020   70284 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:32:03.441114   70284 ssh_runner.go:195] Run: crio --version
	I0401 19:32:03.474572   70284 ssh_runner.go:195] Run: crio --version
	I0401 19:32:03.511681   70284 out.go:177] * Preparing Kubernetes v1.30.0-rc.0 on CRI-O 1.29.1 ...
	I0401 19:32:02.825628   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:04.825973   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:03.513067   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetIP
	I0401 19:32:03.515901   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:03.516281   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:03.516315   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:03.516523   70284 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0401 19:32:03.521197   70284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
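	The /etc/hosts rewrite above is the usual drop-then-append idiom: remove any existing line ending in "<tab>host.minikube.internal", then append a fresh "192.168.72.1<tab>host.minikube.internal" entry (the same pattern appears later for control-plane.minikube.internal). A rough Go equivalent, purely illustrative; the real flow runs the quoted bash over SSH with sudo:

	package main

	import (
		"os"
		"strings"
	)

	// ensureHostsEntry rewrites path so that exactly one line maps hostname to ip.
	func ensureHostsEntry(path, ip, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		if len(data) > 0 {
			for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
				// Mirrors `grep -v $'\t<hostname>$'`: drop any stale mapping for this hostname.
				if strings.HasSuffix(line, "\t"+hostname) {
					continue
				}
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+hostname)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Hypothetical local usage; the IP and hostname follow the log above.
		_ = ensureHostsEntry("/tmp/hosts-example", "192.168.72.1", "host.minikube.internal")
	}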
	I0401 19:32:03.536333   70284 kubeadm.go:877] updating cluster {Name:no-preload-472858 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0-rc.0 ClusterName:no-preload-472858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:32:03.536459   70284 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0401 19:32:03.536507   70284 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:32:03.582858   70284 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.0". assuming images are not preloaded.
	I0401 19:32:03.582887   70284 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.0 registry.k8s.io/kube-controller-manager:v1.30.0-rc.0 registry.k8s.io/kube-scheduler:v1.30.0-rc.0 registry.k8s.io/kube-proxy:v1.30.0-rc.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 19:32:03.582970   70284 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:03.583026   70284 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:03.583032   70284 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.583071   70284 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.583161   70284 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.582997   70284 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:03.583238   70284 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0401 19:32:03.583388   70284 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:03.584618   70284 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.584626   70284 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:03.584630   70284 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:03.584619   70284 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:03.584640   70284 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0401 19:32:03.584626   70284 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:03.584701   70284 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.584856   70284 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.730086   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.752217   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.765621   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.766526   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:03.770748   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0401 19:32:03.777614   70284 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0401 19:32:03.777672   70284 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.777699   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.840814   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:03.852416   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:03.869889   70284 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" does not exist at hash "e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3" in container runtime
	I0401 19:32:03.869929   70284 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.869979   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.874654   70284 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" does not exist at hash "ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a" in container runtime
	I0401 19:32:03.874693   70284 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.874737   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.899207   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:03.906139   70284 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" does not exist at hash "fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5" in container runtime
	I0401 19:32:03.906182   70284 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:03.906227   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.996916   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.996987   70284 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.0" does not exist at hash "33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652" in container runtime
	I0401 19:32:03.997022   70284 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0401 19:32:03.997045   70284 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:03.997053   70284 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:03.997054   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.997089   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.997128   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.997142   70284 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0401 19:32:03.997090   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.997164   70284 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:03.997194   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.997211   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:04.090272   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:04.090548   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0401 19:32:04.090639   70284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0401 19:32:04.102041   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:04.102130   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0
	I0401 19:32:04.102168   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0
	I0401 19:32:04.102226   70284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0401 19:32:04.102241   70284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0401 19:32:04.102278   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:04.108100   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0
	I0401 19:32:04.108192   70284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0401 19:32:04.182707   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0401 19:32:04.182747   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0401 19:32:04.182759   70284 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0401 19:32:04.182815   70284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0401 19:32:04.182820   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0401 19:32:04.182883   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0
	I0401 19:32:04.182988   70284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0401 19:32:04.186135   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0 (exists)
	I0401 19:32:04.186175   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0 (exists)
	I0401 19:32:04.186221   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0 (exists)
	I0401 19:32:04.186242   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0401 19:32:04.186324   70284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0401 19:32:06.352362   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.169442796s)
	I0401 19:32:06.352398   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0401 19:32:06.352419   70284 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0401 19:32:06.352416   70284 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0: (2.16957379s)
	I0401 19:32:06.352443   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0401 19:32:06.352465   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0401 19:32:06.352465   70284 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0: (2.16945688s)
	I0401 19:32:06.352479   70284 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.166139431s)
	I0401 19:32:06.352490   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0401 19:32:06.352491   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0 (exists)
	I0401 19:32:04.109989   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:06.294038   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:03.123452   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:03.623784   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:04.123649   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:04.623076   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:05.123822   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:05.623487   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:06.123635   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:06.623689   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:07.123919   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:07.623237   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:06.826244   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:09.326937   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:09.261547   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0: (2.909056315s)
	I0401 19:32:09.261572   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0 from cache
	I0401 19:32:09.261600   70284 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0401 19:32:09.261668   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0401 19:32:11.739636   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0: (2.477945807s)
	I0401 19:32:11.739667   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0 from cache
	I0401 19:32:11.739702   70284 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0401 19:32:11.739761   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0401 19:32:08.609901   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:11.114752   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:08.123689   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:08.623160   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:09.124002   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:09.623090   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:10.123049   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:10.623111   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:11.123042   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:11.623980   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:12.123074   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:12.623530   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:11.826409   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:13.828437   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:16.326097   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:13.195232   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0: (1.455440816s)
	I0401 19:32:13.195267   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0 from cache
	I0401 19:32:13.195299   70284 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0401 19:32:13.195350   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0401 19:32:13.607042   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:16.107993   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:13.123428   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:13.623899   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:14.123324   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:14.623889   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:15.123496   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:15.623779   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:16.124012   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:16.623620   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:17.123867   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:17.623014   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:18.326127   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:20.326575   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:17.202247   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.006869591s)
	I0401 19:32:17.202284   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0401 19:32:17.202315   70284 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0401 19:32:17.202364   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0401 19:32:17.962735   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0401 19:32:17.962785   70284 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0401 19:32:17.962850   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0401 19:32:20.235136   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0: (2.272262595s)
	I0401 19:32:20.235161   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0 from cache
	I0401 19:32:20.235193   70284 cache_images.go:123] Successfully loaded all cached images
	I0401 19:32:20.235197   70284 cache_images.go:92] duration metric: took 16.652290938s to LoadCachedImages
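	The LoadCachedImages phase that just finished is a per-image loop: inspect the image in the runtime, and only when it is missing remove any stale tag, make sure the cached tarball is on the VM (the earlier `stat -c "%s %y"` calls decide whether the copy can be skipped), then `podman load` it. A condensed sketch of that loop with an invented helper signature and none of minikube's retry or error handling:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runner stands in for minikube's ssh_runner: it executes a command on the VM.
	type runner func(name string, args ...string) error

	// loadCachedImage ensures image is present in CRI-O's storage, loading it
	// from tarball when the runtime does not already know it.
	func loadCachedImage(run runner, image, tarball string) error {
		// Already present? (mirrors `sudo podman image inspect --format {{.Id}} <image>`)
		if err := run("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image); err == nil {
			return nil
		}
		// Remove any stale tag so the load starts clean (mirrors `crictl rmi`).
		_ = run("sudo", "/usr/bin/crictl", "rmi", image)
		// Load the cached tarball; the copy/stat bookkeeping is elided here.
		if err := run("sudo", "podman", "load", "-i", tarball); err != nil {
			return fmt.Errorf("loading %s from %s: %w", image, tarball, err)
		}
		return nil
	}

	func main() {
		// Local stand-in runner; the real code runs these commands over SSH inside the VM.
		run := func(name string, args ...string) error { return exec.Command(name, args...).Run() }
		_ = loadCachedImage(run, "registry.k8s.io/coredns/coredns:v1.11.1",
			"/var/lib/minikube/images/coredns_v1.11.1")
	}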
	I0401 19:32:20.235205   70284 kubeadm.go:928] updating node { 192.168.72.119 8443 v1.30.0-rc.0 crio true true} ...
	I0401 19:32:20.235332   70284 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-472858 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-472858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:32:20.235402   70284 ssh_runner.go:195] Run: crio config
	I0401 19:32:20.296015   70284 cni.go:84] Creating CNI manager for ""
	I0401 19:32:20.296039   70284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:32:20.296050   70284 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:32:20.296074   70284 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.119 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-472858 NodeName:no-preload-472858 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:32:20.296217   70284 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-472858"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:32:20.296275   70284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.0
	I0401 19:32:20.307937   70284 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:32:20.308009   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:32:20.318571   70284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0401 19:32:20.339284   70284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0401 19:32:20.358601   70284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0401 19:32:20.379394   70284 ssh_runner.go:195] Run: grep 192.168.72.119	control-plane.minikube.internal$ /etc/hosts
	I0401 19:32:20.383948   70284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:32:20.397559   70284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:32:20.549147   70284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:32:20.568027   70284 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858 for IP: 192.168.72.119
	I0401 19:32:20.568051   70284 certs.go:194] generating shared ca certs ...
	I0401 19:32:20.568070   70284 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:32:20.568273   70284 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:32:20.568337   70284 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:32:20.568352   70284 certs.go:256] generating profile certs ...
	I0401 19:32:20.568453   70284 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/client.key
	I0401 19:32:20.568534   70284 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/apiserver.key.bfc8ff8f
	I0401 19:32:20.568586   70284 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/proxy-client.key
	I0401 19:32:20.568691   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:32:20.568718   70284 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:32:20.568728   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:32:20.568747   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:32:20.568773   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:32:20.568795   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:32:20.568830   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:32:20.569519   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:32:20.605218   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:32:20.650321   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:32:20.676884   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:32:20.705378   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 19:32:20.733068   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:32:20.767387   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:32:20.793543   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:32:20.820843   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:32:20.848364   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:32:20.877551   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:32:20.904650   70284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:32:20.922876   70284 ssh_runner.go:195] Run: openssl version
	I0401 19:32:20.929441   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:32:20.942496   70284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:32:20.948011   70284 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:32:20.948080   70284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:32:20.954320   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:32:20.968060   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:32:20.981591   70284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:32:20.986660   70284 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:32:20.986706   70284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:32:20.993394   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:32:21.006530   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:32:21.020014   70284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:32:21.025507   70284 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:32:21.025560   70284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:32:21.032433   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:32:21.047002   70284 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:32:21.052551   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:32:21.059875   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:32:21.067243   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:32:21.074304   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:32:21.080978   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:32:21.088051   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
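	The certificate steps above rely on two openssl idioms: `openssl x509 -hash -noout` prints the subject hash that OpenSSL expects as the `<hash>.0` symlink name under /etc/ssl/certs, and `-checkend 86400` exits non-zero if the certificate will be expired one day from now. A small illustrative Go wrapper around both calls; the helper names are assumptions, not minikube's:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash symlinks certPath into certsDir as "<subject-hash>.0",
	// the layout OpenSSL uses to locate trusted CA certificates.
	func linkBySubjectHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // ignore error: the link may not exist yet
		return os.Symlink(certPath, link)
	}

	// expiresWithinADay mirrors `openssl x509 -checkend 86400`: a non-zero exit
	// status means the certificate expires within the next 86400 seconds.
	func expiresWithinADay(certPath string) bool {
		err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run()
		return err != nil
	}

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log above
		fmt.Println("expires within 24h:", expiresWithinADay(cert))
		_ = linkBySubjectHash(cert, "/etc/ssl/certs")
	}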
	I0401 19:32:21.095219   70284 kubeadm.go:391] StartCluster: {Name:no-preload-472858 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0-rc.0 ClusterName:no-preload-472858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:32:21.095325   70284 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:32:21.095403   70284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:32:21.144103   70284 cri.go:89] found id: ""
	I0401 19:32:21.144187   70284 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:32:21.157222   70284 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:32:21.157241   70284 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:32:21.157246   70284 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:32:21.157290   70284 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:32:21.169027   70284 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:32:21.170123   70284 kubeconfig.go:125] found "no-preload-472858" server: "https://192.168.72.119:8443"
	I0401 19:32:21.172523   70284 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:32:21.183801   70284 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.119
	I0401 19:32:21.183838   70284 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:32:21.183847   70284 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:32:21.183892   70284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:32:21.229279   70284 cri.go:89] found id: ""
	I0401 19:32:21.229357   70284 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:32:21.249719   70284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:32:21.261894   70284 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:32:21.261929   70284 kubeadm.go:156] found existing configuration files:
	
	I0401 19:32:21.261984   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:32:21.273961   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:32:21.274026   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:32:21.286746   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:32:21.297920   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:32:21.297986   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:32:21.308793   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:32:21.319612   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:32:21.319658   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:32:21.332730   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:32:21.344752   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:32:21.344810   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:32:21.355821   70284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:32:21.366649   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:21.482208   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:18.607685   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:20.607824   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:18.123795   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:18.623529   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:19.123446   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:19.623223   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:20.123133   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:20.623058   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:21.123302   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:21.623115   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:22.123810   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:22.623878   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:22.826056   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:24.826357   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:22.312148   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:22.533156   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:22.620390   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:22.704948   70284 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:32:22.705039   70284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.205114   70284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.706000   70284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.725209   70284 api_server.go:72] duration metric: took 1.020261742s to wait for apiserver process to appear ...
	I0401 19:32:23.725243   70284 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:32:23.725264   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:23.725749   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": dial tcp 192.168.72.119:8443: connect: connection refused
	I0401 19:32:24.226383   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:23.107450   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:25.109899   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:23.123507   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.623244   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:24.123444   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:24.623346   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:25.123834   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:25.623814   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:26.124028   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:26.623428   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:27.123592   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:27.623451   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:27.327961   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:29.826272   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:29.226831   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:29.226876   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:27.607575   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:29.608427   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:32.106668   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:28.123454   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:28.623502   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:29.123265   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:29.623449   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:30.123525   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:30.623634   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:31.123972   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:31.623023   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:32.123346   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:32.623839   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:32.325638   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:34.325777   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:36.326510   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:34.227668   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:34.227723   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:34.606929   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:36.607515   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:33.123673   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:33.623088   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:34.123230   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:34.623967   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:35.123420   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:35.623499   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:36.123152   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:36.623963   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:37.123682   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:37.623536   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:38.829585   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:41.325607   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:39.228117   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:39.228164   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:39.107473   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:41.607043   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:38.123238   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:38.623831   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:39.123180   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:39.623801   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:40.123478   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:40.623651   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:41.123687   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:41.624016   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:42.123891   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:42.623493   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:43.326457   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:45.827310   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:44.228934   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:44.228982   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:44.259601   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": read tcp 192.168.72.1:37026->192.168.72.119:8443: read: connection reset by peer
	I0401 19:32:44.726186   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:44.726759   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": dial tcp 192.168.72.119:8443: connect: connection refused
	I0401 19:32:45.226347   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:43.607936   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:46.106775   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:43.123504   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:43.623527   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:44.124016   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:44.623931   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:45.123188   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:45.623649   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:46.123570   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:46.623179   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:47.123273   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:47.623842   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:48.325252   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:50.327365   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:50.226859   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:50.226907   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:48.109152   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:50.607327   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:48.123759   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:48.623092   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:49.123174   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:49.623986   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:50.123301   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:50.623694   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:51.123466   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:51.623618   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:52.123073   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:32:52.123172   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:32:52.164635   71168 cri.go:89] found id: ""
	I0401 19:32:52.164656   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.164663   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:32:52.164669   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:32:52.164738   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:32:52.202531   71168 cri.go:89] found id: ""
	I0401 19:32:52.202560   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.202572   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:32:52.202580   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:32:52.202653   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:32:52.247667   71168 cri.go:89] found id: ""
	I0401 19:32:52.247693   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.247703   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:32:52.247714   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:32:52.247774   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:32:52.289029   71168 cri.go:89] found id: ""
	I0401 19:32:52.289054   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.289062   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:32:52.289068   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:32:52.289114   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:32:52.326820   71168 cri.go:89] found id: ""
	I0401 19:32:52.326864   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.326875   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:32:52.326882   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:32:52.326944   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:32:52.362793   71168 cri.go:89] found id: ""
	I0401 19:32:52.362827   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.362838   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:32:52.362845   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:32:52.362950   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:32:52.400174   71168 cri.go:89] found id: ""
	I0401 19:32:52.400204   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.400215   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:32:52.400222   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:32:52.400282   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:32:52.436027   71168 cri.go:89] found id: ""
	I0401 19:32:52.436056   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.436066   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:32:52.436085   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:32:52.436099   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:32:52.477246   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:32:52.477272   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:32:52.529215   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:32:52.529247   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:32:52.544695   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:32:52.544724   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:32:52.677816   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:32:52.677849   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:32:52.677877   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:32:52.825288   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:54.826043   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:55.228105   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:55.228139   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:53.106774   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:55.107668   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:55.241224   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:55.256975   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:32:55.257045   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:32:55.298280   71168 cri.go:89] found id: ""
	I0401 19:32:55.298307   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.298319   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:32:55.298326   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:32:55.298397   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:32:55.337707   71168 cri.go:89] found id: ""
	I0401 19:32:55.337732   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.337739   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:32:55.337745   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:32:55.337791   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:32:55.381455   71168 cri.go:89] found id: ""
	I0401 19:32:55.381479   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.381490   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:32:55.381496   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:32:55.381557   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:32:55.420715   71168 cri.go:89] found id: ""
	I0401 19:32:55.420739   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.420749   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:32:55.420756   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:32:55.420820   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:32:55.459546   71168 cri.go:89] found id: ""
	I0401 19:32:55.459575   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.459583   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:32:55.459588   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:32:55.459634   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:32:55.504240   71168 cri.go:89] found id: ""
	I0401 19:32:55.504267   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.504277   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:32:55.504285   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:32:55.504368   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:32:55.539399   71168 cri.go:89] found id: ""
	I0401 19:32:55.539426   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.539437   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:32:55.539443   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:32:55.539509   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:32:55.583823   71168 cri.go:89] found id: ""
	I0401 19:32:55.583861   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.583872   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:32:55.583881   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:32:55.583895   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:32:55.645489   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:32:55.645523   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:32:55.712883   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:32:55.712920   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:32:55.734890   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:32:55.734923   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:32:55.853068   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:32:55.853089   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:32:55.853102   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:32:57.325965   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:59.827753   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:00.228533   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:33:00.228582   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:57.607203   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:59.610732   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:02.108676   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:58.435925   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:58.450910   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:32:58.450980   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:32:58.487470   71168 cri.go:89] found id: ""
	I0401 19:32:58.487495   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.487506   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:32:58.487514   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:32:58.487562   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:32:58.529513   71168 cri.go:89] found id: ""
	I0401 19:32:58.529534   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.529543   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:32:58.529547   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:32:58.529592   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:32:58.574170   71168 cri.go:89] found id: ""
	I0401 19:32:58.574197   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.574205   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:32:58.574211   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:32:58.574258   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:32:58.615379   71168 cri.go:89] found id: ""
	I0401 19:32:58.615405   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.615414   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:32:58.615419   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:32:58.615468   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:32:58.655496   71168 cri.go:89] found id: ""
	I0401 19:32:58.655523   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.655534   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:32:58.655542   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:32:58.655593   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:32:58.697199   71168 cri.go:89] found id: ""
	I0401 19:32:58.697229   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.697238   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:32:58.697246   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:32:58.697312   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:32:58.735618   71168 cri.go:89] found id: ""
	I0401 19:32:58.735643   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.735651   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:32:58.735656   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:32:58.735701   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:32:58.780583   71168 cri.go:89] found id: ""
	I0401 19:32:58.780613   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.780624   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:32:58.780635   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:32:58.780649   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:32:58.829717   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:32:58.829743   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:32:58.844836   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:32:58.844866   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:32:58.923138   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:32:58.923157   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:32:58.923172   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:32:58.993680   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:32:58.993713   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:01.538920   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:01.556943   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:01.557017   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:01.608397   71168 cri.go:89] found id: ""
	I0401 19:33:01.608417   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.608425   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:01.608430   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:01.608490   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:01.666573   71168 cri.go:89] found id: ""
	I0401 19:33:01.666599   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.666609   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:01.666615   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:01.666674   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:01.726308   71168 cri.go:89] found id: ""
	I0401 19:33:01.726331   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.726341   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:01.726347   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:01.726412   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:01.773095   71168 cri.go:89] found id: ""
	I0401 19:33:01.773118   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.773125   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:01.773131   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:01.773189   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:01.813011   71168 cri.go:89] found id: ""
	I0401 19:33:01.813034   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.813042   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:01.813048   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:01.813096   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:01.859124   71168 cri.go:89] found id: ""
	I0401 19:33:01.859151   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.859161   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:01.859169   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:01.859228   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:01.904491   71168 cri.go:89] found id: ""
	I0401 19:33:01.904519   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.904530   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:01.904537   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:01.904596   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:01.946768   71168 cri.go:89] found id: ""
	I0401 19:33:01.946794   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.946804   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:01.946815   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:01.946829   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:02.026315   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:02.026362   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:02.072861   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:02.072893   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:02.132064   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:02.132105   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:02.151545   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:02.151575   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:02.234059   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:02.325806   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:04.327258   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:03.215901   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:33:03.215933   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:33:03.215947   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:03.264913   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:33:03.264946   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:33:03.264961   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:03.272548   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:33:03.272580   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:33:03.726254   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:03.731022   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:03.731050   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:04.225595   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:04.237757   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:04.237783   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:04.725330   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:04.734019   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:04.734047   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:05.225303   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:05.242774   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:05.242811   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:05.726350   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:05.730775   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:05.730838   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:06.225345   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:06.229749   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:06.229793   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:06.725687   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:06.730607   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:06.730640   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:04.112109   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:06.606160   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:04.734559   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:04.755071   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:04.755130   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:04.798316   71168 cri.go:89] found id: ""
	I0401 19:33:04.798345   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.798358   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:04.798366   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:04.798426   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:04.840011   71168 cri.go:89] found id: ""
	I0401 19:33:04.840032   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.840043   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:04.840050   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:04.840106   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:04.883686   71168 cri.go:89] found id: ""
	I0401 19:33:04.883713   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.883725   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:04.883733   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:04.883795   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:04.933810   71168 cri.go:89] found id: ""
	I0401 19:33:04.933844   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.933855   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:04.933863   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:04.933925   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:04.983118   71168 cri.go:89] found id: ""
	I0401 19:33:04.983139   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.983146   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:04.983151   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:04.983207   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:05.036146   71168 cri.go:89] found id: ""
	I0401 19:33:05.036169   71168 logs.go:276] 0 containers: []
	W0401 19:33:05.036179   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:05.036186   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:05.036242   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:05.086269   71168 cri.go:89] found id: ""
	I0401 19:33:05.086296   71168 logs.go:276] 0 containers: []
	W0401 19:33:05.086308   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:05.086315   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:05.086378   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:05.140893   71168 cri.go:89] found id: ""
	I0401 19:33:05.140914   71168 logs.go:276] 0 containers: []
	W0401 19:33:05.140922   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:05.140931   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:05.140946   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:05.161222   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:05.161249   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:05.262254   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:05.262276   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:05.262289   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:05.352880   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:05.352908   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:05.400720   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:05.400748   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:07.954227   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:07.225774   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:07.230656   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:07.230684   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:07.726299   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:07.731793   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:07.731830   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:08.225362   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:08.229716   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:08.229755   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:08.725315   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:08.733428   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 200:
	ok
	I0401 19:33:08.739761   70284 api_server.go:141] control plane version: v1.30.0-rc.0
	I0401 19:33:08.739788   70284 api_server.go:131] duration metric: took 45.014537527s to wait for apiserver health ...
	I0401 19:33:08.739796   70284 cni.go:84] Creating CNI manager for ""
	I0401 19:33:08.739802   70284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:33:08.741701   70284 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:33:06.825165   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:08.829987   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:11.327172   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:08.743011   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:33:08.758184   70284 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0401 19:33:08.778975   70284 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:33:08.789725   70284 system_pods.go:59] 8 kube-system pods found
	I0401 19:33:08.789763   70284 system_pods.go:61] "coredns-7db6d8ff4d-gdml5" [039c8887-dff0-40e5-b8b5-00ef2f4a21cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:33:08.789771   70284 system_pods.go:61] "etcd-no-preload-472858" [09086659-e20f-40da-b01f-3690e110ffeb] Running
	I0401 19:33:08.789781   70284 system_pods.go:61] "kube-apiserver-no-preload-472858" [5139434c-3d23-4736-86ad-28253c89f7da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0401 19:33:08.789794   70284 system_pods.go:61] "kube-controller-manager-no-preload-472858" [965d600a-612e-4625-b883-7105f9166503] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0401 19:33:08.789806   70284 system_pods.go:61] "kube-proxy-7c22p" [903412f5-252c-41f3-81ac-1ae47522b403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:33:08.789820   70284 system_pods.go:61] "kube-scheduler-no-preload-472858" [936981be-fc5e-4865-811c-936fab59f37b] Running
	I0401 19:33:08.789832   70284 system_pods.go:61] "metrics-server-569cc877fc-wlr7k" [14010e9a-9662-46c9-bc46-cc6d19c0cddf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:33:08.789839   70284 system_pods.go:61] "storage-provisioner" [2e5d9f78-e74c-4b3b-8878-e4bd8ce34108] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:33:08.789861   70284 system_pods.go:74] duration metric: took 10.868458ms to wait for pod list to return data ...
	I0401 19:33:08.789874   70284 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:33:08.793853   70284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:33:08.793883   70284 node_conditions.go:123] node cpu capacity is 2
	I0401 19:33:08.793897   70284 node_conditions.go:105] duration metric: took 4.016996ms to run NodePressure ...
	I0401 19:33:08.793916   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:33:09.081698   70284 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0401 19:33:09.085681   70284 kubeadm.go:733] kubelet initialised
	I0401 19:33:09.085699   70284 kubeadm.go:734] duration metric: took 3.976973ms waiting for restarted kubelet to initialise ...
	I0401 19:33:09.085705   70284 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:33:09.090647   70284 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:11.102738   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:08.608194   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:11.109659   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:07.970794   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:07.970850   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:08.013694   71168 cri.go:89] found id: ""
	I0401 19:33:08.013719   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.013729   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:08.013737   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:08.013810   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:08.050810   71168 cri.go:89] found id: ""
	I0401 19:33:08.050849   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.050861   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:08.050868   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:08.050932   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:08.092056   71168 cri.go:89] found id: ""
	I0401 19:33:08.092086   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.092096   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:08.092102   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:08.092157   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:08.133171   71168 cri.go:89] found id: ""
	I0401 19:33:08.133195   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.133205   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:08.133212   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:08.133271   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:08.173997   71168 cri.go:89] found id: ""
	I0401 19:33:08.174023   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.174034   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:08.174041   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:08.174102   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:08.212740   71168 cri.go:89] found id: ""
	I0401 19:33:08.212768   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.212778   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:08.212785   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:08.212831   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:08.254815   71168 cri.go:89] found id: ""
	I0401 19:33:08.254837   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.254847   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:08.254854   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:08.254909   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:08.295347   71168 cri.go:89] found id: ""
	I0401 19:33:08.295375   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.295382   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:08.295390   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:08.295402   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:08.311574   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:08.311600   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:08.405437   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:08.405455   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:08.405470   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:08.483687   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:08.483722   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:08.526132   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:08.526158   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:11.076590   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:11.093846   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:11.093983   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:11.146046   71168 cri.go:89] found id: ""
	I0401 19:33:11.146073   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.146083   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:11.146088   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:11.146146   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:11.193751   71168 cri.go:89] found id: ""
	I0401 19:33:11.193782   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.193793   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:11.193801   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:11.193873   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:11.242150   71168 cri.go:89] found id: ""
	I0401 19:33:11.242178   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.242189   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:11.242197   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:11.242271   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:11.294063   71168 cri.go:89] found id: ""
	I0401 19:33:11.294092   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.294103   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:11.294110   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:11.294175   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:11.334764   71168 cri.go:89] found id: ""
	I0401 19:33:11.334784   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.334791   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:11.334797   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:11.334846   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:11.372770   71168 cri.go:89] found id: ""
	I0401 19:33:11.372789   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.372795   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:11.372806   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:11.372871   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:11.413233   71168 cri.go:89] found id: ""
	I0401 19:33:11.413261   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.413271   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:11.413278   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:11.413337   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:11.456044   71168 cri.go:89] found id: ""
	I0401 19:33:11.456073   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.456084   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:11.456093   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:11.456103   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:11.471157   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:11.471183   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:11.550489   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:11.550508   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:11.550523   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:11.635360   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:11.635389   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:11.680683   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:11.680713   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:13.827425   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:16.325563   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:13.104812   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:15.602114   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:13.607926   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:16.107219   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:14.235295   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:14.251513   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:14.251590   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:14.291688   71168 cri.go:89] found id: ""
	I0401 19:33:14.291715   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.291725   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:14.291732   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:14.291792   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:14.332030   71168 cri.go:89] found id: ""
	I0401 19:33:14.332051   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.332060   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:14.332068   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:14.332132   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:14.372098   71168 cri.go:89] found id: ""
	I0401 19:33:14.372122   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.372130   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:14.372137   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:14.372183   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:14.410529   71168 cri.go:89] found id: ""
	I0401 19:33:14.410554   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.410563   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:14.410570   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:14.410624   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:14.451198   71168 cri.go:89] found id: ""
	I0401 19:33:14.451226   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.451238   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:14.451246   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:14.451306   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:14.494588   71168 cri.go:89] found id: ""
	I0401 19:33:14.494616   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.494627   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:14.494635   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:14.494689   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:14.537561   71168 cri.go:89] found id: ""
	I0401 19:33:14.537583   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.537590   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:14.537597   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:14.537674   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:14.580624   71168 cri.go:89] found id: ""
	I0401 19:33:14.580651   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.580662   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:14.580672   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:14.580688   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:14.635769   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:14.635798   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:14.650275   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:14.650304   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:14.742355   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:14.742378   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:14.742394   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:14.827839   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:14.827869   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:17.373408   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:17.390110   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:17.390185   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:17.432355   71168 cri.go:89] found id: ""
	I0401 19:33:17.432384   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.432396   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:17.432409   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:17.432471   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:17.476458   71168 cri.go:89] found id: ""
	I0401 19:33:17.476484   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.476495   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:17.476502   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:17.476587   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:17.519657   71168 cri.go:89] found id: ""
	I0401 19:33:17.519686   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.519694   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:17.519699   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:17.519751   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:17.559962   71168 cri.go:89] found id: ""
	I0401 19:33:17.559985   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.559992   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:17.559997   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:17.560054   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:17.608924   71168 cri.go:89] found id: ""
	I0401 19:33:17.608995   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.609009   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:17.609016   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:17.609075   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:17.648371   71168 cri.go:89] found id: ""
	I0401 19:33:17.648394   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.648401   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:17.648406   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:17.648462   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:17.689217   71168 cri.go:89] found id: ""
	I0401 19:33:17.689239   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.689246   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:17.689252   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:17.689312   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:17.741738   71168 cri.go:89] found id: ""
	I0401 19:33:17.741768   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.741779   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:17.741790   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:17.741805   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:17.839857   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:17.839887   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:17.888684   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:17.888716   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:17.944268   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:17.944298   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:17.959305   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:17.959334   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 19:33:18.327388   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:20.826627   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:18.100065   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:20.100714   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:18.107770   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:20.108880   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	W0401 19:33:18.040820   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:20.541980   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:20.558198   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:20.558270   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:20.596329   71168 cri.go:89] found id: ""
	I0401 19:33:20.596357   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.596366   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:20.596373   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:20.596431   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:20.638611   71168 cri.go:89] found id: ""
	I0401 19:33:20.638639   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.638664   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:20.638672   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:20.638729   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:20.677984   71168 cri.go:89] found id: ""
	I0401 19:33:20.678014   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.678024   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:20.678032   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:20.678080   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:20.718491   71168 cri.go:89] found id: ""
	I0401 19:33:20.718520   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.718530   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:20.718537   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:20.718597   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:20.772147   71168 cri.go:89] found id: ""
	I0401 19:33:20.772174   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.772185   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:20.772199   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:20.772258   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:20.823339   71168 cri.go:89] found id: ""
	I0401 19:33:20.823361   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.823372   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:20.823380   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:20.823463   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:20.884081   71168 cri.go:89] found id: ""
	I0401 19:33:20.884106   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.884117   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:20.884124   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:20.884185   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:20.931679   71168 cri.go:89] found id: ""
	I0401 19:33:20.931703   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.931713   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:20.931722   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:20.931736   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:21.016766   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:21.016797   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:21.067600   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:21.067632   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:21.136989   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:21.137045   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:21.152673   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:21.152706   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:21.250186   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:23.325222   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:25.326919   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:22.597922   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:24.602701   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:22.606659   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:24.606811   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:26.608185   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:23.750565   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:23.768458   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:23.768534   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:23.814489   71168 cri.go:89] found id: ""
	I0401 19:33:23.814534   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.814555   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:23.814565   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:23.814632   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:23.854954   71168 cri.go:89] found id: ""
	I0401 19:33:23.854981   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.854989   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:23.854995   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:23.855060   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:23.896115   71168 cri.go:89] found id: ""
	I0401 19:33:23.896148   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.896159   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:23.896169   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:23.896231   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:23.941300   71168 cri.go:89] found id: ""
	I0401 19:33:23.941324   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.941337   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:23.941344   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:23.941390   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:23.983955   71168 cri.go:89] found id: ""
	I0401 19:33:23.983982   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.983991   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:23.983997   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:23.984056   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:24.020756   71168 cri.go:89] found id: ""
	I0401 19:33:24.020777   71168 logs.go:276] 0 containers: []
	W0401 19:33:24.020784   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:24.020789   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:24.020835   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:24.063426   71168 cri.go:89] found id: ""
	I0401 19:33:24.063454   71168 logs.go:276] 0 containers: []
	W0401 19:33:24.063462   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:24.063467   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:24.063529   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:24.110924   71168 cri.go:89] found id: ""
	I0401 19:33:24.110945   71168 logs.go:276] 0 containers: []
	W0401 19:33:24.110952   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:24.110960   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:24.110969   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:24.179200   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:24.179240   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:24.194880   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:24.194909   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:24.280555   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:24.280588   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:24.280603   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:24.359502   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:24.359534   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:26.909147   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:26.925961   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:26.926028   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:26.969502   71168 cri.go:89] found id: ""
	I0401 19:33:26.969525   71168 logs.go:276] 0 containers: []
	W0401 19:33:26.969536   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:26.969543   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:26.969604   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:27.015205   71168 cri.go:89] found id: ""
	I0401 19:33:27.015232   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.015241   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:27.015246   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:27.015296   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:27.055943   71168 cri.go:89] found id: ""
	I0401 19:33:27.055968   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.055977   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:27.055983   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:27.056039   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:27.095447   71168 cri.go:89] found id: ""
	I0401 19:33:27.095474   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.095485   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:27.095497   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:27.095558   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:27.137912   71168 cri.go:89] found id: ""
	I0401 19:33:27.137941   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.137948   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:27.137954   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:27.138008   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:27.183303   71168 cri.go:89] found id: ""
	I0401 19:33:27.183325   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.183335   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:27.183344   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:27.183403   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:27.225780   71168 cri.go:89] found id: ""
	I0401 19:33:27.225804   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.225814   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:27.225822   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:27.225880   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:27.268136   71168 cri.go:89] found id: ""
	I0401 19:33:27.268159   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.268168   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:27.268191   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:27.268215   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:27.325527   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:27.325557   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:27.341727   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:27.341763   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:27.432369   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:27.432389   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:27.432403   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:27.523104   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:27.523135   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:27.826804   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:30.326279   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:27.099509   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:29.597830   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:31.598325   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:29.107400   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:31.107514   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:30.066147   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:30.079999   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:30.080062   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:30.121887   71168 cri.go:89] found id: ""
	I0401 19:33:30.121911   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.121920   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:30.121929   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:30.121986   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:30.163939   71168 cri.go:89] found id: ""
	I0401 19:33:30.163967   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.163978   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:30.163986   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:30.164051   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:30.203924   71168 cri.go:89] found id: ""
	I0401 19:33:30.203965   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.203977   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:30.203985   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:30.204048   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:30.243771   71168 cri.go:89] found id: ""
	I0401 19:33:30.243798   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.243809   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:30.243816   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:30.243888   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:30.284039   71168 cri.go:89] found id: ""
	I0401 19:33:30.284066   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.284074   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:30.284079   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:30.284127   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:30.327549   71168 cri.go:89] found id: ""
	I0401 19:33:30.327570   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.327577   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:30.327583   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:30.327630   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:30.365258   71168 cri.go:89] found id: ""
	I0401 19:33:30.365281   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.365291   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:30.365297   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:30.365352   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:30.405959   71168 cri.go:89] found id: ""
	I0401 19:33:30.405984   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.405992   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:30.405999   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:30.406011   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:30.480668   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:30.480692   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:30.480706   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:30.566042   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:30.566077   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:30.629250   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:30.629285   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:30.682185   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:30.682213   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:32.824844   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:34.826598   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:33.600555   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:36.100194   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:33.608315   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:36.106573   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:33.199466   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:33.213557   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:33.213630   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:33.255038   71168 cri.go:89] found id: ""
	I0401 19:33:33.255062   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.255072   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:33.255079   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:33.255143   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:33.297724   71168 cri.go:89] found id: ""
	I0401 19:33:33.297751   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.297761   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:33.297767   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:33.297836   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:33.340694   71168 cri.go:89] found id: ""
	I0401 19:33:33.340718   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.340727   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:33.340735   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:33.340794   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:33.388857   71168 cri.go:89] found id: ""
	I0401 19:33:33.388883   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.388891   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:33.388896   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:33.388940   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:33.430875   71168 cri.go:89] found id: ""
	I0401 19:33:33.430899   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.430906   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:33.430911   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:33.430966   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:33.479877   71168 cri.go:89] found id: ""
	I0401 19:33:33.479905   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.479917   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:33.479923   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:33.479968   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:33.522635   71168 cri.go:89] found id: ""
	I0401 19:33:33.522662   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.522672   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:33.522680   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:33.522737   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:33.560497   71168 cri.go:89] found id: ""
	I0401 19:33:33.560519   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.560527   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:33.560534   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:33.560549   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:33.612141   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:33.612170   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:33.665142   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:33.665170   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:33.681076   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:33.681100   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:33.755938   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:33.755966   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:33.755983   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:36.341957   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:36.359519   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:36.359586   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:36.416339   71168 cri.go:89] found id: ""
	I0401 19:33:36.416362   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.416373   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:36.416381   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:36.416442   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:36.473883   71168 cri.go:89] found id: ""
	I0401 19:33:36.473906   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.473918   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:36.473925   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:36.473988   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:36.521532   71168 cri.go:89] found id: ""
	I0401 19:33:36.521558   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.521568   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:36.521575   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:36.521639   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:36.563420   71168 cri.go:89] found id: ""
	I0401 19:33:36.563446   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.563454   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:36.563459   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:36.563520   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:36.605658   71168 cri.go:89] found id: ""
	I0401 19:33:36.605678   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.605689   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:36.605697   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:36.605759   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:36.645611   71168 cri.go:89] found id: ""
	I0401 19:33:36.645631   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.645638   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:36.645656   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:36.645715   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:36.685994   71168 cri.go:89] found id: ""
	I0401 19:33:36.686022   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.686033   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:36.686041   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:36.686099   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:36.725573   71168 cri.go:89] found id: ""
	I0401 19:33:36.725598   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.725608   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:36.725618   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:36.725630   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:36.778854   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:36.778885   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:36.795003   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:36.795036   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:36.872648   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:36.872666   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:36.872678   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:36.956648   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:36.956683   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:36.827745   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:38.830544   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:41.326012   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:38.597991   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:41.097044   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:38.107961   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:40.606475   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:39.502868   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:39.519090   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:39.519161   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:39.562347   71168 cri.go:89] found id: ""
	I0401 19:33:39.562371   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.562379   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:39.562384   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:39.562442   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:39.607250   71168 cri.go:89] found id: ""
	I0401 19:33:39.607276   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.607286   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:39.607293   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:39.607343   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:39.650683   71168 cri.go:89] found id: ""
	I0401 19:33:39.650704   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.650712   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:39.650717   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:39.650764   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:39.694676   71168 cri.go:89] found id: ""
	I0401 19:33:39.694706   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.694718   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:39.694724   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:39.694783   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:39.733873   71168 cri.go:89] found id: ""
	I0401 19:33:39.733901   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.733911   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:39.733919   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:39.733980   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:39.773625   71168 cri.go:89] found id: ""
	I0401 19:33:39.773668   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.773679   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:39.773686   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:39.773735   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:39.815020   71168 cri.go:89] found id: ""
	I0401 19:33:39.815053   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.815064   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:39.815071   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:39.815134   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:39.855575   71168 cri.go:89] found id: ""
	I0401 19:33:39.855606   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.855615   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:39.855626   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:39.855641   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:39.873827   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:39.873857   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:39.948487   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:39.948507   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:39.948521   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:40.034026   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:40.034062   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:40.077798   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:40.077828   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:42.637999   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:42.654991   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:42.655063   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:42.695920   71168 cri.go:89] found id: ""
	I0401 19:33:42.695953   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.695964   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:42.695971   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:42.696030   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:42.737303   71168 cri.go:89] found id: ""
	I0401 19:33:42.737325   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.737333   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:42.737341   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:42.737393   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:42.777922   71168 cri.go:89] found id: ""
	I0401 19:33:42.777953   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.777965   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:42.777972   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:42.778036   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:42.818339   71168 cri.go:89] found id: ""
	I0401 19:33:42.818364   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.818372   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:42.818379   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:42.818435   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:42.859470   71168 cri.go:89] found id: ""
	I0401 19:33:42.859494   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.859502   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:42.859507   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:42.859556   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:42.901950   71168 cri.go:89] found id: ""
	I0401 19:33:42.901980   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.901989   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:42.901996   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:42.902063   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:42.947230   71168 cri.go:89] found id: ""
	I0401 19:33:42.947258   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.947268   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:42.947275   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:42.947351   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:43.827204   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:46.325749   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:43.098252   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:45.098316   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:42.607590   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:44.607666   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:47.107837   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:42.988997   71168 cri.go:89] found id: ""
	I0401 19:33:42.989022   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.989032   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:42.989049   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:42.989066   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:43.075323   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:43.075352   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:43.075363   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:43.164445   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:43.164479   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:43.215852   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:43.215885   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:43.271301   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:43.271334   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:45.786705   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:45.804389   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:45.804445   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:45.849838   71168 cri.go:89] found id: ""
	I0401 19:33:45.849872   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.849883   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:45.849891   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:45.849950   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:45.890603   71168 cri.go:89] found id: ""
	I0401 19:33:45.890625   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.890635   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:45.890642   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:45.890703   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:45.929189   71168 cri.go:89] found id: ""
	I0401 19:33:45.929210   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.929218   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:45.929223   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:45.929268   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:45.968266   71168 cri.go:89] found id: ""
	I0401 19:33:45.968292   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.968303   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:45.968310   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:45.968365   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:46.007114   71168 cri.go:89] found id: ""
	I0401 19:33:46.007135   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.007143   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:46.007148   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:46.007195   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:46.046067   71168 cri.go:89] found id: ""
	I0401 19:33:46.046088   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.046095   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:46.046101   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:46.046186   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:46.083604   71168 cri.go:89] found id: ""
	I0401 19:33:46.083630   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.083644   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:46.083651   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:46.083709   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:46.125435   71168 cri.go:89] found id: ""
	I0401 19:33:46.125457   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.125464   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:46.125472   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:46.125483   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:46.179060   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:46.179092   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:46.195139   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:46.195179   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:46.275876   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:46.275903   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:46.275914   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:46.365430   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:46.365465   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:48.825540   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:50.827204   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:47.099197   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:49.105260   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:51.597808   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:49.108344   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:51.607079   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:48.908390   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:48.924357   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:48.924416   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:48.969325   71168 cri.go:89] found id: ""
	I0401 19:33:48.969351   71168 logs.go:276] 0 containers: []
	W0401 19:33:48.969359   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:48.969364   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:48.969421   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:49.006702   71168 cri.go:89] found id: ""
	I0401 19:33:49.006724   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.006731   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:49.006736   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:49.006785   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:49.051196   71168 cri.go:89] found id: ""
	I0401 19:33:49.051229   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.051241   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:49.051260   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:49.051336   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:49.098123   71168 cri.go:89] found id: ""
	I0401 19:33:49.098150   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.098159   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:49.098166   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:49.098225   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:49.138203   71168 cri.go:89] found id: ""
	I0401 19:33:49.138232   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.138239   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:49.138244   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:49.138290   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:49.185441   71168 cri.go:89] found id: ""
	I0401 19:33:49.185465   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.185473   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:49.185478   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:49.185537   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:49.235649   71168 cri.go:89] found id: ""
	I0401 19:33:49.235670   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.235678   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:49.235683   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:49.235762   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:49.279638   71168 cri.go:89] found id: ""
	I0401 19:33:49.279662   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.279673   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:49.279683   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:49.279699   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:49.340761   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:49.340798   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:49.356552   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:49.356581   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:49.441110   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:49.441129   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:49.441140   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:49.523159   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:49.523189   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:52.067710   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:52.082986   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:52.083046   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:52.128510   71168 cri.go:89] found id: ""
	I0401 19:33:52.128531   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.128538   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:52.128543   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:52.128590   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:52.167767   71168 cri.go:89] found id: ""
	I0401 19:33:52.167792   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.167803   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:52.167810   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:52.167871   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:52.206384   71168 cri.go:89] found id: ""
	I0401 19:33:52.206416   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.206426   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:52.206433   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:52.206493   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:52.245277   71168 cri.go:89] found id: ""
	I0401 19:33:52.245301   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.245309   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:52.245318   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:52.245388   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:52.283925   71168 cri.go:89] found id: ""
	I0401 19:33:52.283954   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.283964   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:52.283971   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:52.284032   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:52.323944   71168 cri.go:89] found id: ""
	I0401 19:33:52.323970   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.323981   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:52.323988   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:52.324045   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:52.364853   71168 cri.go:89] found id: ""
	I0401 19:33:52.364882   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.364893   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:52.364901   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:52.364958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:52.404136   71168 cri.go:89] found id: ""
	I0401 19:33:52.404158   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.404165   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:52.404173   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:52.404184   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:52.459097   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:52.459129   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:52.474392   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:52.474417   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:52.551817   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:52.551843   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:52.551860   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:52.650710   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:52.650750   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:53.326050   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:55.327326   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:52.607062   70284 pod_ready.go:92] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.607082   70284 pod_ready.go:81] duration metric: took 43.516413537s for pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.607091   70284 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.628695   70284 pod_ready.go:92] pod "etcd-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.628725   70284 pod_ready.go:81] duration metric: took 21.625468ms for pod "etcd-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.628739   70284 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.643017   70284 pod_ready.go:92] pod "kube-apiserver-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.643044   70284 pod_ready.go:81] duration metric: took 14.296056ms for pod "kube-apiserver-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.643058   70284 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.649063   70284 pod_ready.go:92] pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.649091   70284 pod_ready.go:81] duration metric: took 6.024238ms for pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.649105   70284 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7c22p" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.654806   70284 pod_ready.go:92] pod "kube-proxy-7c22p" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.654829   70284 pod_ready.go:81] duration metric: took 5.709865ms for pod "kube-proxy-7c22p" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.654840   70284 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.997116   70284 pod_ready.go:92] pod "kube-scheduler-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.997139   70284 pod_ready.go:81] duration metric: took 342.291727ms for pod "kube-scheduler-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.997148   70284 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:55.004130   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:53.608064   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:56.106148   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:55.205689   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:55.222840   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:55.222901   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:55.263783   71168 cri.go:89] found id: ""
	I0401 19:33:55.263813   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.263820   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:55.263828   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:55.263883   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:55.300788   71168 cri.go:89] found id: ""
	I0401 19:33:55.300818   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.300826   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:55.300834   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:55.300888   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:55.343189   71168 cri.go:89] found id: ""
	I0401 19:33:55.343215   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.343223   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:55.343229   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:55.343286   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:55.387560   71168 cri.go:89] found id: ""
	I0401 19:33:55.387587   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.387597   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:55.387604   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:55.387663   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:55.428078   71168 cri.go:89] found id: ""
	I0401 19:33:55.428103   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.428112   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:55.428119   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:55.428181   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:55.472696   71168 cri.go:89] found id: ""
	I0401 19:33:55.472722   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.472734   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:55.472741   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:55.472797   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:55.518071   71168 cri.go:89] found id: ""
	I0401 19:33:55.518115   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.518126   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:55.518136   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:55.518201   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:55.555697   71168 cri.go:89] found id: ""
	I0401 19:33:55.555717   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.555724   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:55.555732   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:55.555747   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:55.637462   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:55.637492   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:55.682353   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:55.682380   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:55.735451   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:55.735484   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:55.750928   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:55.750954   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:55.824610   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:57.328228   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:59.826213   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:57.005395   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:59.505575   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:01.506107   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:58.106643   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:00.606864   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:58.325742   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:58.341022   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:58.341092   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:58.380910   71168 cri.go:89] found id: ""
	I0401 19:33:58.380932   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.380940   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:58.380946   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:58.380990   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:58.420387   71168 cri.go:89] found id: ""
	I0401 19:33:58.420413   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.420425   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:58.420431   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:58.420479   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:58.460470   71168 cri.go:89] found id: ""
	I0401 19:33:58.460501   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.460511   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:58.460520   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:58.460580   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:58.496844   71168 cri.go:89] found id: ""
	I0401 19:33:58.496867   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.496875   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:58.496881   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:58.496930   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:58.535883   71168 cri.go:89] found id: ""
	I0401 19:33:58.535905   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.535915   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:58.535922   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:58.535979   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:58.576833   71168 cri.go:89] found id: ""
	I0401 19:33:58.576855   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.576863   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:58.576869   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:58.576913   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:58.615057   71168 cri.go:89] found id: ""
	I0401 19:33:58.615081   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.615091   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:58.615098   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:58.615156   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:58.657982   71168 cri.go:89] found id: ""
	I0401 19:33:58.658008   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.658018   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:58.658028   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:58.658045   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:58.734579   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:58.734601   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:58.734616   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:58.821779   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:58.821819   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:58.894470   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:58.894506   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:58.949854   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:58.949884   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:01.465820   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:01.481929   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:01.481984   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:01.525371   71168 cri.go:89] found id: ""
	I0401 19:34:01.525397   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.525407   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:01.525415   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:01.525473   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:01.571106   71168 cri.go:89] found id: ""
	I0401 19:34:01.571136   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.571146   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:01.571153   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:01.571214   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:01.617666   71168 cri.go:89] found id: ""
	I0401 19:34:01.617705   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.617717   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:01.617725   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:01.617787   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:01.655286   71168 cri.go:89] found id: ""
	I0401 19:34:01.655311   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.655321   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:01.655328   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:01.655396   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:01.694911   71168 cri.go:89] found id: ""
	I0401 19:34:01.694940   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.694950   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:01.694957   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:01.695040   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:01.734970   71168 cri.go:89] found id: ""
	I0401 19:34:01.734996   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.735007   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:01.735014   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:01.735071   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:01.778846   71168 cri.go:89] found id: ""
	I0401 19:34:01.778871   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.778879   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:01.778885   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:01.778958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:01.821934   71168 cri.go:89] found id: ""
	I0401 19:34:01.821964   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.821975   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:01.821986   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:01.822002   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:01.880123   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:01.880155   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:01.895178   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:01.895200   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:01.972248   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:01.972275   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:01.972290   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:02.056663   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:02.056694   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:02.325323   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:04.326474   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:06.327583   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:04.004061   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:06.004176   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:02.608516   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:05.108477   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:04.603745   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:04.619269   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:04.619344   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:04.658089   71168 cri.go:89] found id: ""
	I0401 19:34:04.658111   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.658118   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:04.658123   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:04.658168   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:04.700596   71168 cri.go:89] found id: ""
	I0401 19:34:04.700622   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.700634   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:04.700641   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:04.700708   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:04.744960   71168 cri.go:89] found id: ""
	I0401 19:34:04.744990   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.744999   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:04.745004   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:04.745052   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:04.788239   71168 cri.go:89] found id: ""
	I0401 19:34:04.788264   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.788272   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:04.788278   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:04.788343   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:04.830788   71168 cri.go:89] found id: ""
	I0401 19:34:04.830812   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.830850   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:04.830859   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:04.830917   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:04.889784   71168 cri.go:89] found id: ""
	I0401 19:34:04.889815   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.889826   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:04.889834   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:04.889902   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:04.931969   71168 cri.go:89] found id: ""
	I0401 19:34:04.931996   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.932004   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:04.932010   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:04.932058   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:04.975668   71168 cri.go:89] found id: ""
	I0401 19:34:04.975689   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.975696   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:04.975704   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:04.975715   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:05.032212   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:05.032246   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:05.047900   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:05.047924   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:05.132371   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:05.132394   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:05.132408   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:05.222591   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:05.222623   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:07.767686   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:07.784473   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:07.784542   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:07.828460   71168 cri.go:89] found id: ""
	I0401 19:34:07.828487   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.828498   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:07.828505   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:07.828564   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:07.872760   71168 cri.go:89] found id: ""
	I0401 19:34:07.872786   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.872797   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:07.872804   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:07.872862   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:07.914241   71168 cri.go:89] found id: ""
	I0401 19:34:07.914263   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.914271   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:07.914276   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:07.914340   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:07.953757   71168 cri.go:89] found id: ""
	I0401 19:34:07.953784   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.953795   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:07.953803   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:07.953869   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:08.825113   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:10.827081   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:08.504038   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:10.508973   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:07.608037   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:10.110321   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:07.994382   71168 cri.go:89] found id: ""
	I0401 19:34:07.994401   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.994409   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:07.994414   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:07.994459   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:08.038178   71168 cri.go:89] found id: ""
	I0401 19:34:08.038202   71168 logs.go:276] 0 containers: []
	W0401 19:34:08.038213   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:08.038220   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:08.038282   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:08.077532   71168 cri.go:89] found id: ""
	I0401 19:34:08.077562   71168 logs.go:276] 0 containers: []
	W0401 19:34:08.077573   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:08.077580   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:08.077657   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:08.119825   71168 cri.go:89] found id: ""
	I0401 19:34:08.119845   71168 logs.go:276] 0 containers: []
	W0401 19:34:08.119855   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:08.119865   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:08.119878   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:08.207688   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:08.207724   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:08.253050   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:08.253085   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:08.309119   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:08.309152   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:08.325675   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:08.325704   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:08.410877   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:10.911211   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:10.925590   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:10.925657   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:10.964180   71168 cri.go:89] found id: ""
	I0401 19:34:10.964205   71168 logs.go:276] 0 containers: []
	W0401 19:34:10.964216   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:10.964224   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:10.964273   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:11.004492   71168 cri.go:89] found id: ""
	I0401 19:34:11.004515   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.004526   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:11.004533   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:11.004588   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:11.048771   71168 cri.go:89] found id: ""
	I0401 19:34:11.048792   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.048804   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:11.048810   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:11.048861   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:11.084956   71168 cri.go:89] found id: ""
	I0401 19:34:11.084982   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.084992   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:11.084999   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:11.085043   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:11.128194   71168 cri.go:89] found id: ""
	I0401 19:34:11.128218   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.128225   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:11.128230   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:11.128274   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:11.169884   71168 cri.go:89] found id: ""
	I0401 19:34:11.169908   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.169918   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:11.169925   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:11.169988   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:11.213032   71168 cri.go:89] found id: ""
	I0401 19:34:11.213066   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.213077   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:11.213084   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:11.213149   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:11.258391   71168 cri.go:89] found id: ""
	I0401 19:34:11.258414   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.258422   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:11.258429   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:11.258445   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:11.341297   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:11.341328   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:11.388628   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:11.388659   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:11.442300   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:11.442326   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:11.457531   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:11.457561   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:11.561556   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:13.324598   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:15.325464   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:13.005005   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:15.505216   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:12.607201   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:14.607580   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:17.107659   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:14.062670   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:14.077384   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:14.077449   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:14.119421   71168 cri.go:89] found id: ""
	I0401 19:34:14.119444   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.119455   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:14.119462   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:14.119518   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:14.158762   71168 cri.go:89] found id: ""
	I0401 19:34:14.158783   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.158798   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:14.158805   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:14.158867   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:14.197024   71168 cri.go:89] found id: ""
	I0401 19:34:14.197052   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.197060   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:14.197065   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:14.197115   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:14.235976   71168 cri.go:89] found id: ""
	I0401 19:34:14.236004   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.236015   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:14.236021   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:14.236085   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:14.280596   71168 cri.go:89] found id: ""
	I0401 19:34:14.280623   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.280635   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:14.280642   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:14.280703   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:14.322196   71168 cri.go:89] found id: ""
	I0401 19:34:14.322219   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.322230   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:14.322239   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:14.322298   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:14.364572   71168 cri.go:89] found id: ""
	I0401 19:34:14.364596   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.364607   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:14.364615   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:14.364662   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:14.406043   71168 cri.go:89] found id: ""
	I0401 19:34:14.406066   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.406072   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:14.406082   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:14.406097   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:14.461841   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:14.461870   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:14.479960   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:14.479990   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:14.557039   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:14.557058   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:14.557070   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:14.641945   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:14.641975   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:17.192681   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:17.207913   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:17.207964   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:17.245596   71168 cri.go:89] found id: ""
	I0401 19:34:17.245618   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.245625   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:17.245630   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:17.245701   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:17.310845   71168 cri.go:89] found id: ""
	I0401 19:34:17.310875   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.310887   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:17.310894   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:17.310958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:17.367726   71168 cri.go:89] found id: ""
	I0401 19:34:17.367753   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.367764   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:17.367770   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:17.367833   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:17.410807   71168 cri.go:89] found id: ""
	I0401 19:34:17.410834   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.410842   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:17.410847   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:17.410892   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:17.448242   71168 cri.go:89] found id: ""
	I0401 19:34:17.448268   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.448278   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:17.448285   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:17.448337   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:17.486552   71168 cri.go:89] found id: ""
	I0401 19:34:17.486580   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.486590   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:17.486595   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:17.486644   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:17.529947   71168 cri.go:89] found id: ""
	I0401 19:34:17.529975   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.529986   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:17.529993   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:17.530052   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:17.571617   71168 cri.go:89] found id: ""
	I0401 19:34:17.571640   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.571648   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:17.571656   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:17.571673   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:17.627326   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:17.627354   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:17.643409   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:17.643431   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:17.723772   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:17.723798   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:17.723811   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:17.803383   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:17.803414   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:17.325836   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:19.328447   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:17.509486   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:20.004341   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:19.606840   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:21.607646   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:20.348949   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:20.363311   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:20.363385   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:20.401558   71168 cri.go:89] found id: ""
	I0401 19:34:20.401585   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.401595   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:20.401603   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:20.401686   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:20.445979   71168 cri.go:89] found id: ""
	I0401 19:34:20.446004   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.446011   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:20.446016   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:20.446060   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:20.487819   71168 cri.go:89] found id: ""
	I0401 19:34:20.487844   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.487854   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:20.487862   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:20.487921   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:20.532107   71168 cri.go:89] found id: ""
	I0401 19:34:20.532131   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.532154   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:20.532186   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:20.532247   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:20.577727   71168 cri.go:89] found id: ""
	I0401 19:34:20.577749   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.577756   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:20.577762   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:20.577841   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:20.616774   71168 cri.go:89] found id: ""
	I0401 19:34:20.616805   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.616816   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:20.616824   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:20.616887   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:20.656122   71168 cri.go:89] found id: ""
	I0401 19:34:20.656150   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.656160   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:20.656167   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:20.656226   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:20.701249   71168 cri.go:89] found id: ""
	I0401 19:34:20.701274   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.701285   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:20.701295   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:20.701310   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:20.746979   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:20.747003   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:20.799197   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:20.799226   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:20.815771   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:20.815808   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:20.895179   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:20.895202   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:20.895218   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:21.826671   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:24.325896   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:26.326569   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:22.503727   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:24.503877   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:26.506643   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:24.107702   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:26.607285   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:23.481911   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:23.496820   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:23.496889   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:23.538292   71168 cri.go:89] found id: ""
	I0401 19:34:23.538314   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.538322   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:23.538327   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:23.538372   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:23.579171   71168 cri.go:89] found id: ""
	I0401 19:34:23.579200   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.579209   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:23.579214   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:23.579269   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:23.620377   71168 cri.go:89] found id: ""
	I0401 19:34:23.620399   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.620410   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:23.620417   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:23.620477   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:23.663309   71168 cri.go:89] found id: ""
	I0401 19:34:23.663329   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.663337   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:23.663342   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:23.663392   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:23.702724   71168 cri.go:89] found id: ""
	I0401 19:34:23.702755   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.702772   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:23.702778   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:23.702836   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:23.742797   71168 cri.go:89] found id: ""
	I0401 19:34:23.742827   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.742837   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:23.742845   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:23.742913   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:23.781299   71168 cri.go:89] found id: ""
	I0401 19:34:23.781350   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.781367   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:23.781375   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:23.781440   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:23.828244   71168 cri.go:89] found id: ""
	I0401 19:34:23.828270   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.828277   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:23.828284   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:23.828298   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:23.914758   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:23.914782   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:23.914797   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:23.993300   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:23.993332   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:24.037388   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:24.037424   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:24.090157   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:24.090198   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:26.609062   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:26.624241   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:26.624309   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:26.665813   71168 cri.go:89] found id: ""
	I0401 19:34:26.665840   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.665848   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:26.665857   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:26.665917   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:26.709571   71168 cri.go:89] found id: ""
	I0401 19:34:26.709593   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.709600   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:26.709606   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:26.709680   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:26.757286   71168 cri.go:89] found id: ""
	I0401 19:34:26.757309   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.757319   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:26.757325   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:26.757386   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:26.795715   71168 cri.go:89] found id: ""
	I0401 19:34:26.795768   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.795781   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:26.795788   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:26.795839   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:26.835985   71168 cri.go:89] found id: ""
	I0401 19:34:26.836011   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.836022   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:26.836029   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:26.836094   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:26.878890   71168 cri.go:89] found id: ""
	I0401 19:34:26.878918   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.878929   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:26.878936   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:26.878991   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:26.920161   71168 cri.go:89] found id: ""
	I0401 19:34:26.920189   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.920199   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:26.920206   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:26.920262   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:26.961597   71168 cri.go:89] found id: ""
	I0401 19:34:26.961626   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.961637   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:26.961663   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:26.961679   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:27.019814   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:27.019847   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:27.035535   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:27.035564   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:27.111755   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:27.111776   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:27.111790   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:27.194932   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:27.194964   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:28.827702   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:31.325488   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:29.005830   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:31.007294   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:29.107097   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:31.109807   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:29.738592   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:29.752851   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:29.752913   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:29.791808   71168 cri.go:89] found id: ""
	I0401 19:34:29.791863   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.791875   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:29.791883   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:29.791944   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:29.836113   71168 cri.go:89] found id: ""
	I0401 19:34:29.836132   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.836139   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:29.836144   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:29.836200   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:29.879005   71168 cri.go:89] found id: ""
	I0401 19:34:29.879039   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.879050   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:29.879059   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:29.879122   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:29.919349   71168 cri.go:89] found id: ""
	I0401 19:34:29.919383   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.919394   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:29.919400   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:29.919454   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:29.957252   71168 cri.go:89] found id: ""
	I0401 19:34:29.957275   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.957287   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:29.957294   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:29.957354   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:30.003220   71168 cri.go:89] found id: ""
	I0401 19:34:30.003245   71168 logs.go:276] 0 containers: []
	W0401 19:34:30.003256   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:30.003263   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:30.003311   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:30.043873   71168 cri.go:89] found id: ""
	I0401 19:34:30.043900   71168 logs.go:276] 0 containers: []
	W0401 19:34:30.043921   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:30.043928   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:30.043989   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:30.082215   71168 cri.go:89] found id: ""
	I0401 19:34:30.082242   71168 logs.go:276] 0 containers: []
	W0401 19:34:30.082253   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:30.082263   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:30.082277   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:30.098676   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:30.098701   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:30.180857   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:30.180879   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:30.180897   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:30.269982   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:30.270016   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:30.317933   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:30.317967   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:32.874312   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:32.888687   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:32.888742   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:32.926222   71168 cri.go:89] found id: ""
	I0401 19:34:32.926244   71168 logs.go:276] 0 containers: []
	W0401 19:34:32.926252   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:32.926257   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:32.926307   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:32.964838   71168 cri.go:89] found id: ""
	I0401 19:34:32.964858   71168 logs.go:276] 0 containers: []
	W0401 19:34:32.964865   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:32.964870   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:32.964914   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:33.327670   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:35.826387   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:33.504338   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:36.005240   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:33.606596   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:35.607014   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:33.006903   71168 cri.go:89] found id: ""
	I0401 19:34:33.006920   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.006927   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:33.006933   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:33.006983   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:33.045663   71168 cri.go:89] found id: ""
	I0401 19:34:33.045691   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.045701   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:33.045709   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:33.045770   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:33.086262   71168 cri.go:89] found id: ""
	I0401 19:34:33.086290   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.086298   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:33.086303   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:33.086368   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:33.128302   71168 cri.go:89] found id: ""
	I0401 19:34:33.128327   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.128335   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:33.128341   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:33.128402   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:33.171155   71168 cri.go:89] found id: ""
	I0401 19:34:33.171189   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.171200   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:33.171207   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:33.171270   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:33.210793   71168 cri.go:89] found id: ""
	I0401 19:34:33.210820   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.210838   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:33.210848   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:33.210870   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:33.295035   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:33.295072   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:33.345381   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:33.345417   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:33.401082   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:33.401120   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:33.417029   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:33.417055   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:33.497027   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:35.997632   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:36.013106   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:36.013161   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:36.053013   71168 cri.go:89] found id: ""
	I0401 19:34:36.053040   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.053050   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:36.053059   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:36.053116   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:36.092268   71168 cri.go:89] found id: ""
	I0401 19:34:36.092297   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.092308   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:36.092315   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:36.092389   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:36.131347   71168 cri.go:89] found id: ""
	I0401 19:34:36.131391   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.131402   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:36.131409   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:36.131468   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:36.171402   71168 cri.go:89] found id: ""
	I0401 19:34:36.171432   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.171443   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:36.171449   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:36.171511   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:36.211239   71168 cri.go:89] found id: ""
	I0401 19:34:36.211272   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.211283   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:36.211290   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:36.211354   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:36.251246   71168 cri.go:89] found id: ""
	I0401 19:34:36.251275   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.251287   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:36.251294   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:36.251354   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:36.293140   71168 cri.go:89] found id: ""
	I0401 19:34:36.293162   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.293169   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:36.293174   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:36.293231   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:36.330281   71168 cri.go:89] found id: ""
	I0401 19:34:36.330308   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.330318   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:36.330328   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:36.330342   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:36.421753   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:36.421790   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:36.467555   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:36.467581   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:36.524747   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:36.524778   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:36.540946   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:36.540976   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:36.622452   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:38.326341   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:40.327267   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:38.503641   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:40.504555   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:38.107732   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:40.608535   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:39.122969   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:39.139092   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:39.139157   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:39.177337   71168 cri.go:89] found id: ""
	I0401 19:34:39.177368   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.177379   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:39.177387   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:39.177449   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:39.216471   71168 cri.go:89] found id: ""
	I0401 19:34:39.216498   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.216507   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:39.216512   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:39.216558   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:39.255526   71168 cri.go:89] found id: ""
	I0401 19:34:39.255550   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.255557   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:39.255563   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:39.255623   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:39.294682   71168 cri.go:89] found id: ""
	I0401 19:34:39.294711   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.294723   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:39.294735   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:39.294798   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:39.337416   71168 cri.go:89] found id: ""
	I0401 19:34:39.337437   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.337444   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:39.337449   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:39.337510   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:39.384560   71168 cri.go:89] found id: ""
	I0401 19:34:39.384586   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.384598   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:39.384608   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:39.384671   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:39.421459   71168 cri.go:89] found id: ""
	I0401 19:34:39.421480   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.421488   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:39.421493   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:39.421540   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:39.460221   71168 cri.go:89] found id: ""
	I0401 19:34:39.460246   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.460256   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:39.460264   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:39.460275   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:39.543800   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:39.543835   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:39.591012   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:39.591038   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:39.645994   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:39.646025   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:39.662223   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:39.662250   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:39.741574   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:42.242541   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:42.256933   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:42.257006   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:42.294268   71168 cri.go:89] found id: ""
	I0401 19:34:42.294297   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.294308   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:42.294315   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:42.294370   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:42.331978   71168 cri.go:89] found id: ""
	I0401 19:34:42.331999   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.332005   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:42.332013   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:42.332078   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:42.369858   71168 cri.go:89] found id: ""
	I0401 19:34:42.369885   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.369895   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:42.369903   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:42.369989   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:42.412688   71168 cri.go:89] found id: ""
	I0401 19:34:42.412708   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.412715   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:42.412720   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:42.412776   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:42.449180   71168 cri.go:89] found id: ""
	I0401 19:34:42.449209   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.449217   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:42.449225   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:42.449283   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:42.488582   71168 cri.go:89] found id: ""
	I0401 19:34:42.488606   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.488613   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:42.488618   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:42.488665   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:42.527883   71168 cri.go:89] found id: ""
	I0401 19:34:42.527915   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.527924   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:42.527931   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:42.527993   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:42.564372   71168 cri.go:89] found id: ""
	I0401 19:34:42.564394   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.564401   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:42.564408   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:42.564419   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:42.646940   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:42.646974   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:42.689323   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:42.689354   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:42.744996   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:42.745024   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:42.761404   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:42.761429   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:42.836643   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:42.825895   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:45.325856   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:42.504642   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:45.004315   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:43.110114   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:45.607093   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:45.337809   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:45.352936   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:45.353029   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:45.395073   71168 cri.go:89] found id: ""
	I0401 19:34:45.395098   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.395106   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:45.395112   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:45.395160   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:45.433537   71168 cri.go:89] found id: ""
	I0401 19:34:45.433567   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.433578   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:45.433586   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:45.433658   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:45.477108   71168 cri.go:89] found id: ""
	I0401 19:34:45.477138   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.477150   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:45.477157   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:45.477217   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:45.520350   71168 cri.go:89] found id: ""
	I0401 19:34:45.520389   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.520401   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:45.520408   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:45.520466   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:45.562871   71168 cri.go:89] found id: ""
	I0401 19:34:45.562901   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.562911   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:45.562918   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:45.562988   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:45.619214   71168 cri.go:89] found id: ""
	I0401 19:34:45.619237   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.619248   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:45.619255   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:45.619317   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:45.664361   71168 cri.go:89] found id: ""
	I0401 19:34:45.664387   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.664398   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:45.664405   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:45.664463   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:45.701087   71168 cri.go:89] found id: ""
	I0401 19:34:45.701110   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.701120   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:45.701128   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:45.701139   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:45.716839   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:45.716863   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:45.794609   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:45.794630   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:45.794642   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:45.883428   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:45.883464   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:45.934342   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:45.934374   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:47.825597   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:50.326528   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:47.505036   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:49.505287   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:51.505884   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:47.609038   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:50.106705   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:52.107802   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:48.492128   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:48.508674   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:48.508746   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:48.549522   71168 cri.go:89] found id: ""
	I0401 19:34:48.549545   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.549555   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:48.549561   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:48.549619   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:48.587014   71168 cri.go:89] found id: ""
	I0401 19:34:48.587037   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.587045   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:48.587051   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:48.587108   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:48.629591   71168 cri.go:89] found id: ""
	I0401 19:34:48.629620   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.629630   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:48.629636   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:48.629707   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:48.669335   71168 cri.go:89] found id: ""
	I0401 19:34:48.669363   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.669383   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:48.669400   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:48.669455   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:48.708322   71168 cri.go:89] found id: ""
	I0401 19:34:48.708350   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.708356   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:48.708362   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:48.708407   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:48.750680   71168 cri.go:89] found id: ""
	I0401 19:34:48.750708   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.750718   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:48.750726   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:48.750791   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:48.790946   71168 cri.go:89] found id: ""
	I0401 19:34:48.790974   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.790984   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:48.790998   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:48.791055   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:48.828849   71168 cri.go:89] found id: ""
	I0401 19:34:48.828871   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.828880   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:48.828889   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:48.828904   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:48.909182   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:48.909212   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:48.954285   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:48.954315   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:49.010340   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:49.010372   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:49.026493   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:49.026516   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:49.099662   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:51.599905   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:51.618094   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:51.618168   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:51.657003   71168 cri.go:89] found id: ""
	I0401 19:34:51.657028   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.657038   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:51.657046   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:51.657104   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:51.696415   71168 cri.go:89] found id: ""
	I0401 19:34:51.696441   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.696451   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:51.696456   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:51.696515   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:51.734416   71168 cri.go:89] found id: ""
	I0401 19:34:51.734445   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.734457   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:51.734465   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:51.734523   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:51.774895   71168 cri.go:89] found id: ""
	I0401 19:34:51.774918   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.774925   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:51.774931   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:51.774980   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:51.814602   71168 cri.go:89] found id: ""
	I0401 19:34:51.814623   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.814631   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:51.814637   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:51.814687   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:51.856035   71168 cri.go:89] found id: ""
	I0401 19:34:51.856061   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.856071   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:51.856078   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:51.856132   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:51.897415   71168 cri.go:89] found id: ""
	I0401 19:34:51.897440   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.897451   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:51.897457   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:51.897516   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:51.937406   71168 cri.go:89] found id: ""
	I0401 19:34:51.937428   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.937436   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:51.937443   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:51.937456   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:51.981508   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:51.981535   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:52.039956   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:52.039995   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:52.066403   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:52.066429   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:52.172509   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:52.172530   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:52.172541   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:52.827950   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:55.331369   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:54.004625   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:56.503197   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:54.607359   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:57.108257   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:54.761459   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:54.776972   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:54.777030   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:54.822945   71168 cri.go:89] found id: ""
	I0401 19:34:54.822983   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.822996   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:54.823004   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:54.823066   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:54.861602   71168 cri.go:89] found id: ""
	I0401 19:34:54.861629   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.861639   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:54.861662   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:54.861727   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:54.901283   71168 cri.go:89] found id: ""
	I0401 19:34:54.901309   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.901319   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:54.901327   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:54.901385   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:54.940071   71168 cri.go:89] found id: ""
	I0401 19:34:54.940103   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.940114   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:54.940121   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:54.940179   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:54.978447   71168 cri.go:89] found id: ""
	I0401 19:34:54.978474   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.978485   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:54.978493   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:54.978563   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:55.021786   71168 cri.go:89] found id: ""
	I0401 19:34:55.021810   71168 logs.go:276] 0 containers: []
	W0401 19:34:55.021819   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:55.021827   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:55.021886   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:55.059861   71168 cri.go:89] found id: ""
	I0401 19:34:55.059889   71168 logs.go:276] 0 containers: []
	W0401 19:34:55.059899   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:55.059907   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:55.059963   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:55.104484   71168 cri.go:89] found id: ""
	I0401 19:34:55.104516   71168 logs.go:276] 0 containers: []
	W0401 19:34:55.104527   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:55.104537   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:55.104551   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:55.152197   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:55.152221   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:55.203900   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:55.203942   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:55.221553   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:55.221580   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:55.299651   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:55.299668   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:55.299680   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:57.877382   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:57.899186   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:57.899260   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:57.948146   71168 cri.go:89] found id: ""
	I0401 19:34:57.948182   71168 logs.go:276] 0 containers: []
	W0401 19:34:57.948192   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:57.948203   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:57.948270   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:57.826282   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:59.826598   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:58.504492   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:01.003480   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:59.607646   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:02.107162   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:58.017121   71168 cri.go:89] found id: ""
	I0401 19:34:58.017150   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.017161   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:58.017168   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:58.017230   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:58.073881   71168 cri.go:89] found id: ""
	I0401 19:34:58.073905   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.073916   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:58.073923   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:58.073979   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:58.115410   71168 cri.go:89] found id: ""
	I0401 19:34:58.115435   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.115445   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:58.115452   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:58.115512   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:58.155452   71168 cri.go:89] found id: ""
	I0401 19:34:58.155481   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.155492   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:58.155500   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:58.155562   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:58.197335   71168 cri.go:89] found id: ""
	I0401 19:34:58.197376   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.197397   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:58.197407   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:58.197469   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:58.239782   71168 cri.go:89] found id: ""
	I0401 19:34:58.239808   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.239815   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:58.239820   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:58.239870   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:58.280936   71168 cri.go:89] found id: ""
	I0401 19:34:58.280961   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.280971   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:58.280982   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:58.280998   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:58.368357   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:58.368401   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:58.415104   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:58.415132   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:58.474719   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:58.474749   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:58.491004   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:58.491031   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:58.573999   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:01.074865   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:01.091751   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:01.091822   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:01.140053   71168 cri.go:89] found id: ""
	I0401 19:35:01.140079   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.140089   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:01.140096   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:01.140154   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:01.184046   71168 cri.go:89] found id: ""
	I0401 19:35:01.184078   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.184089   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:01.184096   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:01.184161   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:01.225962   71168 cri.go:89] found id: ""
	I0401 19:35:01.225989   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.225999   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:01.226006   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:01.226072   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:01.267212   71168 cri.go:89] found id: ""
	I0401 19:35:01.267234   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.267242   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:01.267247   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:01.267308   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:01.307039   71168 cri.go:89] found id: ""
	I0401 19:35:01.307066   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.307074   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:01.307080   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:01.307132   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:01.347856   71168 cri.go:89] found id: ""
	I0401 19:35:01.347886   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.347898   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:01.347905   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:01.347962   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:01.385893   71168 cri.go:89] found id: ""
	I0401 19:35:01.385923   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.385933   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:01.385940   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:01.385999   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:01.422983   71168 cri.go:89] found id: ""
	I0401 19:35:01.423012   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.423022   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:01.423033   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:01.423048   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:01.469842   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:01.469875   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:01.527536   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:01.527566   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:01.542332   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:01.542357   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:01.617252   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:01.617270   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:01.617284   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:02.325502   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:04.326603   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:06.328115   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:03.005979   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:05.504470   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:04.107681   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:06.607619   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:04.195171   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:04.211963   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:04.212015   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:04.252298   71168 cri.go:89] found id: ""
	I0401 19:35:04.252324   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.252334   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:04.252342   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:04.252396   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:04.299619   71168 cri.go:89] found id: ""
	I0401 19:35:04.299649   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.299659   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:04.299667   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:04.299725   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:04.347386   71168 cri.go:89] found id: ""
	I0401 19:35:04.347409   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.347416   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:04.347426   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:04.347473   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:04.385902   71168 cri.go:89] found id: ""
	I0401 19:35:04.385929   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.385937   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:04.385943   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:04.385993   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:04.425235   71168 cri.go:89] found id: ""
	I0401 19:35:04.425258   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.425266   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:04.425271   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:04.425325   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:04.463849   71168 cri.go:89] found id: ""
	I0401 19:35:04.463881   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.463891   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:04.463899   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:04.463974   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:04.501983   71168 cri.go:89] found id: ""
	I0401 19:35:04.502003   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.502010   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:04.502016   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:04.502072   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:04.544082   71168 cri.go:89] found id: ""
	I0401 19:35:04.544103   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.544113   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:04.544124   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:04.544141   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:04.600545   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:04.600578   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:04.617049   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:04.617075   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:04.696927   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:04.696945   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:04.696957   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:04.780024   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:04.780056   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:07.323161   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:07.339368   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:07.339432   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:07.379407   71168 cri.go:89] found id: ""
	I0401 19:35:07.379429   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.379440   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:07.379452   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:07.379497   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:07.418700   71168 cri.go:89] found id: ""
	I0401 19:35:07.418728   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.418737   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:07.418743   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:07.418788   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:07.457580   71168 cri.go:89] found id: ""
	I0401 19:35:07.457606   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.457617   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:07.457624   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:07.457696   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:07.498211   71168 cri.go:89] found id: ""
	I0401 19:35:07.498240   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.498249   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:07.498256   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:07.498318   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:07.539659   71168 cri.go:89] found id: ""
	I0401 19:35:07.539681   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.539692   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:07.539699   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:07.539759   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:07.577414   71168 cri.go:89] found id: ""
	I0401 19:35:07.577440   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.577450   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:07.577456   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:07.577520   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:07.623318   71168 cri.go:89] found id: ""
	I0401 19:35:07.623340   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.623352   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:07.623358   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:07.623416   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:07.664791   71168 cri.go:89] found id: ""
	I0401 19:35:07.664823   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.664834   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:07.664842   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:07.664854   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:07.722158   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:07.722186   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:07.737838   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:07.737876   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:07.813694   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:07.813717   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:07.813728   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:07.899698   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:07.899740   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:08.825778   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:10.825935   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:07.505933   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:10.003529   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:09.107076   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:11.108917   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:10.446184   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:10.460860   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:10.460927   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:10.505656   71168 cri.go:89] found id: ""
	I0401 19:35:10.505685   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.505692   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:10.505698   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:10.505742   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:10.547771   71168 cri.go:89] found id: ""
	I0401 19:35:10.547796   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.547814   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:10.547820   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:10.547876   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:10.584625   71168 cri.go:89] found id: ""
	I0401 19:35:10.584652   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.584664   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:10.584671   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:10.584737   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:10.625512   71168 cri.go:89] found id: ""
	I0401 19:35:10.625541   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.625552   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:10.625559   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:10.625618   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:10.664905   71168 cri.go:89] found id: ""
	I0401 19:35:10.664936   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.664949   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:10.664955   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:10.665015   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:10.703043   71168 cri.go:89] found id: ""
	I0401 19:35:10.703071   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.703082   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:10.703090   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:10.703149   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:10.747750   71168 cri.go:89] found id: ""
	I0401 19:35:10.747777   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.747790   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:10.747796   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:10.747841   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:10.792944   71168 cri.go:89] found id: ""
	I0401 19:35:10.792970   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.792980   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:10.792989   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:10.793004   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:10.854029   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:10.854058   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:10.868968   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:10.868991   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:10.940537   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:10.940564   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:10.940579   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:11.018201   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:11.018231   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:12.826117   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:14.826387   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:12.003995   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:14.503258   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:16.504686   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:13.608777   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:16.108992   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:13.562139   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:13.579370   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:13.579435   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:13.620811   71168 cri.go:89] found id: ""
	I0401 19:35:13.620838   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.620847   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:13.620859   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:13.620919   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:13.661377   71168 cri.go:89] found id: ""
	I0401 19:35:13.661408   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.661419   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:13.661427   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:13.661489   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:13.702413   71168 cri.go:89] found id: ""
	I0401 19:35:13.702436   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.702445   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:13.702453   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:13.702519   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:13.748760   71168 cri.go:89] found id: ""
	I0401 19:35:13.748788   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.748796   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:13.748803   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:13.748874   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:13.795438   71168 cri.go:89] found id: ""
	I0401 19:35:13.795460   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.795472   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:13.795479   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:13.795537   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:13.835572   71168 cri.go:89] found id: ""
	I0401 19:35:13.835601   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.835612   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:13.835619   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:13.835677   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:13.874301   71168 cri.go:89] found id: ""
	I0401 19:35:13.874327   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.874336   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:13.874342   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:13.874387   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:13.914847   71168 cri.go:89] found id: ""
	I0401 19:35:13.914876   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.914883   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:13.914891   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:13.914904   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:13.929329   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:13.929355   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:14.004332   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:14.004358   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:14.004373   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:14.084901   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:14.084935   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:14.134471   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:14.134500   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:16.693432   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:16.710258   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:16.710332   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:16.757213   71168 cri.go:89] found id: ""
	I0401 19:35:16.757243   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.757254   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:16.757261   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:16.757320   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:16.797134   71168 cri.go:89] found id: ""
	I0401 19:35:16.797174   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.797182   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:16.797188   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:16.797233   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:16.839502   71168 cri.go:89] found id: ""
	I0401 19:35:16.839530   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.839541   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:16.839549   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:16.839609   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:16.881380   71168 cri.go:89] found id: ""
	I0401 19:35:16.881406   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.881413   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:16.881419   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:16.881472   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:16.922968   71168 cri.go:89] found id: ""
	I0401 19:35:16.922991   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.923002   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:16.923009   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:16.923069   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:16.961262   71168 cri.go:89] found id: ""
	I0401 19:35:16.961290   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.961301   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:16.961310   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:16.961369   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:16.996901   71168 cri.go:89] found id: ""
	I0401 19:35:16.996929   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.996940   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:16.996947   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:16.997004   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:17.038447   71168 cri.go:89] found id: ""
	I0401 19:35:17.038473   71168 logs.go:276] 0 containers: []
	W0401 19:35:17.038481   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:17.038489   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:17.038500   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:17.079979   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:17.080013   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:17.136973   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:17.137010   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:17.153083   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:17.153108   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:17.232055   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:17.232078   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:17.232096   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:17.326246   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:19.326903   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:20.818889   70687 pod_ready.go:81] duration metric: took 4m0.000381983s for pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace to be "Ready" ...
	E0401 19:35:20.818918   70687 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace to be "Ready" (will not retry!)
	I0401 19:35:20.818938   70687 pod_ready.go:38] duration metric: took 4m5.525170808s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:35:20.818967   70687 kubeadm.go:591] duration metric: took 4m13.404699267s to restartPrimaryControlPlane
	W0401 19:35:20.819026   70687 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:35:20.819059   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:35:19.004932   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:21.504514   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:18.607067   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:20.609619   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:19.813327   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:19.830168   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:19.830229   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:19.875502   71168 cri.go:89] found id: ""
	I0401 19:35:19.875524   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.875532   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:19.875537   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:19.875591   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:19.916084   71168 cri.go:89] found id: ""
	I0401 19:35:19.916107   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.916117   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:19.916125   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:19.916188   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:19.960673   71168 cri.go:89] found id: ""
	I0401 19:35:19.960699   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.960710   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:19.960717   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:19.960796   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:19.998736   71168 cri.go:89] found id: ""
	I0401 19:35:19.998760   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.998768   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:19.998776   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:19.998840   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:20.043382   71168 cri.go:89] found id: ""
	I0401 19:35:20.043408   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.043418   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:20.043425   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:20.043492   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:20.086132   71168 cri.go:89] found id: ""
	I0401 19:35:20.086158   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.086171   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:20.086178   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:20.086239   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:20.131052   71168 cri.go:89] found id: ""
	I0401 19:35:20.131074   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.131081   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:20.131091   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:20.131151   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:20.174668   71168 cri.go:89] found id: ""
	I0401 19:35:20.174693   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.174699   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:20.174707   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:20.174718   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:20.266503   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:20.266521   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:20.266534   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:20.351555   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:20.351586   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:20.400261   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:20.400289   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:20.455149   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:20.455183   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:23.510048   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:26.005267   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:23.109720   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:25.608633   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:22.972675   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:22.987481   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:22.987555   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:23.032429   71168 cri.go:89] found id: ""
	I0401 19:35:23.032453   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.032461   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:23.032467   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:23.032522   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:23.073286   71168 cri.go:89] found id: ""
	I0401 19:35:23.073313   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.073322   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:23.073330   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:23.073397   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:23.115424   71168 cri.go:89] found id: ""
	I0401 19:35:23.115447   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.115454   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:23.115459   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:23.115506   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:23.164883   71168 cri.go:89] found id: ""
	I0401 19:35:23.164908   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.164918   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:23.164925   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:23.164985   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:23.213617   71168 cri.go:89] found id: ""
	I0401 19:35:23.213656   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.213668   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:23.213675   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:23.213787   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:23.264846   71168 cri.go:89] found id: ""
	I0401 19:35:23.264874   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.264886   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:23.264893   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:23.264958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:23.306467   71168 cri.go:89] found id: ""
	I0401 19:35:23.306495   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.306506   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:23.306514   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:23.306566   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:23.358574   71168 cri.go:89] found id: ""
	I0401 19:35:23.358597   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.358608   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:23.358619   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:23.358634   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:23.437486   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:23.437510   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:23.437525   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:23.555307   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:23.555350   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:23.601776   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:23.601808   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:23.666654   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:23.666688   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
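
	(Editorial note, not part of the captured log: the cycle above repeats for several minutes. With nothing answering on localhost:8443, minikube keeps probing for control-plane containers and then gathers diagnostics over SSH before retrying. A minimal sketch of that diagnostic pass, reproducing only the commands visible in the Run: lines above; the loop and script framing are illustrative, not minikube's actual implementation.)

	    #!/usr/bin/env bash
	    # Sketch of the per-retry diagnostic pass seen in the log above (commands copied from the Run: lines).

	    # 1. Is a kube-apiserver process for this profile running at all?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "apiserver not running"

	    # 2. Look for containers of each control-plane component (all come back empty here).
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="${name}"
	    done

	    # 3. Gather supporting logs: kubelet, dmesg, node description, CRI-O, container status.
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig   # fails: connection to localhost:8443 refused
	    sudo journalctl -u crio -n 400
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
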
	I0401 19:35:26.184503   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:26.199924   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:26.199997   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:26.252151   71168 cri.go:89] found id: ""
	I0401 19:35:26.252181   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.252192   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:26.252199   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:26.252266   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:26.299094   71168 cri.go:89] found id: ""
	I0401 19:35:26.299126   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.299134   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:26.299139   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:26.299194   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:26.340483   71168 cri.go:89] found id: ""
	I0401 19:35:26.340516   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.340533   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:26.340540   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:26.340599   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:26.387153   71168 cri.go:89] found id: ""
	I0401 19:35:26.387180   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.387188   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:26.387194   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:26.387261   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:26.430746   71168 cri.go:89] found id: ""
	I0401 19:35:26.430773   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.430781   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:26.430787   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:26.430854   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:26.478412   71168 cri.go:89] found id: ""
	I0401 19:35:26.478440   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.478451   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:26.478458   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:26.478523   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:26.521120   71168 cri.go:89] found id: ""
	I0401 19:35:26.521150   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.521161   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:26.521168   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:26.521229   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:26.564678   71168 cri.go:89] found id: ""
	I0401 19:35:26.564721   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.564731   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:26.564742   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:26.564757   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:26.625271   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:26.625308   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:26.640505   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:26.640529   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:26.722753   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:26.722777   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:26.722795   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:26.830507   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:26.830551   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:28.505100   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:31.004387   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:28.107396   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:30.108080   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:29.386655   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:29.401232   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:29.401308   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:29.440479   71168 cri.go:89] found id: ""
	I0401 19:35:29.440511   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.440522   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:29.440530   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:29.440590   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:29.479022   71168 cri.go:89] found id: ""
	I0401 19:35:29.479049   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.479057   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:29.479062   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:29.479119   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:29.518179   71168 cri.go:89] found id: ""
	I0401 19:35:29.518208   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.518216   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:29.518222   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:29.518281   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:29.556654   71168 cri.go:89] found id: ""
	I0401 19:35:29.556682   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.556692   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:29.556712   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:29.556772   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:29.593258   71168 cri.go:89] found id: ""
	I0401 19:35:29.593287   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.593295   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:29.593301   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:29.593349   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:29.637215   71168 cri.go:89] found id: ""
	I0401 19:35:29.637243   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.637253   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:29.637261   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:29.637321   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:29.683052   71168 cri.go:89] found id: ""
	I0401 19:35:29.683090   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.683100   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:29.683108   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:29.683164   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:29.730948   71168 cri.go:89] found id: ""
	I0401 19:35:29.730979   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.730991   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:29.731001   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:29.731014   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:29.781969   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:29.782001   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:29.800700   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:29.800729   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:29.877200   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:29.877225   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:29.877244   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:29.958110   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:29.958144   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:32.501060   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:32.519551   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:32.519619   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:32.579776   71168 cri.go:89] found id: ""
	I0401 19:35:32.579802   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.579813   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:32.579824   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:32.579886   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:32.643271   71168 cri.go:89] found id: ""
	I0401 19:35:32.643300   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.643312   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:32.643322   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:32.643387   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:32.688576   71168 cri.go:89] found id: ""
	I0401 19:35:32.688605   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.688614   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:32.688619   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:32.688678   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:32.729867   71168 cri.go:89] found id: ""
	I0401 19:35:32.729890   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.729898   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:32.729906   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:32.729962   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:32.771485   71168 cri.go:89] found id: ""
	I0401 19:35:32.771508   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.771515   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:32.771521   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:32.771574   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:32.809362   71168 cri.go:89] found id: ""
	I0401 19:35:32.809385   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.809393   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:32.809398   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:32.809458   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:32.844916   71168 cri.go:89] found id: ""
	I0401 19:35:32.844941   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.844950   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:32.844955   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:32.845000   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:32.884638   71168 cri.go:89] found id: ""
	I0401 19:35:32.884660   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.884670   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:32.884680   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:32.884695   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:32.937462   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:32.937489   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:32.952842   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:32.952871   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 19:35:33.005516   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:35.504755   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:32.608051   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:35.106708   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:37.108135   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	W0401 19:35:33.035254   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:33.035278   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:33.035294   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:33.114963   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:33.114994   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:35.662190   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:35.675960   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:35.676016   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:35.717300   71168 cri.go:89] found id: ""
	I0401 19:35:35.717329   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.717340   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:35.717347   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:35.717409   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:35.756687   71168 cri.go:89] found id: ""
	I0401 19:35:35.756713   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.756723   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:35.756730   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:35.756788   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:35.796995   71168 cri.go:89] found id: ""
	I0401 19:35:35.797017   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.797025   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:35.797030   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:35.797083   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:35.840419   71168 cri.go:89] found id: ""
	I0401 19:35:35.840444   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.840455   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:35.840462   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:35.840523   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:35.880059   71168 cri.go:89] found id: ""
	I0401 19:35:35.880093   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.880107   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:35.880113   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:35.880171   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:35.929491   71168 cri.go:89] found id: ""
	I0401 19:35:35.929515   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.929523   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:35.929530   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:35.929584   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:35.968745   71168 cri.go:89] found id: ""
	I0401 19:35:35.968771   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.968778   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:35.968784   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:35.968833   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:36.014294   71168 cri.go:89] found id: ""
	I0401 19:35:36.014318   71168 logs.go:276] 0 containers: []
	W0401 19:35:36.014328   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:36.014338   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:36.014359   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:36.068418   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:36.068450   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:36.086343   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:36.086367   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:36.172027   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:36.172053   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:36.172067   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:36.250046   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:36.250080   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:38.004007   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:40.004138   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:39.607714   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:42.107775   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:38.794261   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:38.809535   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:38.809597   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:38.849139   71168 cri.go:89] found id: ""
	I0401 19:35:38.849167   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.849176   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:38.849181   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:38.849238   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:38.886787   71168 cri.go:89] found id: ""
	I0401 19:35:38.886811   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.886821   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:38.886828   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:38.886891   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:38.923388   71168 cri.go:89] found id: ""
	I0401 19:35:38.923419   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.923431   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:38.923438   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:38.923497   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:38.959583   71168 cri.go:89] found id: ""
	I0401 19:35:38.959608   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.959619   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:38.959626   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:38.959682   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:38.998201   71168 cri.go:89] found id: ""
	I0401 19:35:38.998226   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.998233   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:38.998238   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:38.998294   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:39.039669   71168 cri.go:89] found id: ""
	I0401 19:35:39.039692   71168 logs.go:276] 0 containers: []
	W0401 19:35:39.039703   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:39.039710   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:39.039767   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:39.077331   71168 cri.go:89] found id: ""
	I0401 19:35:39.077358   71168 logs.go:276] 0 containers: []
	W0401 19:35:39.077366   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:39.077371   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:39.077423   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:39.125999   71168 cri.go:89] found id: ""
	I0401 19:35:39.126021   71168 logs.go:276] 0 containers: []
	W0401 19:35:39.126031   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:39.126041   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:39.126054   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:39.183579   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:39.183612   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:39.201200   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:39.201227   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:39.282262   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:39.282280   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:39.282291   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:39.365340   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:39.365370   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:41.914909   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:41.929243   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:41.929317   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:41.975594   71168 cri.go:89] found id: ""
	I0401 19:35:41.975622   71168 logs.go:276] 0 containers: []
	W0401 19:35:41.975632   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:41.975639   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:41.975701   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:42.023558   71168 cri.go:89] found id: ""
	I0401 19:35:42.023585   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.023596   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:42.023602   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:42.023662   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:42.074242   71168 cri.go:89] found id: ""
	I0401 19:35:42.074266   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.074276   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:42.074283   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:42.074340   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:42.123327   71168 cri.go:89] found id: ""
	I0401 19:35:42.123358   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.123370   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:42.123378   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:42.123452   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:42.168931   71168 cri.go:89] found id: ""
	I0401 19:35:42.168961   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.168972   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:42.168980   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:42.169037   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:42.211747   71168 cri.go:89] found id: ""
	I0401 19:35:42.211774   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.211784   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:42.211793   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:42.211849   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:42.251809   71168 cri.go:89] found id: ""
	I0401 19:35:42.251830   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.251841   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:42.251849   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:42.251908   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:42.293266   71168 cri.go:89] found id: ""
	I0401 19:35:42.293361   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.293377   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:42.293388   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:42.293405   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:42.364502   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:42.364553   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:42.381147   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:42.381180   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:42.464219   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:42.464238   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:42.464249   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:42.544564   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:42.544594   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:42.006061   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:44.504700   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:46.505615   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:44.606915   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:46.100004   70962 pod_ready.go:81] duration metric: took 4m0.000146584s for pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace to be "Ready" ...
	E0401 19:35:46.100029   70962 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0401 19:35:46.100044   70962 pod_ready.go:38] duration metric: took 4m10.491414096s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:35:46.100088   70962 kubeadm.go:591] duration metric: took 4m18.223285856s to restartPrimaryControlPlane
	W0401 19:35:46.100141   70962 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:35:46.100164   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:35:45.105777   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:45.119911   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:45.119976   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:45.161871   71168 cri.go:89] found id: ""
	I0401 19:35:45.161890   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.161897   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:45.161902   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:45.161949   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:45.198677   71168 cri.go:89] found id: ""
	I0401 19:35:45.198702   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.198710   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:45.198715   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:45.198776   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:45.236938   71168 cri.go:89] found id: ""
	I0401 19:35:45.236972   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.236983   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:45.236990   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:45.237052   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:45.280621   71168 cri.go:89] found id: ""
	I0401 19:35:45.280650   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.280661   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:45.280668   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:45.280727   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:45.326794   71168 cri.go:89] found id: ""
	I0401 19:35:45.326818   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.326827   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:45.326834   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:45.326892   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:45.369405   71168 cri.go:89] found id: ""
	I0401 19:35:45.369431   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.369441   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:45.369446   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:45.369501   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:45.407609   71168 cri.go:89] found id: ""
	I0401 19:35:45.407635   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.407643   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:45.407648   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:45.407720   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:45.444848   71168 cri.go:89] found id: ""
	I0401 19:35:45.444871   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.444881   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:45.444891   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:45.444911   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:45.531938   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:45.531957   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:45.531972   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:45.617109   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:45.617141   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:45.663559   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:45.663591   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:45.717622   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:45.717670   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:49.004037   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:51.004650   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:48.234834   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:48.250543   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:48.250606   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:48.294396   71168 cri.go:89] found id: ""
	I0401 19:35:48.294423   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.294432   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:48.294439   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:48.294504   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:48.336866   71168 cri.go:89] found id: ""
	I0401 19:35:48.336892   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.336902   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:48.336908   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:48.336965   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:48.376031   71168 cri.go:89] found id: ""
	I0401 19:35:48.376065   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.376076   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:48.376084   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:48.376142   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:48.414975   71168 cri.go:89] found id: ""
	I0401 19:35:48.414995   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.415003   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:48.415008   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:48.415058   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:48.453484   71168 cri.go:89] found id: ""
	I0401 19:35:48.453513   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.453524   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:48.453532   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:48.453593   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:48.487712   71168 cri.go:89] found id: ""
	I0401 19:35:48.487739   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.487749   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:48.487757   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:48.487815   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:48.533331   71168 cri.go:89] found id: ""
	I0401 19:35:48.533364   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.533375   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:48.533383   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:48.533442   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:48.574103   71168 cri.go:89] found id: ""
	I0401 19:35:48.574131   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.574139   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:48.574147   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:48.574160   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:48.632068   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:48.632098   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:48.649342   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:48.649369   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:48.721799   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:48.721822   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:48.721836   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:48.821549   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:48.821584   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:51.364852   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:51.380281   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:51.380362   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:51.423383   71168 cri.go:89] found id: ""
	I0401 19:35:51.423412   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.423422   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:51.423430   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:51.423490   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:51.470331   71168 cri.go:89] found id: ""
	I0401 19:35:51.470359   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.470370   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:51.470378   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:51.470441   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:51.520310   71168 cri.go:89] found id: ""
	I0401 19:35:51.520339   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.520350   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:51.520358   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:51.520414   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:51.568681   71168 cri.go:89] found id: ""
	I0401 19:35:51.568706   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.568716   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:51.568724   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:51.568843   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:51.615146   71168 cri.go:89] found id: ""
	I0401 19:35:51.615174   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.615185   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:51.615193   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:51.615256   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:51.658678   71168 cri.go:89] found id: ""
	I0401 19:35:51.658703   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.658712   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:51.658720   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:51.658791   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:51.700071   71168 cri.go:89] found id: ""
	I0401 19:35:51.700097   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.700108   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:51.700114   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:51.700177   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:51.746772   71168 cri.go:89] found id: ""
	I0401 19:35:51.746798   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.746809   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:51.746826   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:51.746849   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:51.762321   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:51.762350   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:51.843300   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:51.843322   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:51.843337   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:51.919059   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:51.919090   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:51.965899   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:51.965925   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:53.564613   70687 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.745530657s)
	I0401 19:35:53.564696   70687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:35:53.582161   70687 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:35:53.593313   70687 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:35:53.604441   70687 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:35:53.604460   70687 kubeadm.go:156] found existing configuration files:
	
	I0401 19:35:53.604502   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:35:53.615367   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:35:53.615426   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:35:53.626375   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:35:53.636924   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:35:53.636975   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:35:53.647493   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:35:53.657319   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:35:53.657373   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:35:53.667422   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:35:53.677235   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:35:53.677308   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:35:53.688043   70687 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:35:53.894204   70687 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
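
	(Editorial note, not part of the captured log: having given up on restarting the existing control plane, the run above resets the node, clears stale kubeconfig files that no longer reference the expected endpoint, and re-runs kubeadm init. A rough sketch of that sequence, using the paths and flags shown in the log; the conditional cleanup loop is an inferred simplification, not minikube's code.)

	    # Sketch of the reset-and-reinit sequence from the log (paths/flags copied from the Run: lines).
	    sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" \
	      kubeadm reset --cri-socket /var/run/crio/crio.sock --force

	    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml

	    # Remove kubeconfigs that do not (or cannot) reference the expected control-plane endpoint.
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}"; then
	        sudo rm -f "/etc/kubernetes/${f}"
	      fi
	    done

	    sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem
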
	I0401 19:35:53.504486   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:55.505966   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:54.523484   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:54.542004   71168 kubeadm.go:591] duration metric: took 4m4.024054342s to restartPrimaryControlPlane
	W0401 19:35:54.542067   71168 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:35:54.542088   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:35:55.179619   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:35:55.196424   71168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:35:55.209517   71168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:35:55.222643   71168 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:35:55.222664   71168 kubeadm.go:156] found existing configuration files:
	
	I0401 19:35:55.222714   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:35:55.234756   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:35:55.234813   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:35:55.246725   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:35:55.258440   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:35:55.258499   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:35:55.270106   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:35:55.280724   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:35:55.280776   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:35:55.293630   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:35:55.305588   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:35:55.305660   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:35:55.318308   71168 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:35:55.574896   71168 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:35:58.004494   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:00.505168   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:02.622337   70687 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 19:36:02.622433   70687 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:36:02.622548   70687 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:36:02.622659   70687 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:36:02.622794   70687 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:36:02.622883   70687 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:36:02.624550   70687 out.go:204]   - Generating certificates and keys ...
	I0401 19:36:02.624640   70687 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:36:02.624734   70687 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:36:02.624861   70687 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:36:02.624952   70687 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:36:02.625042   70687 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:36:02.625114   70687 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:36:02.625206   70687 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:36:02.625271   70687 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:36:02.625337   70687 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:36:02.625398   70687 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:36:02.625430   70687 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:36:02.625475   70687 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:36:02.625519   70687 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:36:02.625567   70687 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 19:36:02.625630   70687 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:36:02.625744   70687 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:36:02.625825   70687 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:36:02.625938   70687 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:36:02.626041   70687 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:36:02.627616   70687 out.go:204]   - Booting up control plane ...
	I0401 19:36:02.627744   70687 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:36:02.627812   70687 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:36:02.627878   70687 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:36:02.627976   70687 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:36:02.628046   70687 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:36:02.628098   70687 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:36:02.628273   70687 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:36:02.628354   70687 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.502318 seconds
	I0401 19:36:02.628467   70687 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 19:36:02.628587   70687 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 19:36:02.628642   70687 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 19:36:02.628800   70687 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-882095 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 19:36:02.628849   70687 kubeadm.go:309] [bootstrap-token] Using token: 821cxx.fac41nwqi8u5mwgu
	I0401 19:36:02.630202   70687 out.go:204]   - Configuring RBAC rules ...
	I0401 19:36:02.630328   70687 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 19:36:02.630413   70687 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 19:36:02.630593   70687 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 19:36:02.630794   70687 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 19:36:02.630941   70687 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 19:36:02.631049   70687 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 19:36:02.631205   70687 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 19:36:02.631255   70687 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 19:36:02.631318   70687 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 19:36:02.631326   70687 kubeadm.go:309] 
	I0401 19:36:02.631412   70687 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 19:36:02.631421   70687 kubeadm.go:309] 
	I0401 19:36:02.631527   70687 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 19:36:02.631534   70687 kubeadm.go:309] 
	I0401 19:36:02.631560   70687 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 19:36:02.631649   70687 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 19:36:02.631721   70687 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 19:36:02.631731   70687 kubeadm.go:309] 
	I0401 19:36:02.631810   70687 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 19:36:02.631822   70687 kubeadm.go:309] 
	I0401 19:36:02.631896   70687 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 19:36:02.631910   70687 kubeadm.go:309] 
	I0401 19:36:02.631986   70687 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 19:36:02.632088   70687 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 19:36:02.632181   70687 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 19:36:02.632190   70687 kubeadm.go:309] 
	I0401 19:36:02.632319   70687 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 19:36:02.632427   70687 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 19:36:02.632437   70687 kubeadm.go:309] 
	I0401 19:36:02.632532   70687 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 821cxx.fac41nwqi8u5mwgu \
	I0401 19:36:02.632695   70687 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 \
	I0401 19:36:02.632726   70687 kubeadm.go:309] 	--control-plane 
	I0401 19:36:02.632736   70687 kubeadm.go:309] 
	I0401 19:36:02.632860   70687 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 19:36:02.632875   70687 kubeadm.go:309] 
	I0401 19:36:02.632983   70687 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 821cxx.fac41nwqi8u5mwgu \
	I0401 19:36:02.633118   70687 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 
	I0401 19:36:02.633132   70687 cni.go:84] Creating CNI manager for ""
	I0401 19:36:02.633138   70687 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:36:02.634595   70687 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:36:02.635812   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:36:02.671750   70687 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0401 19:36:02.705562   70687 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 19:36:02.705657   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:02.705671   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-882095 minikube.k8s.io/updated_at=2024_04_01T19_36_02_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=embed-certs-882095 minikube.k8s.io/primary=true
	I0401 19:36:02.762626   70687 ops.go:34] apiserver oom_adj: -16
	I0401 19:36:03.065957   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:03.566513   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:04.066178   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:04.566321   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:05.066798   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:05.566877   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:06.066520   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:03.004878   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:05.505057   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:06.566982   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:07.066931   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:07.566107   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:08.066843   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:08.566186   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:09.066550   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:09.566205   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:10.066287   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:10.566902   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:11.066656   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:08.005380   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:10.504026   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:11.566894   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:12.066235   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:12.566599   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:13.066132   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:13.566865   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:14.066759   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:14.566435   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:15.066907   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:15.566851   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:16.066880   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:16.158125   70687 kubeadm.go:1107] duration metric: took 13.452541301s to wait for elevateKubeSystemPrivileges
	W0401 19:36:16.158168   70687 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 19:36:16.158176   70687 kubeadm.go:393] duration metric: took 5m8.800288084s to StartCluster
	I0401 19:36:16.158195   70687 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:36:16.158268   70687 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:36:16.159976   70687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:36:16.160254   70687 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:36:16.162239   70687 out.go:177] * Verifying Kubernetes components...
	I0401 19:36:16.160346   70687 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 19:36:16.162276   70687 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-882095"
	I0401 19:36:16.162311   70687 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-882095"
	W0401 19:36:16.162320   70687 addons.go:243] addon storage-provisioner should already be in state true
	I0401 19:36:16.162339   70687 addons.go:69] Setting default-storageclass=true in profile "embed-certs-882095"
	I0401 19:36:16.162348   70687 addons.go:69] Setting metrics-server=true in profile "embed-certs-882095"
	I0401 19:36:16.162363   70687 addons.go:234] Setting addon metrics-server=true in "embed-certs-882095"
	W0401 19:36:16.162371   70687 addons.go:243] addon metrics-server should already be in state true
	I0401 19:36:16.162377   70687 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-882095"
	I0401 19:36:16.162384   70687 host.go:66] Checking if "embed-certs-882095" exists ...
	I0401 19:36:16.162345   70687 host.go:66] Checking if "embed-certs-882095" exists ...
	I0401 19:36:16.163767   70687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:36:16.160484   70687 config.go:182] Loaded profile config "embed-certs-882095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:36:16.162673   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.162687   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.163886   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.163900   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.162704   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.163963   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.180743   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41647
	I0401 19:36:16.180759   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46707
	I0401 19:36:16.180746   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44419
	I0401 19:36:16.181334   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.181342   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.181369   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.181830   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.181848   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.181973   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.181991   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.182001   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.182007   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.182187   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.182360   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.182393   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.182592   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:36:16.182726   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.182753   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.182829   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.182871   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.186198   70687 addons.go:234] Setting addon default-storageclass=true in "embed-certs-882095"
	W0401 19:36:16.186226   70687 addons.go:243] addon default-storageclass should already be in state true
	I0401 19:36:16.186258   70687 host.go:66] Checking if "embed-certs-882095" exists ...
	I0401 19:36:16.186603   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.186636   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.198494   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I0401 19:36:16.198862   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.199298   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.199315   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.199777   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.200056   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:36:16.201955   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39769
	I0401 19:36:16.202167   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:36:16.202416   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.204728   70687 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:36:16.202891   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.205309   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35751
	I0401 19:36:16.207964   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.208022   70687 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:36:16.208038   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 19:36:16.208057   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:36:16.208345   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.208482   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.208550   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:36:16.209106   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.209121   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.209764   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.210220   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.210258   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.211015   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:36:16.213549   70687 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 19:36:16.212105   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.215606   70687 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 19:36:16.213577   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:36:16.215625   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 19:36:16.215632   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.212867   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:36:16.215647   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:36:16.215791   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:36:16.215913   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:36:16.216028   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:36:16.218302   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.218924   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:36:16.218948   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.219174   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:36:16.219340   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:36:16.219496   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:36:16.219818   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:36:16.227813   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0401 19:36:16.228198   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.228612   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.228635   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.228989   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.229159   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:36:16.230712   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:36:16.230969   70687 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 19:36:16.230987   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 19:36:16.231003   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:36:16.233712   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.234102   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:36:16.234126   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.234273   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:36:16.234435   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:36:16.234593   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:36:16.234753   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:36:16.332504   70687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:36:16.354423   70687 node_ready.go:35] waiting up to 6m0s for node "embed-certs-882095" to be "Ready" ...
	I0401 19:36:16.363527   70687 node_ready.go:49] node "embed-certs-882095" has status "Ready":"True"
	I0401 19:36:16.363555   70687 node_ready.go:38] duration metric: took 9.10669ms for node "embed-certs-882095" to be "Ready" ...
	I0401 19:36:16.363567   70687 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:36:16.369606   70687 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-fx6hf" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:16.435769   70687 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 19:36:16.435793   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 19:36:16.450934   70687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:36:16.468137   70687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 19:36:16.474209   70687 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 19:36:16.474233   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 19:36:13.003028   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:15.004924   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:16.530201   70687 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:36:16.530222   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 19:36:16.607557   70687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:36:17.044156   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.044183   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.044165   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.044244   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.044569   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.044606   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.044617   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.044624   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.044630   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.044639   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.044656   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.044657   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Closing plugin on server side
	I0401 19:36:17.044670   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.044616   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Closing plugin on server side
	I0401 19:36:17.044947   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.044963   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.044964   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.044973   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.045019   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Closing plugin on server side
	I0401 19:36:17.058441   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.058469   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.058718   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.058735   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.276263   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.276283   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.276548   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.276562   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.276571   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.276584   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.276823   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.276837   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.276852   70687 addons.go:470] Verifying addon metrics-server=true in "embed-certs-882095"
	I0401 19:36:17.278536   70687 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0401 19:36:17.279740   70687 addons.go:505] duration metric: took 1.119396s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0401 19:36:18.412746   70687 pod_ready.go:102] pod "coredns-76f75df574-fx6hf" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:19.378799   70687 pod_ready.go:92] pod "coredns-76f75df574-fx6hf" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.378819   70687 pod_ready.go:81] duration metric: took 3.009189982s for pod "coredns-76f75df574-fx6hf" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.378828   70687 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hwbw6" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.384482   70687 pod_ready.go:92] pod "coredns-76f75df574-hwbw6" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.384498   70687 pod_ready.go:81] duration metric: took 5.664781ms for pod "coredns-76f75df574-hwbw6" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.384507   70687 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.390258   70687 pod_ready.go:92] pod "etcd-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.390274   70687 pod_ready.go:81] duration metric: took 5.761319ms for pod "etcd-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.390281   70687 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.395592   70687 pod_ready.go:92] pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.395611   70687 pod_ready.go:81] duration metric: took 5.323181ms for pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.395622   70687 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.400979   70687 pod_ready.go:92] pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.400994   70687 pod_ready.go:81] duration metric: took 5.365282ms for pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.401002   70687 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mbs4m" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.775009   70687 pod_ready.go:92] pod "kube-proxy-mbs4m" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.775036   70687 pod_ready.go:81] duration metric: took 374.027521ms for pod "kube-proxy-mbs4m" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.775047   70687 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:20.174962   70687 pod_ready.go:92] pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:20.174986   70687 pod_ready.go:81] duration metric: took 399.930828ms for pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:20.174994   70687 pod_ready.go:38] duration metric: took 3.811414774s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
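(Editor's illustration) The pod_ready.go lines throughout this log are a polling loop over the PodReady condition of each system-critical pod, printing "Ready":"False" until the condition flips to True. A rough stand-alone equivalent using client-go, with a placeholder kubeconfig path and pod name rather than the test harness's real plumbing, is:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path and pod name, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(
			context.TODO(), "kube-scheduler-embed-certs-882095", metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					fmt.Printf("pod %q has status \"Ready\":%q\n", pod.Name, cond.Status)
					if cond.Status == corev1.ConditionTrue {
						return // pod is Ready; stop polling
					}
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
}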
	I0401 19:36:20.175006   70687 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:36:20.175064   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:36:20.191452   70687 api_server.go:72] duration metric: took 4.031156406s to wait for apiserver process to appear ...
	I0401 19:36:20.191477   70687 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:36:20.191498   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:36:20.196706   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I0401 19:36:20.197772   70687 api_server.go:141] control plane version: v1.29.3
	I0401 19:36:20.197791   70687 api_server.go:131] duration metric: took 6.308074ms to wait for apiserver health ...
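(Editor's illustration) The api_server.go wait above reduces to polling the apiserver's /healthz endpoint until it returns 200 "ok". A minimal hand-rolled probe under simplifying assumptions (TLS verification is skipped here, whereas minikube verifies against the cluster CA, and the address is hard-coded from the log) could be:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Address taken from the log; a real probe reads it from the cluster config.
	url := "https://192.168.39.190:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Simplification: skip certificate verification for this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
}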
	I0401 19:36:20.197799   70687 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:36:20.380616   70687 system_pods.go:59] 9 kube-system pods found
	I0401 19:36:20.380645   70687 system_pods.go:61] "coredns-76f75df574-fx6hf" [1c07b740-3374-4a54-a786-784b23ec6b83] Running
	I0401 19:36:20.380651   70687 system_pods.go:61] "coredns-76f75df574-hwbw6" [7b12145a-2689-47e9-9724-d80790ed079c] Running
	I0401 19:36:20.380657   70687 system_pods.go:61] "etcd-embed-certs-882095" [3848d128-2fde-42f5-9543-b8d0343ba15b] Running
	I0401 19:36:20.380663   70687 system_pods.go:61] "kube-apiserver-embed-certs-882095" [116c5cd1-2d04-4a85-96e9-bd1e6af4cba4] Running
	I0401 19:36:20.380668   70687 system_pods.go:61] "kube-controller-manager-embed-certs-882095" [8a2282cf-2a87-4cee-a482-355e92048642] Running
	I0401 19:36:20.380672   70687 system_pods.go:61] "kube-proxy-mbs4m" [ffccbae0-7538-4a75-a6ce-afce49865f07] Running
	I0401 19:36:20.380676   70687 system_pods.go:61] "kube-scheduler-embed-certs-882095" [d2554007-1c9c-4238-809a-72aae1fb7de3] Running
	I0401 19:36:20.380684   70687 system_pods.go:61] "metrics-server-57f55c9bc5-dktr6" [c6adfcab-c746-4ad8-abe2-8b300389a4f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:20.380689   70687 system_pods.go:61] "storage-provisioner" [bcff0d1d-a555-4b25-9aa5-7ab1188c21fd] Running
	I0401 19:36:20.380700   70687 system_pods.go:74] duration metric: took 182.895079ms to wait for pod list to return data ...
	I0401 19:36:20.380711   70687 default_sa.go:34] waiting for default service account to be created ...
	I0401 19:36:20.574739   70687 default_sa.go:45] found service account: "default"
	I0401 19:36:20.574771   70687 default_sa.go:55] duration metric: took 194.049249ms for default service account to be created ...
	I0401 19:36:20.574785   70687 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 19:36:20.781600   70687 system_pods.go:86] 9 kube-system pods found
	I0401 19:36:20.781630   70687 system_pods.go:89] "coredns-76f75df574-fx6hf" [1c07b740-3374-4a54-a786-784b23ec6b83] Running
	I0401 19:36:20.781638   70687 system_pods.go:89] "coredns-76f75df574-hwbw6" [7b12145a-2689-47e9-9724-d80790ed079c] Running
	I0401 19:36:20.781658   70687 system_pods.go:89] "etcd-embed-certs-882095" [3848d128-2fde-42f5-9543-b8d0343ba15b] Running
	I0401 19:36:20.781664   70687 system_pods.go:89] "kube-apiserver-embed-certs-882095" [116c5cd1-2d04-4a85-96e9-bd1e6af4cba4] Running
	I0401 19:36:20.781672   70687 system_pods.go:89] "kube-controller-manager-embed-certs-882095" [8a2282cf-2a87-4cee-a482-355e92048642] Running
	I0401 19:36:20.781678   70687 system_pods.go:89] "kube-proxy-mbs4m" [ffccbae0-7538-4a75-a6ce-afce49865f07] Running
	I0401 19:36:20.781686   70687 system_pods.go:89] "kube-scheduler-embed-certs-882095" [d2554007-1c9c-4238-809a-72aae1fb7de3] Running
	I0401 19:36:20.781695   70687 system_pods.go:89] "metrics-server-57f55c9bc5-dktr6" [c6adfcab-c746-4ad8-abe2-8b300389a4f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:20.781705   70687 system_pods.go:89] "storage-provisioner" [bcff0d1d-a555-4b25-9aa5-7ab1188c21fd] Running
	I0401 19:36:20.781722   70687 system_pods.go:126] duration metric: took 206.928658ms to wait for k8s-apps to be running ...
	I0401 19:36:20.781738   70687 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 19:36:20.781789   70687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:36:20.798910   70687 system_svc.go:56] duration metric: took 17.163227ms WaitForService to wait for kubelet
	I0401 19:36:20.798940   70687 kubeadm.go:576] duration metric: took 4.638649198s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:36:20.798962   70687 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:36:20.975011   70687 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:36:20.975034   70687 node_conditions.go:123] node cpu capacity is 2
	I0401 19:36:20.975045   70687 node_conditions.go:105] duration metric: took 176.077669ms to run NodePressure ...
	I0401 19:36:20.975055   70687 start.go:240] waiting for startup goroutines ...
	I0401 19:36:20.975061   70687 start.go:245] waiting for cluster config update ...
	I0401 19:36:20.975070   70687 start.go:254] writing updated cluster config ...
	I0401 19:36:20.975313   70687 ssh_runner.go:195] Run: rm -f paused
	I0401 19:36:21.024261   70687 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0401 19:36:21.026583   70687 out.go:177] * Done! kubectl is now configured to use "embed-certs-882095" cluster and "default" namespace by default
	I0401 19:36:17.504621   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:20.003964   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:18.623277   70962 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.523094705s)
	I0401 19:36:18.623344   70962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:36:18.640939   70962 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:36:18.653983   70962 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:36:18.666162   70962 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:36:18.666182   70962 kubeadm.go:156] found existing configuration files:
	
	I0401 19:36:18.666233   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0401 19:36:18.679043   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:36:18.679092   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:36:18.690185   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0401 19:36:18.703017   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:36:18.703078   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:36:18.714986   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0401 19:36:18.727138   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:36:18.727188   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:36:18.737886   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0401 19:36:18.748013   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:36:18.748064   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:36:18.758552   70962 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:36:18.988309   70962 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:36:22.004400   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:24.004510   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:26.504264   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:28.053408   70962 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 19:36:28.053478   70962 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:36:28.053544   70962 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:36:28.053677   70962 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:36:28.053837   70962 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:36:28.053953   70962 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:36:28.055426   70962 out.go:204]   - Generating certificates and keys ...
	I0401 19:36:28.055513   70962 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:36:28.055614   70962 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:36:28.055742   70962 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:36:28.055834   70962 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:36:28.055942   70962 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:36:28.056022   70962 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:36:28.056104   70962 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:36:28.056167   70962 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:36:28.056250   70962 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:36:28.056331   70962 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:36:28.056371   70962 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:36:28.056449   70962 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:36:28.056531   70962 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:36:28.056600   70962 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 19:36:28.056677   70962 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:36:28.056772   70962 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:36:28.056870   70962 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:36:28.057006   70962 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:36:28.057100   70962 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:36:28.058575   70962 out.go:204]   - Booting up control plane ...
	I0401 19:36:28.058693   70962 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:36:28.058773   70962 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:36:28.058830   70962 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:36:28.058923   70962 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:36:28.058998   70962 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:36:28.059032   70962 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:36:28.059201   70962 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:36:28.059307   70962 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003148 seconds
	I0401 19:36:28.059432   70962 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 19:36:28.059592   70962 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 19:36:28.059665   70962 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 19:36:28.059892   70962 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-734648 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 19:36:28.059966   70962 kubeadm.go:309] [bootstrap-token] Using token: x76swh.zbuhmc8jrh5hodf9
	I0401 19:36:28.061321   70962 out.go:204]   - Configuring RBAC rules ...
	I0401 19:36:28.061450   70962 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 19:36:28.061577   70962 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 19:36:28.061803   70962 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 19:36:28.061993   70962 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 19:36:28.062153   70962 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 19:36:28.062252   70962 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 19:36:28.062363   70962 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 19:36:28.062422   70962 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 19:36:28.062481   70962 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 19:36:28.062493   70962 kubeadm.go:309] 
	I0401 19:36:28.062556   70962 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 19:36:28.062569   70962 kubeadm.go:309] 
	I0401 19:36:28.062686   70962 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 19:36:28.062697   70962 kubeadm.go:309] 
	I0401 19:36:28.062727   70962 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 19:36:28.062805   70962 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 19:36:28.062872   70962 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 19:36:28.062886   70962 kubeadm.go:309] 
	I0401 19:36:28.062959   70962 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 19:36:28.062969   70962 kubeadm.go:309] 
	I0401 19:36:28.063050   70962 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 19:36:28.063061   70962 kubeadm.go:309] 
	I0401 19:36:28.063103   70962 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 19:36:28.063172   70962 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 19:36:28.063234   70962 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 19:36:28.063240   70962 kubeadm.go:309] 
	I0401 19:36:28.063337   70962 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 19:36:28.063440   70962 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 19:36:28.063453   70962 kubeadm.go:309] 
	I0401 19:36:28.063559   70962 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token x76swh.zbuhmc8jrh5hodf9 \
	I0401 19:36:28.063676   70962 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 \
	I0401 19:36:28.063725   70962 kubeadm.go:309] 	--control-plane 
	I0401 19:36:28.063734   70962 kubeadm.go:309] 
	I0401 19:36:28.063835   70962 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 19:36:28.063844   70962 kubeadm.go:309] 
	I0401 19:36:28.063955   70962 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token x76swh.zbuhmc8jrh5hodf9 \
	I0401 19:36:28.064092   70962 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 
	I0401 19:36:28.064105   70962 cni.go:84] Creating CNI manager for ""
	I0401 19:36:28.064114   70962 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:36:28.065560   70962 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:36:28.505029   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:31.005436   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:28.066823   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:36:28.089595   70962 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
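	(Note: the 1-k8s.conflist written above is generated in memory by minikube and its contents are not captured in this log. As a rough sketch only, a minimal bridge CNI configuration of the kind this step produces might look like the following; the file name matches the log, but the subnet and plugin options here are illustrative assumptions, not the file minikube actually wrote.)

    # illustrative sketch only -- subnet and plugin options are assumptions
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF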
	I0401 19:36:28.150074   70962 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 19:36:28.150195   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:28.150206   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-734648 minikube.k8s.io/updated_at=2024_04_01T19_36_28_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=default-k8s-diff-port-734648 minikube.k8s.io/primary=true
	I0401 19:36:28.494391   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:28.529148   70962 ops.go:34] apiserver oom_adj: -16
	I0401 19:36:28.994780   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:29.494976   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:29.994627   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:30.495192   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:30.995334   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:31.494861   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:31.994576   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:33.505264   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:35.506298   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:32.495185   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:32.995090   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:33.494755   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:33.994758   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:34.494609   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:34.995423   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:35.495219   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:35.994557   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:36.495175   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:36.994857   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:37.494725   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:37.994846   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:38.494687   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:38.994615   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:39.494929   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:39.994514   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:40.494838   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:40.994846   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:41.105036   70962 kubeadm.go:1107] duration metric: took 12.954907711s to wait for elevateKubeSystemPrivileges
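	(The burst of repeated "kubectl get sa default" calls above is minikube polling until the default ServiceAccount exists, the step it reports as elevateKubeSystemPrivileges. A hand-rolled sketch of the same wait, reusing the binary and kubeconfig paths shown in the log, would be:)

    # sketch of the same wait loop; paths taken from the log lines above
    KCTL=/var/lib/minikube/binaries/v1.29.3/kubectl
    KCFG=/var/lib/minikube/kubeconfig
    until sudo "$KCTL" get sa default --kubeconfig="$KCFG" >/dev/null 2>&1; do
      sleep 0.5   # minikube retries roughly every 500ms
    done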
	W0401 19:36:41.105072   70962 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 19:36:41.105080   70962 kubeadm.go:393] duration metric: took 5m13.291890816s to StartCluster
	I0401 19:36:41.105098   70962 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:36:41.105193   70962 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:36:41.107226   70962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:36:41.107451   70962 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.145 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:36:41.109245   70962 out.go:177] * Verifying Kubernetes components...
	I0401 19:36:41.107543   70962 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 19:36:41.107682   70962 config.go:182] Loaded profile config "default-k8s-diff-port-734648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:36:41.110583   70962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:36:41.110596   70962 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-734648"
	I0401 19:36:41.110621   70962 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-734648"
	I0401 19:36:41.110620   70962 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-734648"
	I0401 19:36:41.110652   70962 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-734648"
	I0401 19:36:41.110588   70962 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-734648"
	W0401 19:36:41.110665   70962 addons.go:243] addon metrics-server should already be in state true
	I0401 19:36:41.110685   70962 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-734648"
	W0401 19:36:41.110699   70962 addons.go:243] addon storage-provisioner should already be in state true
	I0401 19:36:41.110700   70962 host.go:66] Checking if "default-k8s-diff-port-734648" exists ...
	I0401 19:36:41.110727   70962 host.go:66] Checking if "default-k8s-diff-port-734648" exists ...
	I0401 19:36:41.111032   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.111039   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.111062   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.111098   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.111126   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.111158   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.129376   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46657
	I0401 19:36:41.130833   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38623
	I0401 19:36:41.131158   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.131258   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.131761   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.131786   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.132119   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.132313   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.132437   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.132477   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:36:41.133129   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36213
	I0401 19:36:41.133449   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.133456   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.133871   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.133894   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.133990   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.134021   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.134159   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.134572   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.134609   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.143808   70962 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-734648"
	W0401 19:36:41.143829   70962 addons.go:243] addon default-storageclass should already be in state true
	I0401 19:36:41.143858   70962 host.go:66] Checking if "default-k8s-diff-port-734648" exists ...
	I0401 19:36:41.144202   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.144241   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.154009   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38703
	I0401 19:36:41.156112   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45449
	I0401 19:36:41.156579   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.157085   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.157112   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.157458   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.157631   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:36:41.157891   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.158593   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.158615   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.158924   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.159123   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:36:41.160683   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:36:41.162801   70962 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 19:36:41.164275   70962 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 19:36:41.164292   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 19:36:41.164310   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:36:41.162762   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:36:41.163321   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39643
	I0401 19:36:41.166161   70962 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:36:38.004666   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:40.005118   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:41.164866   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.167473   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.167806   70962 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:36:41.167833   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 19:36:41.167850   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:36:41.168056   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.168074   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.168145   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:36:41.168163   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.168194   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:36:41.168353   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:36:41.168429   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.168583   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:36:41.168723   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:36:41.169323   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.169374   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.170857   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.171269   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:36:41.171323   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.171412   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:36:41.171576   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:36:41.171723   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:36:41.171860   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:36:41.191280   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42133
	I0401 19:36:41.191576   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.192122   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.192152   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.192511   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.192673   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:36:41.194286   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:36:41.194528   70962 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 19:36:41.194546   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 19:36:41.194564   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:36:41.197639   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.198235   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:36:41.198259   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.198296   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:36:41.198491   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:36:41.198670   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:36:41.198857   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:36:41.308472   70962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:36:41.334121   70962 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-734648" to be "Ready" ...
	I0401 19:36:41.343898   70962 node_ready.go:49] node "default-k8s-diff-port-734648" has status "Ready":"True"
	I0401 19:36:41.343943   70962 node_ready.go:38] duration metric: took 9.780821ms for node "default-k8s-diff-port-734648" to be "Ready" ...
	I0401 19:36:41.343952   70962 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:36:41.352294   70962 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.362318   70962 pod_ready.go:92] pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:41.362345   70962 pod_ready.go:81] duration metric: took 10.020335ms for pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.362358   70962 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.367338   70962 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:41.367356   70962 pod_ready.go:81] duration metric: took 4.990987ms for pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.367364   70962 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.372379   70962 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:41.372401   70962 pod_ready.go:81] duration metric: took 5.030239ms for pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.372412   70962 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.377862   70962 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:41.377881   70962 pod_ready.go:81] duration metric: took 5.460968ms for pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.377891   70962 pod_ready.go:38] duration metric: took 33.929349ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:36:41.377915   70962 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:36:41.377965   70962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:36:41.396518   70962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:36:41.407024   70962 api_server.go:72] duration metric: took 299.545156ms to wait for apiserver process to appear ...
	I0401 19:36:41.407049   70962 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:36:41.407068   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:36:41.411429   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 200:
	ok
	I0401 19:36:41.412620   70962 api_server.go:141] control plane version: v1.29.3
	I0401 19:36:41.412640   70962 api_server.go:131] duration metric: took 5.58478ms to wait for apiserver health ...
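	(The healthz probe above talks to the apiserver directly on the non-default port 8444. It can be reproduced from a shell; the default public-info-viewer RBAC binding exposes /healthz, /livez and /readyz to unauthenticated clients, so a client certificate should not be needed. Address and port are taken from the log.)

    # reproduce the health probe shown above
    curl -k https://192.168.61.145:8444/healthz
    # or, via the configured kubeconfig
    kubectl get --raw /healthz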
	I0401 19:36:41.412646   70962 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:36:41.426474   70962 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 19:36:41.426500   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 19:36:41.447003   70962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 19:36:41.470135   70962 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 19:36:41.470153   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 19:36:41.526684   70962 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:36:41.526710   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 19:36:41.540871   70962 system_pods.go:59] 4 kube-system pods found
	I0401 19:36:41.540894   70962 system_pods.go:61] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:41.540900   70962 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:41.540905   70962 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:41.540908   70962 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:41.540914   70962 system_pods.go:74] duration metric: took 128.262683ms to wait for pod list to return data ...
	I0401 19:36:41.540920   70962 default_sa.go:34] waiting for default service account to be created ...
	I0401 19:36:41.625507   70962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:36:41.750232   70962 default_sa.go:45] found service account: "default"
	I0401 19:36:41.750261   70962 default_sa.go:55] duration metric: took 209.334562ms for default service account to be created ...
	I0401 19:36:41.750273   70962 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 19:36:41.968623   70962 system_pods.go:86] 7 kube-system pods found
	I0401 19:36:41.968651   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Pending
	I0401 19:36:41.968657   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Pending
	I0401 19:36:41.968663   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:41.968669   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:41.968675   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:41.968683   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:36:41.968690   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:41.968712   70962 retry.go:31] will retry after 288.42332ms: missing components: kube-dns, kube-proxy
	I0401 19:36:42.231814   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.231848   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.231904   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.231925   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.232160   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Closing plugin on server side
	I0401 19:36:42.232161   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.232179   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.232187   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.232191   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Closing plugin on server side
	I0401 19:36:42.232199   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.232223   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.232235   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.232244   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.232255   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.232431   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.232478   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.232578   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Closing plugin on server side
	I0401 19:36:42.232612   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.232629   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.251515   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.251538   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.251795   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.251809   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.267102   70962 system_pods.go:86] 8 kube-system pods found
	I0401 19:36:42.267135   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:42.267148   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:42.267163   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:42.267181   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:42.267187   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:42.267196   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:36:42.267204   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:42.267222   70962 system_pods.go:89] "storage-provisioner" [8509e661-1b53-4018-b6b0-b6a5e242768d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:36:42.267244   70962 retry.go:31] will retry after 336.906399ms: missing components: kube-dns, kube-proxy
	I0401 19:36:42.632180   70962 system_pods.go:86] 9 kube-system pods found
	I0401 19:36:42.632212   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:42.632223   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:42.632232   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:42.632240   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:42.632247   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:42.632257   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:36:42.632264   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:42.632275   70962 system_pods.go:89] "metrics-server-57f55c9bc5-fj5x5" [e25fa51c-d80e-4ddc-898f-3b9903746537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:42.632289   70962 system_pods.go:89] "storage-provisioner" [8509e661-1b53-4018-b6b0-b6a5e242768d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:36:42.632313   70962 retry.go:31] will retry after 406.571029ms: missing components: kube-dns, kube-proxy
	I0401 19:36:42.739308   70962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.113759645s)
	I0401 19:36:42.739364   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.739383   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.739822   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.739842   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.739859   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Closing plugin on server side
	I0401 19:36:42.739867   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.739890   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.740171   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.740186   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.740198   70962 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-734648"
	I0401 19:36:42.742233   70962 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0401 19:36:42.743265   70962 addons.go:505] duration metric: took 1.635721448s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
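	(The addon set enabled above -- storage-provisioner, default-storageclass, metrics-server -- can also be toggled after startup with the minikube CLI; the profile name is taken from the log.)

    minikube addons enable metrics-server -p default-k8s-diff-port-734648
    minikube addons list -p default-k8s-diff-port-734648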
	I0401 19:36:43.053149   70962 system_pods.go:86] 9 kube-system pods found
	I0401 19:36:43.053183   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:43.053195   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:43.053205   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:43.053215   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:43.053223   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:43.053235   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:36:43.053240   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:43.053249   70962 system_pods.go:89] "metrics-server-57f55c9bc5-fj5x5" [e25fa51c-d80e-4ddc-898f-3b9903746537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:43.053258   70962 system_pods.go:89] "storage-provisioner" [8509e661-1b53-4018-b6b0-b6a5e242768d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:36:43.053275   70962 retry.go:31] will retry after 524.250739ms: missing components: kube-dns, kube-proxy
	I0401 19:36:43.591419   70962 system_pods.go:86] 9 kube-system pods found
	I0401 19:36:43.591451   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Running
	I0401 19:36:43.591463   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:43.591471   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:43.591480   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:43.591487   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:43.591493   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Running
	I0401 19:36:43.591498   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:43.591508   70962 system_pods.go:89] "metrics-server-57f55c9bc5-fj5x5" [e25fa51c-d80e-4ddc-898f-3b9903746537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:43.591517   70962 system_pods.go:89] "storage-provisioner" [8509e661-1b53-4018-b6b0-b6a5e242768d] Running
	I0401 19:36:43.591529   70962 system_pods.go:126] duration metric: took 1.841248999s to wait for k8s-apps to be running ...
	I0401 19:36:43.591561   70962 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 19:36:43.591613   70962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:36:43.611873   70962 system_svc.go:56] duration metric: took 20.296001ms WaitForService to wait for kubelet
	I0401 19:36:43.611907   70962 kubeadm.go:576] duration metric: took 2.504430824s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:36:43.611930   70962 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:36:43.617697   70962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:36:43.617720   70962 node_conditions.go:123] node cpu capacity is 2
	I0401 19:36:43.617732   70962 node_conditions.go:105] duration metric: took 5.796357ms to run NodePressure ...
	I0401 19:36:43.617745   70962 start.go:240] waiting for startup goroutines ...
	I0401 19:36:43.617754   70962 start.go:245] waiting for cluster config update ...
	I0401 19:36:43.617765   70962 start.go:254] writing updated cluster config ...
	I0401 19:36:43.618023   70962 ssh_runner.go:195] Run: rm -f paused
	I0401 19:36:43.666581   70962 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0401 19:36:43.668685   70962 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-734648" cluster and "default" namespace by default
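	(At this point the kubeconfig has been updated, so the cluster is reachable with plain kubectl. A quick sanity check would be the following; the context name is assumed to match the profile name, as minikube does by default.)

    kubectl config use-context default-k8s-diff-port-734648
    kubectl get nodes -o wide
    kubectl -n kube-system get pods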
	I0401 19:36:42.505149   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:45.003855   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:47.004247   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:49.504898   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:51.505403   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:54.005163   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:56.503395   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:58.503791   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:00.504001   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:02.504193   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:05.003540   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:07.003582   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:09.503975   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:12.005037   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:14.503460   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:16.504630   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:19.004307   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:21.004909   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:23.503286   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:25.503469   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:27.503520   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:30.004792   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:32.503693   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:35.005137   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:37.504848   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:39.504961   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:41.510644   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:44.004680   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:46.005118   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:51.561231   71168 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 19:37:51.561356   71168 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0401 19:37:51.563350   71168 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0401 19:37:51.563417   71168 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:37:51.563497   71168 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:37:51.563596   71168 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:37:51.563711   71168 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:37:51.563797   71168 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:37:51.565710   71168 out.go:204]   - Generating certificates and keys ...
	I0401 19:37:51.565809   71168 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:37:51.565908   71168 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:37:51.566051   71168 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:37:51.566136   71168 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:37:51.566230   71168 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:37:51.566325   71168 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:37:51.566402   71168 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:37:51.566464   71168 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:37:51.566580   71168 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:37:51.566688   71168 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:37:51.566727   71168 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:37:51.566774   71168 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:37:51.566822   71168 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:37:51.566917   71168 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:37:51.567001   71168 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:37:51.567068   71168 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:37:51.567210   71168 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:37:51.567314   71168 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:37:51.567371   71168 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:37:51.567473   71168 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:37:48.504708   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:51.005355   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:51.569285   71168 out.go:204]   - Booting up control plane ...
	I0401 19:37:51.569394   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:37:51.569498   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:37:51.569568   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:37:51.569661   71168 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:37:51.569802   71168 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:37:51.569866   71168 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0401 19:37:51.569957   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.570195   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.570287   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.570514   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.570589   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.570769   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.570859   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.571033   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.571134   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.571342   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.571351   71168 kubeadm.go:309] 
	I0401 19:37:51.571394   71168 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0401 19:37:51.571453   71168 kubeadm.go:309] 		timed out waiting for the condition
	I0401 19:37:51.571475   71168 kubeadm.go:309] 
	I0401 19:37:51.571521   71168 kubeadm.go:309] 	This error is likely caused by:
	I0401 19:37:51.571558   71168 kubeadm.go:309] 		- The kubelet is not running
	I0401 19:37:51.571676   71168 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 19:37:51.571687   71168 kubeadm.go:309] 
	I0401 19:37:51.571824   71168 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 19:37:51.571880   71168 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0401 19:37:51.571921   71168 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0401 19:37:51.571931   71168 kubeadm.go:309] 
	I0401 19:37:51.572077   71168 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 19:37:51.572198   71168 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 19:37:51.572209   71168 kubeadm.go:309] 
	I0401 19:37:51.572359   71168 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 19:37:51.572477   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 19:37:51.572576   71168 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0401 19:37:51.572676   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 19:37:51.572731   71168 kubeadm.go:309] 
	W0401 19:37:51.572793   71168 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
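The crictl commands suggested in the kubeadm output above are the standard way to locate a crashed control-plane container under cri-o. A minimal sketch of that follow-up, assuming the socket path from the log and picking kube-apiserver as the component to inspect:

    # List all Kubernetes containers (including exited ones), take the newest
    # kube-apiserver container ID, and dump its logs. The socket path comes from
    # the log above; choosing kube-apiserver is an assumption, any crashed
    # control-plane container works the same way.
    CID=$(sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a \
            | grep kube-apiserver | grep -v pause | awk '{print $1}' | head -n1)
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs "$CID"

minikube itself does not stop to do this; the following lines show it running kubeadm reset and retrying the init.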
	
	I0401 19:37:51.572851   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:37:52.428554   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:37:52.445151   71168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:37:52.456989   71168 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:37:52.457010   71168 kubeadm.go:156] found existing configuration files:
	
	I0401 19:37:52.457053   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:37:52.468305   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:37:52.468375   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:37:52.479305   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:37:52.489703   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:37:52.489753   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:37:52.501023   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:37:52.512418   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:37:52.512480   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:37:52.523850   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:37:52.534358   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:37:52.534425   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
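The grep-then-rm sequence above is minikube's stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is removed before the next kubeadm init. A compressed sketch of the same check (the loop form is illustrative, not minikube's actual code):

    # Drop any kubeconfig that does not point at the expected control-plane
    # endpoint; file names and endpoint are taken from the log lines above.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done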
	I0401 19:37:52.546135   71168 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:37:52.779427   71168 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:37:52.997253   70284 pod_ready.go:81] duration metric: took 4m0.000092266s for pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace to be "Ready" ...
	E0401 19:37:52.997287   70284 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace to be "Ready" (will not retry!)
	I0401 19:37:52.997309   70284 pod_ready.go:38] duration metric: took 4m43.911595731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:37:52.997333   70284 kubeadm.go:591] duration metric: took 5m31.840082505s to restartPrimaryControlPlane
	W0401 19:37:52.997393   70284 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:37:52.997421   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:38:25.458760   70284 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.46129187s)
	I0401 19:38:25.458845   70284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:38:25.476633   70284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:38:25.487615   70284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:38:25.498590   70284 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:38:25.498616   70284 kubeadm.go:156] found existing configuration files:
	
	I0401 19:38:25.498701   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:38:25.509063   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:38:25.509128   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:38:25.519806   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:38:25.530433   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:38:25.530488   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:38:25.540979   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:38:25.550786   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:38:25.550847   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:38:25.561979   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:38:25.571832   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:38:25.571898   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:38:25.582501   70284 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:38:25.646956   70284 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.0
	I0401 19:38:25.647046   70284 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:38:25.825328   70284 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:38:25.825459   70284 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:38:25.825574   70284 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:38:26.066201   70284 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:38:26.069071   70284 out.go:204]   - Generating certificates and keys ...
	I0401 19:38:26.069170   70284 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:38:26.069260   70284 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:38:26.069402   70284 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:38:26.069493   70284 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:38:26.069588   70284 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:38:26.069703   70284 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:38:26.069765   70284 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:38:26.069822   70284 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:38:26.069986   70284 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:38:26.070644   70284 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:38:26.071149   70284 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:38:26.071308   70284 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:38:26.204651   70284 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:38:26.368926   70284 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 19:38:26.586004   70284 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:38:26.710851   70284 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:38:26.858015   70284 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:38:26.858741   70284 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:38:26.863879   70284 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:38:26.865794   70284 out.go:204]   - Booting up control plane ...
	I0401 19:38:26.865898   70284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:38:26.865984   70284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:38:26.866081   70284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:38:26.886171   70284 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:38:26.887118   70284 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:38:26.887177   70284 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:38:27.021053   70284 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 19:38:27.021142   70284 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0401 19:38:28.023462   70284 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002303634s
	I0401 19:38:28.023549   70284 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 19:38:34.026967   70284 kubeadm.go:309] [api-check] The API server is healthy after 6.003391014s
	I0401 19:38:34.044095   70284 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 19:38:34.061716   70284 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 19:38:34.092708   70284 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 19:38:34.093037   70284 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-472858 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 19:38:34.111758   70284 kubeadm.go:309] [bootstrap-token] Using token: 45cmca.rj16278sw3ueq3us
	I0401 19:38:34.113211   70284 out.go:204]   - Configuring RBAC rules ...
	I0401 19:38:34.113333   70284 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 19:38:34.122292   70284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 19:38:34.133114   70284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 19:38:34.138441   70284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 19:38:34.143964   70284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 19:38:34.148675   70284 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 19:38:34.438167   70284 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 19:38:34.885250   70284 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 19:38:35.439990   70284 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 19:38:35.441439   70284 kubeadm.go:309] 
	I0401 19:38:35.441532   70284 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 19:38:35.441545   70284 kubeadm.go:309] 
	I0401 19:38:35.441659   70284 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 19:38:35.441690   70284 kubeadm.go:309] 
	I0401 19:38:35.441752   70284 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 19:38:35.441845   70284 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 19:38:35.441930   70284 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 19:38:35.441938   70284 kubeadm.go:309] 
	I0401 19:38:35.442014   70284 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 19:38:35.442028   70284 kubeadm.go:309] 
	I0401 19:38:35.442067   70284 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 19:38:35.442073   70284 kubeadm.go:309] 
	I0401 19:38:35.442120   70284 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 19:38:35.442186   70284 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 19:38:35.442295   70284 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 19:38:35.442307   70284 kubeadm.go:309] 
	I0401 19:38:35.442426   70284 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 19:38:35.442552   70284 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 19:38:35.442565   70284 kubeadm.go:309] 
	I0401 19:38:35.442643   70284 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 45cmca.rj16278sw3ueq3us \
	I0401 19:38:35.442766   70284 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 \
	I0401 19:38:35.442803   70284 kubeadm.go:309] 	--control-plane 
	I0401 19:38:35.442813   70284 kubeadm.go:309] 
	I0401 19:38:35.442922   70284 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 19:38:35.442936   70284 kubeadm.go:309] 
	I0401 19:38:35.443008   70284 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 45cmca.rj16278sw3ueq3us \
	I0401 19:38:35.443097   70284 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 
	I0401 19:38:35.443436   70284 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
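The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 digest of the cluster CA's public key. It can be recomputed on the node with the usual openssl recipe; the certificate path below assumes minikube's certificateDir reported earlier in this log:

    # Should print the hex digest shown after "sha256:" in the join command.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'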
	I0401 19:38:35.443530   70284 cni.go:84] Creating CNI manager for ""
	I0401 19:38:35.443546   70284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:38:35.445089   70284 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:38:35.446328   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:38:35.459788   70284 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
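The 457-byte file written above is the bridge CNI config minikube drops when no CNI addon is selected, typically wiring the CNI "bridge" plugin with host-local address management for the pod network. Its contents are not captured in this log, but they can be read back from the node (profile name taken from the log):

    # Inspect the generated CNI conflist on the minikube node.
    minikube -p no-preload-472858 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist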
	I0401 19:38:35.486202   70284 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 19:38:35.486300   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:35.486308   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-472858 minikube.k8s.io/updated_at=2024_04_01T19_38_35_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=no-preload-472858 minikube.k8s.io/primary=true
	I0401 19:38:35.700677   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:35.731567   70284 ops.go:34] apiserver oom_adj: -16
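The -16 recorded above is the legacy oom_adj score of the kube-apiserver process: the scale runs from -17 (never OOM-kill) to +15, so a strongly negative value keeps the API server among the last processes targeted under memory pressure. The same check by hand, reusing the command from the log:

    # More negative means the kernel OOM killer targets this process last.
    cat /proc/$(pgrep kube-apiserver)/oom_adj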
	I0401 19:38:36.200955   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:36.701003   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:37.201632   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:37.700719   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:38.201316   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:38.701334   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:39.201609   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:39.701034   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:40.201771   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:40.700786   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:41.201750   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:41.701709   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:42.201682   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:42.700838   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:43.201123   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:43.701587   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:44.200860   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:44.700795   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:45.200850   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:45.701273   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:46.201701   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:46.701450   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:47.201496   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:47.701351   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:47.800239   70284 kubeadm.go:1107] duration metric: took 12.313994383s to wait for elevateKubeSystemPrivileges
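The run of identical `kubectl get sa default` commands above is a poll: minikube keeps retrying until the `default` ServiceAccount is visible in the new cluster, then reports the elapsed time as elevateKubeSystemPrivileges. A standalone equivalent of that wait, using the binary and kubeconfig paths from the log:

    # Retry until the default ServiceAccount exists (sleep interval is arbitrary).
    until sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done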
	W0401 19:38:47.800287   70284 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 19:38:47.800298   70284 kubeadm.go:393] duration metric: took 6m26.705086714s to StartCluster
	I0401 19:38:47.800320   70284 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:38:47.800410   70284 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:38:47.802818   70284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:38:47.803132   70284 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:38:47.805445   70284 out.go:177] * Verifying Kubernetes components...
	I0401 19:38:47.803273   70284 config.go:182] Loaded profile config "no-preload-472858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0401 19:38:47.803252   70284 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 19:38:47.806734   70284 addons.go:69] Setting storage-provisioner=true in profile "no-preload-472858"
	I0401 19:38:47.806761   70284 addons.go:69] Setting default-storageclass=true in profile "no-preload-472858"
	I0401 19:38:47.806774   70284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:38:47.806777   70284 addons.go:69] Setting metrics-server=true in profile "no-preload-472858"
	I0401 19:38:47.806802   70284 addons.go:234] Setting addon metrics-server=true in "no-preload-472858"
	W0401 19:38:47.806815   70284 addons.go:243] addon metrics-server should already be in state true
	I0401 19:38:47.806850   70284 host.go:66] Checking if "no-preload-472858" exists ...
	I0401 19:38:47.806802   70284 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-472858"
	I0401 19:38:47.806768   70284 addons.go:234] Setting addon storage-provisioner=true in "no-preload-472858"
	W0401 19:38:47.807229   70284 addons.go:243] addon storage-provisioner should already be in state true
	I0401 19:38:47.807257   70284 host.go:66] Checking if "no-preload-472858" exists ...
	I0401 19:38:47.807289   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.807332   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.807340   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.807366   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.807620   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.807690   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.823665   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38305
	I0401 19:38:47.823684   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35487
	I0401 19:38:47.824174   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.824205   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.824709   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.824732   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.824838   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.824867   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.825094   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.825276   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.825700   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.825746   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.825844   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.825866   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.826415   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38845
	I0401 19:38:47.826845   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.827305   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.827330   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.827800   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.828004   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:38:47.831735   70284 addons.go:234] Setting addon default-storageclass=true in "no-preload-472858"
	W0401 19:38:47.831760   70284 addons.go:243] addon default-storageclass should already be in state true
	I0401 19:38:47.831791   70284 host.go:66] Checking if "no-preload-472858" exists ...
	I0401 19:38:47.832170   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.832218   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.842050   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42037
	I0401 19:38:47.842479   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.842963   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.842983   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.843354   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.843513   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:38:47.845360   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:38:47.845430   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33357
	I0401 19:38:47.847622   70284 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:38:47.845959   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.847568   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38785
	I0401 19:38:47.849255   70284 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:38:47.849283   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 19:38:47.849303   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:38:47.849356   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.849524   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.849536   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.850173   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.850228   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.850238   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.850362   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:38:47.851206   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.851773   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.851803   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.852404   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:38:47.854167   70284 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 19:38:47.853141   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.853926   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:38:47.855729   70284 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 19:38:47.855746   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 19:38:47.855763   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:38:47.855728   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:38:47.855809   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.855854   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:38:47.856000   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:38:47.856160   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:38:47.858726   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.859782   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:38:47.859826   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.859948   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:38:47.860138   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:38:47.860310   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:38:47.860593   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:38:47.870182   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34517
	I0401 19:38:47.870616   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.871182   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.871203   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.871561   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.871947   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:38:47.873606   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:38:47.873931   70284 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 19:38:47.873949   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 19:38:47.873967   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:38:47.876826   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.877259   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:38:47.877286   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.877389   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:38:47.877672   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:38:47.877816   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:38:47.877974   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:38:48.053731   70284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:38:48.081160   70284 node_ready.go:35] waiting up to 6m0s for node "no-preload-472858" to be "Ready" ...
	I0401 19:38:48.107976   70284 node_ready.go:49] node "no-preload-472858" has status "Ready":"True"
	I0401 19:38:48.107998   70284 node_ready.go:38] duration metric: took 26.793115ms for node "no-preload-472858" to be "Ready" ...
	I0401 19:38:48.108009   70284 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:38:48.115968   70284 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.158349   70284 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 19:38:48.158383   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 19:38:48.166047   70284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 19:38:48.181902   70284 pod_ready.go:92] pod "etcd-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:38:48.181922   70284 pod_ready.go:81] duration metric: took 65.920299ms for pod "etcd-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.181935   70284 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.199372   70284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:38:48.232110   70284 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 19:38:48.232140   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 19:38:48.251891   70284 pod_ready.go:92] pod "kube-apiserver-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:38:48.251914   70284 pod_ready.go:81] duration metric: took 69.970077ms for pod "kube-apiserver-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.251929   70284 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.309605   70284 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:38:48.309627   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 19:38:48.325907   70284 pod_ready.go:92] pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:38:48.325928   70284 pod_ready.go:81] duration metric: took 73.991711ms for pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.325938   70284 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.373418   70284 pod_ready.go:92] pod "kube-scheduler-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:38:48.373448   70284 pod_ready.go:81] duration metric: took 47.503272ms for pod "kube-scheduler-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.373456   70284 pod_ready.go:38] duration metric: took 265.436317ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:38:48.373479   70284 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:38:48.373543   70284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:38:48.396444   70284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:38:48.564838   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.564860   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.565180   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:48.565197   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.565227   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.565247   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.565258   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.565489   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.565506   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.579332   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.579355   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.579599   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:48.579637   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.579645   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.884887   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.884920   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.884938   70284 api_server.go:72] duration metric: took 1.08176251s to wait for apiserver process to appear ...
	I0401 19:38:48.884958   70284 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:38:48.885018   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:38:48.885232   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.885252   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.885260   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.885269   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.885236   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:48.885519   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.887182   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.885555   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:48.895737   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 200:
	ok
	I0401 19:38:48.899521   70284 api_server.go:141] control plane version: v1.30.0-rc.0
	I0401 19:38:48.899539   70284 api_server.go:131] duration metric: took 14.574989ms to wait for apiserver health ...
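The healthz probe above can be reproduced outside the harness; the endpoint is taken from the log, and -k is used only because this sketch does not load the cluster CA:

    # Expect HTTP 200 with the body "ok", matching the api_server.go lines above.
    curl -k https://192.168.72.119:8443/healthz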
	I0401 19:38:48.899547   70284 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:38:48.914064   70284 system_pods.go:59] 8 kube-system pods found
	I0401 19:38:48.914090   70284 system_pods.go:61] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:48.914106   70284 system_pods.go:61] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:48.914112   70284 system_pods.go:61] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:48.914117   70284 system_pods.go:61] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:48.914122   70284 system_pods.go:61] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:48.914126   70284 system_pods.go:61] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:48.914134   70284 system_pods.go:61] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:48.914138   70284 system_pods.go:61] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending
	I0401 19:38:48.914146   70284 system_pods.go:74] duration metric: took 14.594359ms to wait for pod list to return data ...
	I0401 19:38:48.914156   70284 default_sa.go:34] waiting for default service account to be created ...
	I0401 19:38:48.924790   70284 default_sa.go:45] found service account: "default"
	I0401 19:38:48.924814   70284 default_sa.go:55] duration metric: took 10.649887ms for default service account to be created ...
	I0401 19:38:48.924825   70284 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 19:38:48.930993   70284 system_pods.go:86] 8 kube-system pods found
	I0401 19:38:48.931020   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:48.931037   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:48.931047   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:48.931056   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:48.931066   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:48.931074   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:48.931089   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:48.931098   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:48.931117   70284 retry.go:31] will retry after 297.45527ms: missing components: kube-dns, kube-proxy
	I0401 19:38:49.123999   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:49.124019   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:49.124344   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:49.124394   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:49.124406   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:49.124414   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:49.124356   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:49.124627   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:49.124661   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:49.124677   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:49.124690   70284 addons.go:470] Verifying addon metrics-server=true in "no-preload-472858"
	I0401 19:38:49.127415   70284 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0401 19:38:49.129047   70284 addons.go:505] duration metric: took 1.325796036s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0401 19:38:49.236094   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:49.236127   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.236136   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.236145   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:49.236152   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:49.236159   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:49.236168   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:49.236175   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:49.236185   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:49.236198   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:49.236218   70284 retry.go:31] will retry after 287.299528ms: missing components: kube-dns, kube-proxy
	I0401 19:38:49.530606   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:49.530643   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.530654   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.530663   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:49.530670   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:49.530678   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:49.530687   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:49.530697   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:49.530711   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:49.530721   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:49.530744   70284 retry.go:31] will retry after 435.286919ms: missing components: kube-dns, kube-proxy
	I0401 19:38:49.974049   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:49.974090   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.974103   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.974113   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:49.974121   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:49.974128   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:49.974142   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:49.974153   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:49.974168   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:49.974181   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:49.974203   70284 retry.go:31] will retry after 577.959209ms: missing components: kube-dns, kube-proxy
	I0401 19:38:50.558750   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:50.558780   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:50.558787   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:50.558795   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:50.558805   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:50.558812   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:50.558820   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:50.558833   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:50.558840   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:50.558846   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:50.558863   70284 retry.go:31] will retry after 723.380101ms: missing components: kube-dns, kube-proxy
	I0401 19:38:51.291450   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:51.291487   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:51.291498   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Running
	I0401 19:38:51.291508   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:51.291514   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:51.291521   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:51.291527   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Running
	I0401 19:38:51.291532   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:51.291543   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:51.291551   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Running
	I0401 19:38:51.291559   70284 system_pods.go:126] duration metric: took 2.366728733s to wait for k8s-apps to be running ...
	I0401 19:38:51.291576   70284 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 19:38:51.291622   70284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:38:51.310224   70284 system_svc.go:56] duration metric: took 18.63923ms WaitForService to wait for kubelet
	I0401 19:38:51.310250   70284 kubeadm.go:576] duration metric: took 3.50708191s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:38:51.310269   70284 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:38:51.312899   70284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:38:51.312919   70284 node_conditions.go:123] node cpu capacity is 2
	I0401 19:38:51.312930   70284 node_conditions.go:105] duration metric: took 2.654739ms to run NodePressure ...
	I0401 19:38:51.312945   70284 start.go:240] waiting for startup goroutines ...
	I0401 19:38:51.312958   70284 start.go:245] waiting for cluster config update ...
	I0401 19:38:51.312985   70284 start.go:254] writing updated cluster config ...
	I0401 19:38:51.313269   70284 ssh_runner.go:195] Run: rm -f paused
	I0401 19:38:51.365041   70284 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.0 (minor skew: 1)
	I0401 19:38:51.367173   70284 out.go:177] * Done! kubectl is now configured to use "no-preload-472858" cluster and "default" namespace by default
	I0401 19:39:48.856665   71168 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 19:39:48.856779   71168 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0401 19:39:48.858840   71168 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0401 19:39:48.858896   71168 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:39:48.858987   71168 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:39:48.859122   71168 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:39:48.859222   71168 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:39:48.859314   71168 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:39:48.861104   71168 out.go:204]   - Generating certificates and keys ...
	I0401 19:39:48.861202   71168 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:39:48.861277   71168 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:39:48.861381   71168 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:39:48.861492   71168 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:39:48.861596   71168 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:39:48.861699   71168 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:39:48.861791   71168 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:39:48.861897   71168 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:39:48.862009   71168 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:39:48.862118   71168 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:39:48.862176   71168 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:39:48.862260   71168 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:39:48.862338   71168 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:39:48.862420   71168 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:39:48.862480   71168 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:39:48.862527   71168 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:39:48.862618   71168 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:39:48.862693   71168 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:39:48.862734   71168 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:39:48.862804   71168 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:39:48.864199   71168 out.go:204]   - Booting up control plane ...
	I0401 19:39:48.864291   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:39:48.864359   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:39:48.864420   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:39:48.864504   71168 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:39:48.864712   71168 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:39:48.864788   71168 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0401 19:39:48.864871   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865069   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.865153   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865344   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.865453   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865674   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.865755   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865989   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.866095   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.866269   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.866285   71168 kubeadm.go:309] 
	I0401 19:39:48.866343   71168 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0401 19:39:48.866402   71168 kubeadm.go:309] 		timed out waiting for the condition
	I0401 19:39:48.866414   71168 kubeadm.go:309] 
	I0401 19:39:48.866458   71168 kubeadm.go:309] 	This error is likely caused by:
	I0401 19:39:48.866506   71168 kubeadm.go:309] 		- The kubelet is not running
	I0401 19:39:48.866651   71168 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 19:39:48.866665   71168 kubeadm.go:309] 
	I0401 19:39:48.866816   71168 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 19:39:48.866865   71168 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0401 19:39:48.866895   71168 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0401 19:39:48.866901   71168 kubeadm.go:309] 
	I0401 19:39:48.866989   71168 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 19:39:48.867061   71168 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 19:39:48.867070   71168 kubeadm.go:309] 
	I0401 19:39:48.867194   71168 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 19:39:48.867327   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 19:39:48.867417   71168 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0401 19:39:48.867526   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 19:39:48.867555   71168 kubeadm.go:309] 
	I0401 19:39:48.867633   71168 kubeadm.go:393] duration metric: took 7m58.404831893s to StartCluster
	I0401 19:39:48.867702   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:39:48.867764   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:39:48.922329   71168 cri.go:89] found id: ""
	I0401 19:39:48.922359   71168 logs.go:276] 0 containers: []
	W0401 19:39:48.922369   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:39:48.922377   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:39:48.922435   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:39:48.966212   71168 cri.go:89] found id: ""
	I0401 19:39:48.966235   71168 logs.go:276] 0 containers: []
	W0401 19:39:48.966243   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:39:48.966248   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:39:48.966309   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:39:49.015141   71168 cri.go:89] found id: ""
	I0401 19:39:49.015171   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.015182   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:39:49.015189   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:39:49.015249   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:39:49.053042   71168 cri.go:89] found id: ""
	I0401 19:39:49.053067   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.053077   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:39:49.053085   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:39:49.053144   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:39:49.093880   71168 cri.go:89] found id: ""
	I0401 19:39:49.093906   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.093914   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:39:49.093923   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:39:49.093976   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:39:49.129730   71168 cri.go:89] found id: ""
	I0401 19:39:49.129752   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.129760   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:39:49.129766   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:39:49.129818   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:39:49.171075   71168 cri.go:89] found id: ""
	I0401 19:39:49.171107   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.171118   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:39:49.171125   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:39:49.171204   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:39:49.208279   71168 cri.go:89] found id: ""
	I0401 19:39:49.208308   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.208319   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:39:49.208330   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:39:49.208345   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:39:49.294128   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:39:49.294148   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:39:49.294162   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:39:49.400930   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:39:49.400963   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:39:49.443111   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:39:49.443140   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:39:49.501382   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:39:49.501417   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0401 19:39:49.516418   71168 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0401 19:39:49.516461   71168 out.go:239] * 
	W0401 19:39:49.516521   71168 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 19:39:49.516591   71168 out.go:239] * 
	W0401 19:39:49.517377   71168 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 19:39:49.520389   71168 out.go:177] 
	W0401 19:39:49.521593   71168 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 19:39:49.521639   71168 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0401 19:39:49.521686   71168 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0401 19:39:49.523181   71168 out.go:177] 
	
	
	==> CRI-O <==
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.346668329Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712000391346633820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dcfdb2be-301a-4db0-9c74-5d5dbf2707d7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.347349487Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6afff903-b118-469e-b013-7d49cbaa1d3b name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.347506836Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6afff903-b118-469e-b013-7d49cbaa1d3b name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.347609535Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6afff903-b118-469e-b013-7d49cbaa1d3b name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.384964493Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d483707e-de0c-44e2-b02e-cfc313893986 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.385035854Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d483707e-de0c-44e2-b02e-cfc313893986 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.386866578Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0c6eb34-2b87-44e1-8137-f52c924bb382 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.387235803Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712000391387214076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0c6eb34-2b87-44e1-8137-f52c924bb382 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.388221697Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=872bd6e0-9768-45f0-b18b-5825f394bc62 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.388277760Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=872bd6e0-9768-45f0-b18b-5825f394bc62 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.388310460Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=872bd6e0-9768-45f0-b18b-5825f394bc62 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.423475786Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98875543-a1a0-48c4-ad01-8284a6a6a24e name=/runtime.v1.RuntimeService/Version
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.423586461Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98875543-a1a0-48c4-ad01-8284a6a6a24e name=/runtime.v1.RuntimeService/Version
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.424886385Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5dd806fb-a0c5-4083-9f68-e9437f961f1d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.425269543Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712000391425242294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5dd806fb-a0c5-4083-9f68-e9437f961f1d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.425882671Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=654e8d7f-937f-4664-80a7-33e37d65dfc0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.425968488Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=654e8d7f-937f-4664-80a7-33e37d65dfc0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.426003526Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=654e8d7f-937f-4664-80a7-33e37d65dfc0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.463052559Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=11ccfd86-0876-4834-b32c-4eef7c257eb2 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.463177076Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=11ccfd86-0876-4834-b32c-4eef7c257eb2 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.464162259Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b295a845-648d-4772-b42a-f7871fb26903 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.464533742Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712000391464505420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b295a845-648d-4772-b42a-f7871fb26903 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.465504417Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22204e41-0f86-459b-9ec3-a3d0308cfb6b name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.465625981Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22204e41-0f86-459b-9ec3-a3d0308cfb6b name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:39:51 old-k8s-version-163608 crio[649]: time="2024-04-01 19:39:51.465663846Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=22204e41-0f86-459b-9ec3-a3d0308cfb6b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 1 19:31] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054895] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048499] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.863744] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.552305] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.682250] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.094710] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.062423] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068466] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.188548] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.189826] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.311320] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +7.231097] systemd-fstab-generator[843]: Ignoring "noauto" option for root device
	[  +0.070737] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.974260] systemd-fstab-generator[968]: Ignoring "noauto" option for root device
	[Apr 1 19:32] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 1 19:35] systemd-fstab-generator[4981]: Ignoring "noauto" option for root device
	[Apr 1 19:37] systemd-fstab-generator[5267]: Ignoring "noauto" option for root device
	[  +0.080693] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:39:51 up 8 min,  0 users,  load average: 0.08, 0.11, 0.08
	Linux old-k8s-version-163608 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 01 19:39:49 old-k8s-version-163608 kubelet[5449]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Apr 01 19:39:49 old-k8s-version-163608 kubelet[5449]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000b69560, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000b46b40, 0x24, 0x0, ...)
	Apr 01 19:39:49 old-k8s-version-163608 kubelet[5449]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Apr 01 19:39:49 old-k8s-version-163608 kubelet[5449]: net.(*Dialer).DialContext(0xc000978000, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b46b40, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 01 19:39:49 old-k8s-version-163608 kubelet[5449]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Apr 01 19:39:49 old-k8s-version-163608 kubelet[5449]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0009747e0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b46b40, 0x24, 0x60, 0x7f859407b618, 0x118, ...)
	Apr 01 19:39:49 old-k8s-version-163608 kubelet[5449]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 01 19:39:49 old-k8s-version-163608 kubelet[5449]: net/http.(*Transport).dial(0xc000a42c80, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b46b40, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 01 19:39:49 old-k8s-version-163608 kubelet[5449]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 01 19:39:49 old-k8s-version-163608 kubelet[5449]: net/http.(*Transport).dialConn(0xc000a42c80, 0x4f7fe00, 0xc000120018, 0x0, 0xc000101aa0, 0x5, 0xc000b46b40, 0x24, 0x0, 0xc000b4ca20, ...)
	Apr 01 19:39:49 old-k8s-version-163608 kubelet[5449]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 01 19:39:49 old-k8s-version-163608 kubelet[5449]: net/http.(*Transport).dialConnFor(0xc000a42c80, 0xc000ae9ce0)
	Apr 01 19:39:49 old-k8s-version-163608 kubelet[5449]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 01 19:39:49 old-k8s-version-163608 kubelet[5449]: created by net/http.(*Transport).queueForDial
	Apr 01 19:39:49 old-k8s-version-163608 kubelet[5449]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 01 19:39:49 old-k8s-version-163608 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 01 19:39:49 old-k8s-version-163608 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 01 19:39:49 old-k8s-version-163608 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Apr 01 19:39:49 old-k8s-version-163608 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 01 19:39:49 old-k8s-version-163608 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 01 19:39:50 old-k8s-version-163608 kubelet[5518]: I0401 19:39:50.104810    5518 server.go:416] Version: v1.20.0
	Apr 01 19:39:50 old-k8s-version-163608 kubelet[5518]: I0401 19:39:50.105040    5518 server.go:837] Client rotation is on, will bootstrap in background
	Apr 01 19:39:50 old-k8s-version-163608 kubelet[5518]: I0401 19:39:50.107031    5518 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 01 19:39:50 old-k8s-version-163608 kubelet[5518]: W0401 19:39:50.108445    5518 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 01 19:39:50 old-k8s-version-163608 kubelet[5518]: I0401 19:39:50.108499    5518 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-163608 -n old-k8s-version-163608
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-163608 -n old-k8s-version-163608: exit status 2 (277.686333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-163608" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (720.15s)

x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.46s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-882095 -n embed-certs-882095
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-01 19:45:21.612290936 +0000 UTC m=+5951.067842121
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-882095 -n embed-certs-882095
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-882095 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-882095 logs -n 25: (2.199422723s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p bridge-408543 sudo cat                              | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo                                  | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | containerd config dump                                 |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo                                  | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | systemctl status crio --all                            |                              |         |                |                     |                     |
	|         | --full --no-pager                                      |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo                                  | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo find                             | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo crio                             | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p bridge-408543                                       | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	| delete  | -p                                                     | disable-driver-mounts-580301 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | disable-driver-mounts-580301                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:24 UTC |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-472858             | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-472858                                   | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-882095            | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:24 UTC | 01 Apr 24 19:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-882095                                  | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:24 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-734648  | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC | 01 Apr 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC |                     |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-472858                  | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-472858                                   | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC | 01 Apr 24 19:38 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-163608        | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-882095                 | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-882095                                  | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC | 01 Apr 24 19:36 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-734648       | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:36 UTC |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-163608                              | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-163608             | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-163608                              | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 19:27:52
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 19:27:52.967684   71168 out.go:291] Setting OutFile to fd 1 ...
	I0401 19:27:52.967904   71168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:27:52.967912   71168 out.go:304] Setting ErrFile to fd 2...
	I0401 19:27:52.967916   71168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:27:52.968071   71168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 19:27:52.968601   71168 out.go:298] Setting JSON to false
	I0401 19:27:52.969458   71168 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7825,"bootTime":1711991848,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:27:52.969511   71168 start.go:139] virtualization: kvm guest
	I0401 19:27:52.972337   71168 out.go:177] * [old-k8s-version-163608] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 19:27:52.973728   71168 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 19:27:52.973774   71168 notify.go:220] Checking for updates...
	I0401 19:27:52.975050   71168 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:27:52.976498   71168 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:27:52.977880   71168 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:27:52.979140   71168 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 19:27:52.980397   71168 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 19:27:52.982116   71168 config.go:182] Loaded profile config "old-k8s-version-163608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 19:27:52.982478   71168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:27:52.982569   71168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:27:52.996903   71168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44083
	I0401 19:27:52.997230   71168 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:27:52.997702   71168 main.go:141] libmachine: Using API Version  1
	I0401 19:27:52.997724   71168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:27:52.998082   71168 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:27:52.998286   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:27:53.000287   71168 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0401 19:27:53.001714   71168 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 19:27:53.001993   71168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:27:53.002030   71168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:27:53.016155   71168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0401 19:27:53.016524   71168 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:27:53.016981   71168 main.go:141] libmachine: Using API Version  1
	I0401 19:27:53.017003   71168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:27:53.017352   71168 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:27:53.017550   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:27:53.051163   71168 out.go:177] * Using the kvm2 driver based on existing profile
	I0401 19:27:53.052475   71168 start.go:297] selected driver: kvm2
	I0401 19:27:53.052488   71168 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:27:53.052621   71168 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 19:27:53.053266   71168 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:27:53.053349   71168 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 19:27:53.067629   71168 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 19:27:53.067994   71168 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:27:53.068065   71168 cni.go:84] Creating CNI manager for ""
	I0401 19:27:53.068083   71168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:27:53.068130   71168 start.go:340] cluster config:
	{Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:27:53.068640   71168 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:27:53.070506   71168 out.go:177] * Starting "old-k8s-version-163608" primary control-plane node in "old-k8s-version-163608" cluster
	I0401 19:27:53.071686   71168 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 19:27:53.071716   71168 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0401 19:27:53.071726   71168 cache.go:56] Caching tarball of preloaded images
	I0401 19:27:53.071807   71168 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 19:27:53.071818   71168 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0401 19:27:53.071904   71168 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/config.json ...
	I0401 19:27:53.072076   71168 start.go:360] acquireMachinesLock for old-k8s-version-163608: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 19:27:57.821850   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:00.893934   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:06.973950   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:10.045903   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:16.125969   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:19.197902   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:25.277903   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:28.349963   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:34.429888   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:37.501886   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:43.581910   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:46.653871   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:52.733856   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:55.805957   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:01.885878   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:04.957919   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:11.037896   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:14.109854   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:20.189885   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:23.261848   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:29.341931   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:32.414013   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:38.493870   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:41.565912   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:47.645887   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:50.717882   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:56.797886   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:59.869824   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:05.949894   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:09.021905   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:15.101943   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:18.173911   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:24.253875   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:27.325874   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:33.405945   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:36.477889   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:39.482773   70687 start.go:364] duration metric: took 3m52.901392005s to acquireMachinesLock for "embed-certs-882095"
	I0401 19:30:39.482825   70687 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:30:39.482831   70687 fix.go:54] fixHost starting: 
	I0401 19:30:39.483206   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:30:39.483272   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:30:39.498155   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0401 19:30:39.498587   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:30:39.499013   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:30:39.499032   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:30:39.499400   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:30:39.499572   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:30:39.499760   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:30:39.501361   70687 fix.go:112] recreateIfNeeded on embed-certs-882095: state=Stopped err=<nil>
	I0401 19:30:39.501398   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	W0401 19:30:39.501552   70687 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:30:39.504183   70687 out.go:177] * Restarting existing kvm2 VM for "embed-certs-882095" ...
	I0401 19:30:39.505410   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Start
	I0401 19:30:39.505549   70687 main.go:141] libmachine: (embed-certs-882095) Ensuring networks are active...
	I0401 19:30:39.506257   70687 main.go:141] libmachine: (embed-certs-882095) Ensuring network default is active
	I0401 19:30:39.506533   70687 main.go:141] libmachine: (embed-certs-882095) Ensuring network mk-embed-certs-882095 is active
	I0401 19:30:39.506892   70687 main.go:141] libmachine: (embed-certs-882095) Getting domain xml...
	I0401 19:30:39.507632   70687 main.go:141] libmachine: (embed-certs-882095) Creating domain...
	I0401 19:30:40.693316   70687 main.go:141] libmachine: (embed-certs-882095) Waiting to get IP...
	I0401 19:30:40.694095   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:40.694551   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:40.694597   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:40.694519   71595 retry.go:31] will retry after 283.185096ms: waiting for machine to come up
	I0401 19:30:40.979028   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:40.979500   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:40.979523   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:40.979452   71595 retry.go:31] will retry after 297.637907ms: waiting for machine to come up
	I0401 19:30:41.279111   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:41.279457   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:41.279479   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:41.279411   71595 retry.go:31] will retry after 366.625363ms: waiting for machine to come up
	I0401 19:30:39.480214   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:30:39.480252   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:30:39.480557   70284 buildroot.go:166] provisioning hostname "no-preload-472858"
	I0401 19:30:39.480583   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:30:39.480787   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:30:39.482626   70284 machine.go:97] duration metric: took 4m37.415031648s to provisionDockerMachine
	I0401 19:30:39.482666   70284 fix.go:56] duration metric: took 4m37.43830515s for fixHost
	I0401 19:30:39.482676   70284 start.go:83] releasing machines lock for "no-preload-472858", held for 4m37.438344965s
	W0401 19:30:39.482704   70284 start.go:713] error starting host: provision: host is not running
	W0401 19:30:39.482794   70284 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0401 19:30:39.482805   70284 start.go:728] Will try again in 5 seconds ...
	I0401 19:30:41.647682   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:41.648045   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:41.648097   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:41.648026   71595 retry.go:31] will retry after 373.762437ms: waiting for machine to come up
	I0401 19:30:42.023500   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:42.023868   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:42.023904   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:42.023836   71595 retry.go:31] will retry after 461.430639ms: waiting for machine to come up
	I0401 19:30:42.486384   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:42.486836   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:42.486863   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:42.486784   71595 retry.go:31] will retry after 718.511667ms: waiting for machine to come up
	I0401 19:30:43.206555   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:43.206983   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:43.207006   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:43.206939   71595 retry.go:31] will retry after 907.934415ms: waiting for machine to come up
	I0401 19:30:44.115840   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:44.116223   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:44.116259   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:44.116173   71595 retry.go:31] will retry after 1.178492069s: waiting for machine to come up
	I0401 19:30:45.295704   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:45.296117   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:45.296146   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:45.296071   71595 retry.go:31] will retry after 1.188920707s: waiting for machine to come up
	I0401 19:30:44.484802   70284 start.go:360] acquireMachinesLock for no-preload-472858: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 19:30:46.486217   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:46.486777   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:46.486816   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:46.486740   71595 retry.go:31] will retry after 2.12728618s: waiting for machine to come up
	I0401 19:30:48.617124   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:48.617521   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:48.617553   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:48.617468   71595 retry.go:31] will retry after 2.867613028s: waiting for machine to come up
	I0401 19:30:51.488009   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:51.491502   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:51.491533   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:51.488532   71595 retry.go:31] will retry after 3.42206094s: waiting for machine to come up
	I0401 19:30:54.911723   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:54.912098   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:54.912127   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:54.912059   71595 retry.go:31] will retry after 4.263880792s: waiting for machine to come up
	I0401 19:31:00.450770   70962 start.go:364] duration metric: took 3m22.921307899s to acquireMachinesLock for "default-k8s-diff-port-734648"
	I0401 19:31:00.450836   70962 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:31:00.450854   70962 fix.go:54] fixHost starting: 
	I0401 19:31:00.451364   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:31:00.451401   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:31:00.467219   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45255
	I0401 19:31:00.467579   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:31:00.467998   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:31:00.468021   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:31:00.468368   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:31:00.468567   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:00.468740   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:31:00.470224   70962 fix.go:112] recreateIfNeeded on default-k8s-diff-port-734648: state=Stopped err=<nil>
	I0401 19:31:00.470251   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	W0401 19:31:00.470396   70962 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:31:00.472906   70962 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-734648" ...
	I0401 19:30:59.180302   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.180756   70687 main.go:141] libmachine: (embed-certs-882095) Found IP for machine: 192.168.39.190
	I0401 19:30:59.180778   70687 main.go:141] libmachine: (embed-certs-882095) Reserving static IP address...
	I0401 19:30:59.180794   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has current primary IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.181269   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "embed-certs-882095", mac: "52:54:00:8c:f1:a7", ip: "192.168.39.190"} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.181300   70687 main.go:141] libmachine: (embed-certs-882095) DBG | skip adding static IP to network mk-embed-certs-882095 - found existing host DHCP lease matching {name: "embed-certs-882095", mac: "52:54:00:8c:f1:a7", ip: "192.168.39.190"}
	I0401 19:30:59.181311   70687 main.go:141] libmachine: (embed-certs-882095) Reserved static IP address: 192.168.39.190
	I0401 19:30:59.181324   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Getting to WaitForSSH function...
	I0401 19:30:59.181331   70687 main.go:141] libmachine: (embed-certs-882095) Waiting for SSH to be available...
	I0401 19:30:59.183293   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.183599   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.183630   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.183756   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Using SSH client type: external
	I0401 19:30:59.183784   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa (-rw-------)
	I0401 19:30:59.183837   70687 main.go:141] libmachine: (embed-certs-882095) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:30:59.183863   70687 main.go:141] libmachine: (embed-certs-882095) DBG | About to run SSH command:
	I0401 19:30:59.183924   70687 main.go:141] libmachine: (embed-certs-882095) DBG | exit 0
	I0401 19:30:59.305707   70687 main.go:141] libmachine: (embed-certs-882095) DBG | SSH cmd err, output: <nil>: 
	I0401 19:30:59.306036   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetConfigRaw
	I0401 19:30:59.306679   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetIP
	I0401 19:30:59.309266   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.309680   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.309711   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.309938   70687 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/config.json ...
	I0401 19:30:59.310193   70687 machine.go:94] provisionDockerMachine start ...
	I0401 19:30:59.310219   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:30:59.310435   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.312549   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.312908   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.312930   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.313088   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.313247   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.313385   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.313502   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.313721   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:30:59.313894   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:30:59.313904   70687 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:30:59.418216   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:30:59.418244   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetMachineName
	I0401 19:30:59.418506   70687 buildroot.go:166] provisioning hostname "embed-certs-882095"
	I0401 19:30:59.418537   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetMachineName
	I0401 19:30:59.418703   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.421075   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.421411   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.421453   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.421534   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.421721   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.421867   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.421978   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.422122   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:30:59.422317   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:30:59.422332   70687 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-882095 && echo "embed-certs-882095" | sudo tee /etc/hostname
	I0401 19:30:59.541974   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-882095
	
	I0401 19:30:59.542006   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.544628   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.544992   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.545025   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.545193   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.545403   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.545566   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.545720   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.545906   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:30:59.546060   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:30:59.546077   70687 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-882095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-882095/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-882095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:30:59.660103   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:30:59.660134   70687 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:30:59.660161   70687 buildroot.go:174] setting up certificates
	I0401 19:30:59.660172   70687 provision.go:84] configureAuth start
	I0401 19:30:59.660193   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetMachineName
	I0401 19:30:59.660465   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetIP
	I0401 19:30:59.662943   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.663260   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.663302   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.663413   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.665390   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.665688   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.665719   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.665821   70687 provision.go:143] copyHostCerts
	I0401 19:30:59.665879   70687 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:30:59.665892   70687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:30:59.665956   70687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:30:59.666041   70687 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:30:59.666048   70687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:30:59.666071   70687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:30:59.666121   70687 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:30:59.666128   70687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:30:59.666148   70687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:30:59.666193   70687 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.embed-certs-882095 san=[127.0.0.1 192.168.39.190 embed-certs-882095 localhost minikube]
	I0401 19:30:59.761975   70687 provision.go:177] copyRemoteCerts
	I0401 19:30:59.762033   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:30:59.762058   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.764277   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.764601   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.764626   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.764832   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.765006   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.765155   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.765250   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:30:59.848158   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 19:30:59.875879   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:30:59.902573   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 19:30:59.928757   70687 provision.go:87] duration metric: took 268.570153ms to configureAuth
	I0401 19:30:59.928781   70687 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:30:59.928924   70687 config.go:182] Loaded profile config "embed-certs-882095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:30:59.928988   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.931187   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.931571   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.931600   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.931755   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.931914   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.932067   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.932176   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.932325   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:30:59.932506   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:30:59.932530   70687 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:31:00.214527   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:31:00.214552   70687 machine.go:97] duration metric: took 904.342981ms to provisionDockerMachine
	I0401 19:31:00.214563   70687 start.go:293] postStartSetup for "embed-certs-882095" (driver="kvm2")
	I0401 19:31:00.214574   70687 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:31:00.214587   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.214892   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:31:00.214920   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:31:00.217289   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.217580   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.217608   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.217828   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:31:00.218014   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.218137   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:31:00.218267   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:31:00.301379   70687 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:31:00.306211   70687 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:31:00.306231   70687 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:31:00.306284   70687 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:31:00.306377   70687 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:31:00.306459   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:31:00.316524   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:00.342848   70687 start.go:296] duration metric: took 128.272743ms for postStartSetup
	I0401 19:31:00.342887   70687 fix.go:56] duration metric: took 20.860054972s for fixHost
	I0401 19:31:00.342910   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:31:00.345429   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.345883   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.345915   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.346060   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:31:00.346288   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.346504   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.346656   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:31:00.346806   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:00.346961   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:31:00.346972   70687 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 19:31:00.450606   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999860.420567604
	
	I0401 19:31:00.450627   70687 fix.go:216] guest clock: 1711999860.420567604
	I0401 19:31:00.450635   70687 fix.go:229] Guest: 2024-04-01 19:31:00.420567604 +0000 UTC Remote: 2024-04-01 19:31:00.34289204 +0000 UTC m=+253.905703085 (delta=77.675564ms)
	I0401 19:31:00.450683   70687 fix.go:200] guest clock delta is within tolerance: 77.675564ms
	I0401 19:31:00.450693   70687 start.go:83] releasing machines lock for "embed-certs-882095", held for 20.967887876s
	I0401 19:31:00.450725   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.451011   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetIP
	I0401 19:31:00.453581   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.453959   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.453990   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.454112   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.454613   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.454788   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.454844   70687 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:31:00.454886   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:31:00.454997   70687 ssh_runner.go:195] Run: cat /version.json
	I0401 19:31:00.455019   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:31:00.457540   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.457811   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.457846   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.457878   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.458053   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:31:00.458141   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.458173   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.458217   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.458295   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:31:00.458387   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:31:00.458471   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.458556   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:31:00.458602   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:31:00.458741   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:31:00.569039   70687 ssh_runner.go:195] Run: systemctl --version
	I0401 19:31:00.575452   70687 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:31:00.728549   70687 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:31:00.735559   70687 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:31:00.735642   70687 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:31:00.756640   70687 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:31:00.756669   70687 start.go:494] detecting cgroup driver to use...
	I0401 19:31:00.756743   70687 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:31:00.776638   70687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:31:00.793006   70687 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:31:00.793063   70687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:31:00.809240   70687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:31:00.825245   70687 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:31:00.952595   70687 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:31:01.109771   70687 docker.go:233] disabling docker service ...
	I0401 19:31:01.109841   70687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:31:01.126814   70687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:31:01.141976   70687 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:31:01.301634   70687 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:31:01.440350   70687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:31:01.458083   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:31:01.479653   70687 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:31:01.479730   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.492598   70687 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:31:01.492677   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.506469   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.521981   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.534406   70687 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:31:01.546817   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.558857   70687 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.578922   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.593381   70687 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:31:01.605265   70687 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:31:01.605341   70687 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:31:01.621681   70687 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:31:01.633336   70687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:01.770373   70687 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:31:01.927892   70687 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:31:01.927952   70687 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:31:01.935046   70687 start.go:562] Will wait 60s for crictl version
	I0401 19:31:01.935101   70687 ssh_runner.go:195] Run: which crictl
	I0401 19:31:01.940563   70687 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:31:01.986956   70687 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:31:01.987030   70687 ssh_runner.go:195] Run: crio --version
	I0401 19:31:02.018567   70687 ssh_runner.go:195] Run: crio --version
	I0401 19:31:02.059077   70687 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 19:31:00.474118   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Start
	I0401 19:31:00.474275   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Ensuring networks are active...
	I0401 19:31:00.474896   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Ensuring network default is active
	I0401 19:31:00.475289   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Ensuring network mk-default-k8s-diff-port-734648 is active
	I0401 19:31:00.475650   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Getting domain xml...
	I0401 19:31:00.476263   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Creating domain...
	I0401 19:31:01.736646   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting to get IP...
	I0401 19:31:01.737490   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:01.737889   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:01.737939   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:01.737867   71724 retry.go:31] will retry after 198.445345ms: waiting for machine to come up
	I0401 19:31:01.938446   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:01.938981   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:01.939012   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:01.938936   71724 retry.go:31] will retry after 320.128802ms: waiting for machine to come up
	I0401 19:31:02.260257   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:02.260673   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:02.260703   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:02.260633   71724 retry.go:31] will retry after 357.316906ms: waiting for machine to come up
	I0401 19:31:02.060343   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetIP
	I0401 19:31:02.063382   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:02.063775   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:02.063808   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:02.064047   70687 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 19:31:02.069227   70687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:02.085344   70687 kubeadm.go:877] updating cluster {Name:embed-certs-882095 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-882095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:31:02.085451   70687 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 19:31:02.085490   70687 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:02.139383   70687 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0401 19:31:02.139454   70687 ssh_runner.go:195] Run: which lz4
	I0401 19:31:02.144331   70687 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0401 19:31:02.149534   70687 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:31:02.149561   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0401 19:31:03.954448   70687 crio.go:462] duration metric: took 1.810143668s to copy over tarball
	I0401 19:31:03.954523   70687 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 19:31:06.445735   70687 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.491184732s)
	I0401 19:31:06.445759   70687 crio.go:469] duration metric: took 2.491285648s to extract the tarball
	I0401 19:31:06.445765   70687 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 19:31:02.620250   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:02.620729   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:02.620760   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:02.620666   71724 retry.go:31] will retry after 520.509423ms: waiting for machine to come up
	I0401 19:31:03.142471   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:03.142902   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:03.142930   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:03.142864   71724 retry.go:31] will retry after 714.309176ms: waiting for machine to come up
	I0401 19:31:03.858594   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:03.859071   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:03.859104   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:03.859035   71724 retry.go:31] will retry after 620.601084ms: waiting for machine to come up
	I0401 19:31:04.480923   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:04.481350   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:04.481381   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:04.481313   71724 retry.go:31] will retry after 1.00716549s: waiting for machine to come up
	I0401 19:31:05.489788   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:05.490243   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:05.490273   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:05.490186   71724 retry.go:31] will retry after 1.158564029s: waiting for machine to come up
	I0401 19:31:06.650440   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:06.650969   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:06.650997   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:06.650915   71724 retry.go:31] will retry after 1.172294728s: waiting for machine to come up
	I0401 19:31:06.485475   70687 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:06.532426   70687 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:31:06.532448   70687 cache_images.go:84] Images are preloaded, skipping loading
	I0401 19:31:06.532455   70687 kubeadm.go:928] updating node { 192.168.39.190 8443 v1.29.3 crio true true} ...
	I0401 19:31:06.532544   70687 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-882095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-882095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:31:06.532611   70687 ssh_runner.go:195] Run: crio config
	I0401 19:31:06.585119   70687 cni.go:84] Creating CNI manager for ""
	I0401 19:31:06.585144   70687 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:06.585158   70687 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:31:06.585185   70687 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.190 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-882095 NodeName:embed-certs-882095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:31:06.585374   70687 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.190
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-882095"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.190
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.190"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:31:06.585473   70687 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 19:31:06.596747   70687 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:31:06.596818   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:31:06.606959   70687 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0401 19:31:06.628202   70687 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:31:06.649043   70687 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0401 19:31:06.668400   70687 ssh_runner.go:195] Run: grep 192.168.39.190	control-plane.minikube.internal$ /etc/hosts
	I0401 19:31:06.672469   70687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:06.685666   70687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:06.806186   70687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:31:06.823315   70687 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095 for IP: 192.168.39.190
	I0401 19:31:06.823355   70687 certs.go:194] generating shared ca certs ...
	I0401 19:31:06.823376   70687 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:06.823569   70687 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:31:06.823645   70687 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:31:06.823659   70687 certs.go:256] generating profile certs ...
	I0401 19:31:06.823764   70687 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/client.key
	I0401 19:31:06.823872   70687 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/apiserver.key.c07921ce
	I0401 19:31:06.823945   70687 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/proxy-client.key
	I0401 19:31:06.824092   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:31:06.824132   70687 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:31:06.824145   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:31:06.824183   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:31:06.824223   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:31:06.824254   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:31:06.824309   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:06.824942   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:31:06.867274   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:31:06.907288   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:31:06.948328   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:31:06.975058   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 19:31:07.003183   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 19:31:07.032030   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:31:07.061612   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:31:07.090149   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:31:07.116885   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:31:07.143296   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:31:07.169420   70687 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:31:07.188908   70687 ssh_runner.go:195] Run: openssl version
	I0401 19:31:07.195591   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:31:07.211583   70687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:31:07.217049   70687 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:31:07.217110   70687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:31:07.223751   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:31:07.237393   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:31:07.250523   70687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:07.255928   70687 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:07.255981   70687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:07.262373   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:31:07.275174   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:31:07.288039   70687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:31:07.293339   70687 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:31:07.293392   70687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:31:07.299983   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:31:07.313120   70687 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:31:07.318425   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:31:07.325172   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:31:07.331674   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:31:07.338299   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:31:07.344896   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:31:07.351424   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 19:31:07.357898   70687 kubeadm.go:391] StartCluster: {Name:embed-certs-882095 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:embed-certs-882095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:31:07.357995   70687 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:31:07.358047   70687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:07.401268   70687 cri.go:89] found id: ""
	I0401 19:31:07.401326   70687 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:31:07.414232   70687 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:31:07.414255   70687 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:31:07.414262   70687 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:31:07.414308   70687 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:31:07.425972   70687 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:31:07.426977   70687 kubeconfig.go:125] found "embed-certs-882095" server: "https://192.168.39.190:8443"
	I0401 19:31:07.428767   70687 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:31:07.440164   70687 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.190
	I0401 19:31:07.440191   70687 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:31:07.440201   70687 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:31:07.440244   70687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:07.484303   70687 cri.go:89] found id: ""
	I0401 19:31:07.484407   70687 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:31:07.505186   70687 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:31:07.518316   70687 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:31:07.518342   70687 kubeadm.go:156] found existing configuration files:
	
	I0401 19:31:07.518393   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:31:07.530759   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:31:07.530832   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:31:07.542799   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:31:07.553972   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:31:07.554031   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:31:07.565324   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:31:07.576244   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:31:07.576318   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:31:07.588874   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:31:07.600440   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:31:07.600526   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:31:07.611963   70687 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:31:07.623225   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:07.740800   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:09.050887   70687 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.310046744s)
	I0401 19:31:09.050920   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:09.266170   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:09.336585   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
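The five `kubeadm init phase` invocations above (certs, kubeconfig, kubelet-start, control-plane, etcd) are the restart path regenerating the control plane from the cached kubeadm.yaml. A rough sketch of the same sequence driven from Go with os/exec is shown below; the binary and config paths are taken from the log, while the sudo handling, SSH transport and error recovery that minikube layers on top are omitted.

// Illustrative only: replay the "kubeadm init phase ..." sequence seen in
// the log. This is not minikube's code and assumes kubeadm is on the host.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.29.3/kubeadm"
	config := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", config)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}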
	I0401 19:31:09.422513   70687 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:31:09.422594   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:09.923709   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:10.422822   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:10.922892   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:10.946590   70687 api_server.go:72] duration metric: took 1.524076694s to wait for apiserver process to appear ...
	I0401 19:31:10.946627   70687 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:31:10.946650   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:07.825239   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:07.825629   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:07.825676   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:07.825586   71724 retry.go:31] will retry after 1.412332675s: waiting for machine to come up
	I0401 19:31:09.240010   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:09.240385   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:09.240416   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:09.240327   71724 retry.go:31] will retry after 2.601344034s: waiting for machine to come up
	I0401 19:31:11.843464   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:11.843948   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:11.843976   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:11.843900   71724 retry.go:31] will retry after 3.297720076s: waiting for machine to come up
	I0401 19:31:13.350274   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:31:13.350309   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:31:13.350325   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:13.383494   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:13.383543   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:13.447744   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:13.452796   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:13.452852   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:13.946971   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:13.951522   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:13.951554   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:14.447104   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:14.455165   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:14.455204   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:14.947278   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:14.951487   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I0401 19:31:14.958647   70687 api_server.go:141] control plane version: v1.29.3
	I0401 19:31:14.958670   70687 api_server.go:131] duration metric: took 4.012036456s to wait for apiserver health ...
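The healthz loop above first receives a 403 (the anonymous user), then 500s while post-start hooks such as rbac/bootstrap-roles finish, and finally a 200. Below is a minimal sketch of that polling, assuming a plain HTTPS client with certificate verification disabled; minikube's api_server.go authenticates with the cluster client certificates instead, and the URL, timeout and interval here are illustrative.

// A sketch of polling the apiserver /healthz endpoint until it returns 200,
// tolerating the transient 403/500 responses visible in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch: skip TLS verification instead of
			// loading the cluster CA and client certs as minikube does.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://192.168.39.190:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}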
	I0401 19:31:14.958687   70687 cni.go:84] Creating CNI manager for ""
	I0401 19:31:14.958693   70687 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:14.960494   70687 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:31:14.961899   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:31:14.973709   70687 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0401 19:31:14.998105   70687 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:31:15.008481   70687 system_pods.go:59] 8 kube-system pods found
	I0401 19:31:15.008525   70687 system_pods.go:61] "coredns-76f75df574-nvcq4" [663bd69b-6da8-4a66-b20f-ea1eb507096a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:31:15.008536   70687 system_pods.go:61] "etcd-embed-certs-882095" [2b56dddc-b309-4965-811e-459c59b86dac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0401 19:31:15.008551   70687 system_pods.go:61] "kube-apiserver-embed-certs-882095" [2e376ce4-504c-441a-baf8-0184a17e5bf4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0401 19:31:15.008561   70687 system_pods.go:61] "kube-controller-manager-embed-certs-882095" [e6bf3b2f-289b-4719-86f7-43e873fe8d85] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0401 19:31:15.008571   70687 system_pods.go:61] "kube-proxy-td6jk" [275536ff-4ec0-4d2c-8658-57aadda367b2] Running
	I0401 19:31:15.008580   70687 system_pods.go:61] "kube-scheduler-embed-certs-882095" [4551eb2a-9560-4d4f-aac0-9cfe6c790649] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0401 19:31:15.008591   70687 system_pods.go:61] "metrics-server-57f55c9bc5-g6z6c" [dc8aee6a-f101-4109-a259-351fddbddd44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:31:15.008599   70687 system_pods.go:61] "storage-provisioner" [82a76833-c874-45d8-8ba7-1a483c15a997] Running
	I0401 19:31:15.008609   70687 system_pods.go:74] duration metric: took 10.480741ms to wait for pod list to return data ...
	I0401 19:31:15.008622   70687 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:31:15.012256   70687 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:31:15.012289   70687 node_conditions.go:123] node cpu capacity is 2
	I0401 19:31:15.012303   70687 node_conditions.go:105] duration metric: took 3.672159ms to run NodePressure ...
	I0401 19:31:15.012327   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:15.288861   70687 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0401 19:31:15.293731   70687 kubeadm.go:733] kubelet initialised
	I0401 19:31:15.293750   70687 kubeadm.go:734] duration metric: took 4.868595ms waiting for restarted kubelet to initialise ...
	I0401 19:31:15.293758   70687 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:31:15.298657   70687 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-nvcq4" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.304795   70687 pod_ready.go:97] node "embed-certs-882095" hosting pod "coredns-76f75df574-nvcq4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.304813   70687 pod_ready.go:81] duration metric: took 6.134849ms for pod "coredns-76f75df574-nvcq4" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:15.304822   70687 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-882095" hosting pod "coredns-76f75df574-nvcq4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.304827   70687 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.309184   70687 pod_ready.go:97] node "embed-certs-882095" hosting pod "etcd-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.309204   70687 pod_ready.go:81] duration metric: took 4.369325ms for pod "etcd-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:15.309213   70687 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-882095" hosting pod "etcd-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.309221   70687 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.313737   70687 pod_ready.go:97] node "embed-certs-882095" hosting pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.313755   70687 pod_ready.go:81] duration metric: took 4.525801ms for pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:15.313764   70687 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-882095" hosting pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.313771   70687 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.401827   70687 pod_ready.go:97] node "embed-certs-882095" hosting pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.401857   70687 pod_ready.go:81] duration metric: took 88.077915ms for pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:15.401871   70687 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-882095" hosting pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.401878   70687 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-td6jk" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.802462   70687 pod_ready.go:92] pod "kube-proxy-td6jk" in "kube-system" namespace has status "Ready":"True"
	I0401 19:31:15.802484   70687 pod_ready.go:81] duration metric: took 400.599194ms for pod "kube-proxy-td6jk" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.802494   70687 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.142653   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:15.143000   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:15.143062   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:15.142972   71724 retry.go:31] will retry after 3.764823961s: waiting for machine to come up
	I0401 19:31:20.350903   71168 start.go:364] duration metric: took 3m27.278785625s to acquireMachinesLock for "old-k8s-version-163608"
	I0401 19:31:20.350993   71168 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:31:20.351010   71168 fix.go:54] fixHost starting: 
	I0401 19:31:20.351490   71168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:31:20.351571   71168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:31:20.368575   71168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38247
	I0401 19:31:20.368936   71168 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:31:20.369448   71168 main.go:141] libmachine: Using API Version  1
	I0401 19:31:20.369469   71168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:31:20.369822   71168 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:31:20.370033   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:20.370195   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetState
	I0401 19:31:20.371625   71168 fix.go:112] recreateIfNeeded on old-k8s-version-163608: state=Stopped err=<nil>
	I0401 19:31:20.371681   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	W0401 19:31:20.371842   71168 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:31:20.374328   71168 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-163608" ...
	I0401 19:31:17.809256   70687 pod_ready.go:102] pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:19.809947   70687 pod_ready.go:102] pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:20.818455   70687 pod_ready.go:92] pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:31:20.818481   70687 pod_ready.go:81] duration metric: took 5.015979611s for pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:20.818493   70687 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace to be "Ready" ...
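The pod_ready lines above poll each system-critical pod until its Ready condition is True, skipping pods whose node is not yet Ready. A sketch of the core check with client-go follows, assuming the standard k8s.io/client-go packages; the kubeconfig path, namespace and pod name are illustrative, and minikube's pod_ready.go adds the node-readiness short-circuit and backoff that are left out here.

// A sketch of waiting for a pod's Ready condition via the Kubernetes API.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-scheduler-embed-certs-882095", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}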
	I0401 19:31:18.910798   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:18.911231   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Found IP for machine: 192.168.61.145
	I0401 19:31:18.911266   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has current primary IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:18.911277   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Reserving static IP address...
	I0401 19:31:18.911761   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-734648", mac: "52:54:00:49:dc:50", ip: "192.168.61.145"} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:18.911795   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | skip adding static IP to network mk-default-k8s-diff-port-734648 - found existing host DHCP lease matching {name: "default-k8s-diff-port-734648", mac: "52:54:00:49:dc:50", ip: "192.168.61.145"}
	I0401 19:31:18.911819   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Reserved static IP address: 192.168.61.145
	I0401 19:31:18.911835   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for SSH to be available...
	I0401 19:31:18.911869   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Getting to WaitForSSH function...
	I0401 19:31:18.913767   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:18.914054   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:18.914082   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:18.914207   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Using SSH client type: external
	I0401 19:31:18.914236   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa (-rw-------)
	I0401 19:31:18.914278   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.145 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:31:18.914300   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | About to run SSH command:
	I0401 19:31:18.914313   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | exit 0
	I0401 19:31:19.037713   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | SSH cmd err, output: <nil>: 
	I0401 19:31:19.038080   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetConfigRaw
	I0401 19:31:19.038767   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetIP
	I0401 19:31:19.042390   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.043249   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.043311   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.043949   70962 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/config.json ...
	I0401 19:31:19.044504   70962 machine.go:94] provisionDockerMachine start ...
	I0401 19:31:19.044554   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:19.044916   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.047637   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.047908   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.047941   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.048088   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.048265   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.048408   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.048522   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.048636   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:19.048790   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:19.048800   70962 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:31:19.154415   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:31:19.154444   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetMachineName
	I0401 19:31:19.154683   70962 buildroot.go:166] provisioning hostname "default-k8s-diff-port-734648"
	I0401 19:31:19.154713   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetMachineName
	I0401 19:31:19.154887   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.157442   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.157867   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.157896   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.158041   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.158237   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.158402   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.158540   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.158713   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:19.158905   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:19.158920   70962 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-734648 && echo "default-k8s-diff-port-734648" | sudo tee /etc/hostname
	I0401 19:31:19.276129   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-734648
	
	I0401 19:31:19.276160   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.278657   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.278918   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.278940   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.279158   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.279353   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.279523   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.279671   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.279831   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:19.280057   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:19.280082   70962 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-734648' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-734648/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-734648' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:31:19.395730   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
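Every `Run:` and "About to run SSH command" step above goes through an SSH session opened with the machine's generated private key. Below is a sketch of that transport using golang.org/x/crypto/ssh, mirroring the options visible in the log (user docker, the machine's id_rsa, StrictHostKeyChecking=no); the host address and the single `hostname` command are taken from the log, the rest is illustrative and omits the retries and session pooling minikube uses.

// A sketch of running one remote command over SSH with key authentication.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Matches the log's StrictHostKeyChecking=no / UserKnownHostsFile=/dev/null.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "192.168.61.145:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("SSH cmd output: %s", out)
}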
	I0401 19:31:19.395755   70962 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:31:19.395779   70962 buildroot.go:174] setting up certificates
	I0401 19:31:19.395788   70962 provision.go:84] configureAuth start
	I0401 19:31:19.395798   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetMachineName
	I0401 19:31:19.396046   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetIP
	I0401 19:31:19.398668   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.399036   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.399065   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.399219   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.401309   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.401611   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.401656   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.401750   70962 provision.go:143] copyHostCerts
	I0401 19:31:19.401812   70962 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:31:19.401822   70962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:31:19.401876   70962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:31:19.401978   70962 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:31:19.401988   70962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:31:19.402015   70962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:31:19.402121   70962 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:31:19.402129   70962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:31:19.402147   70962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:31:19.402205   70962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-734648 san=[127.0.0.1 192.168.61.145 default-k8s-diff-port-734648 localhost minikube]
	I0401 19:31:19.655203   70962 provision.go:177] copyRemoteCerts
	I0401 19:31:19.655256   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:31:19.655281   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.658194   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.658512   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.658540   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.658693   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.658896   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.659039   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.659187   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:31:19.743131   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:31:19.771327   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0401 19:31:19.797350   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 19:31:19.824244   70962 provision.go:87] duration metric: took 428.444366ms to configureAuth
	I0401 19:31:19.824274   70962 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:31:19.824473   70962 config.go:182] Loaded profile config "default-k8s-diff-port-734648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:31:19.824563   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.827376   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.827798   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.827838   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.827984   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.828184   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.828352   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.828496   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.828653   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:19.828827   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:19.828865   70962 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:31:20.107291   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:31:20.107320   70962 machine.go:97] duration metric: took 1.062788118s to provisionDockerMachine
	I0401 19:31:20.107333   70962 start.go:293] postStartSetup for "default-k8s-diff-port-734648" (driver="kvm2")
	I0401 19:31:20.107347   70962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:31:20.107369   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.107671   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:31:20.107693   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:20.110380   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.110739   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.110780   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.110895   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:20.111075   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.111218   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:20.111353   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:31:20.193908   70962 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:31:20.198544   70962 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:31:20.198572   70962 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:31:20.198639   70962 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:31:20.198704   70962 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:31:20.198788   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:31:20.209866   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:20.240362   70962 start.go:296] duration metric: took 133.016405ms for postStartSetup
	I0401 19:31:20.240399   70962 fix.go:56] duration metric: took 19.789546756s for fixHost
	I0401 19:31:20.240418   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:20.243069   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.243448   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.243479   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.243657   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:20.243865   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.244061   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.244209   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:20.244399   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:20.244600   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:20.244616   70962 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 19:31:20.350752   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999880.326440079
	
	I0401 19:31:20.350779   70962 fix.go:216] guest clock: 1711999880.326440079
	I0401 19:31:20.350789   70962 fix.go:229] Guest: 2024-04-01 19:31:20.326440079 +0000 UTC Remote: 2024-04-01 19:31:20.240403038 +0000 UTC m=+222.858311555 (delta=86.037041ms)
	I0401 19:31:20.350808   70962 fix.go:200] guest clock delta is within tolerance: 86.037041ms
	I0401 19:31:20.350812   70962 start.go:83] releasing machines lock for "default-k8s-diff-port-734648", held for 19.899997669s
	I0401 19:31:20.350838   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.351118   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetIP
	I0401 19:31:20.354040   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.354395   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.354413   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.354595   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.355068   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.355238   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.355317   70962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:31:20.355356   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:20.355530   70962 ssh_runner.go:195] Run: cat /version.json
	I0401 19:31:20.355557   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:20.357970   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.358372   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.358405   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.358430   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.358585   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:20.358766   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.358807   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.358834   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.358957   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:20.359013   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:20.359150   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.359203   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:31:20.359292   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:20.359439   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:31:20.466422   70962 ssh_runner.go:195] Run: systemctl --version
	I0401 19:31:20.472949   70962 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:31:20.626069   70962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:31:20.633425   70962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:31:20.633497   70962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:31:20.658883   70962 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:31:20.658910   70962 start.go:494] detecting cgroup driver to use...
	I0401 19:31:20.658979   70962 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:31:20.686302   70962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:31:20.704507   70962 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:31:20.704583   70962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:31:20.725216   70962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:31:20.740635   70962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:31:20.864184   70962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:31:21.010752   70962 docker.go:233] disabling docker service ...
	I0401 19:31:21.010821   70962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:31:21.030718   70962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:31:21.047787   70962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:31:21.194455   70962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:31:21.337547   70962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:31:21.357144   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:31:21.381709   70962 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:31:21.381782   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.393160   70962 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:31:21.393229   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.405047   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.416810   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.428947   70962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:31:21.440886   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.452872   70962 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.473096   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.484427   70962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:31:21.494121   70962 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:31:21.494190   70962 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:31:21.509859   70962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:31:21.520329   70962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:21.671075   70962 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:31:21.818822   70962 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:31:21.818892   70962 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:31:21.825189   70962 start.go:562] Will wait 60s for crictl version
	I0401 19:31:21.825260   70962 ssh_runner.go:195] Run: which crictl
	I0401 19:31:21.830058   70962 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:31:21.869617   70962 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:31:21.869721   70962 ssh_runner.go:195] Run: crio --version
	I0401 19:31:21.906091   70962 ssh_runner.go:195] Run: crio --version
	I0401 19:31:21.946240   70962 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 19:31:21.947653   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetIP
	I0401 19:31:21.950691   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:21.951156   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:21.951201   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:21.951445   70962 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0401 19:31:21.959376   70962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:21.974226   70962 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-734648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-734648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.145 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:31:21.974348   70962 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 19:31:21.974426   70962 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:22.011856   70962 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0401 19:31:22.011930   70962 ssh_runner.go:195] Run: which lz4
	I0401 19:31:22.016672   70962 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0401 19:31:22.021864   70962 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:31:22.021893   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0401 19:31:20.375755   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .Start
	I0401 19:31:20.375932   71168 main.go:141] libmachine: (old-k8s-version-163608) Ensuring networks are active...
	I0401 19:31:20.376713   71168 main.go:141] libmachine: (old-k8s-version-163608) Ensuring network default is active
	I0401 19:31:20.377858   71168 main.go:141] libmachine: (old-k8s-version-163608) Ensuring network mk-old-k8s-version-163608 is active
	I0401 19:31:20.378278   71168 main.go:141] libmachine: (old-k8s-version-163608) Getting domain xml...
	I0401 19:31:20.378972   71168 main.go:141] libmachine: (old-k8s-version-163608) Creating domain...
	I0401 19:31:21.643237   71168 main.go:141] libmachine: (old-k8s-version-163608) Waiting to get IP...
	I0401 19:31:21.644082   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:21.644468   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:21.644535   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:21.644446   71902 retry.go:31] will retry after 208.251344ms: waiting for machine to come up
	I0401 19:31:21.854070   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:21.854545   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:21.854593   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:21.854527   71902 retry.go:31] will retry after 240.466964ms: waiting for machine to come up
	I0401 19:31:22.096940   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:22.097447   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:22.097470   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:22.097405   71902 retry.go:31] will retry after 480.217755ms: waiting for machine to come up
	I0401 19:31:22.579111   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:22.579596   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:22.579628   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:22.579518   71902 retry.go:31] will retry after 581.713487ms: waiting for machine to come up
	I0401 19:31:22.826723   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:25.326165   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:23.813558   70962 crio.go:462] duration metric: took 1.796902191s to copy over tarball
	I0401 19:31:23.813619   70962 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 19:31:26.447802   70962 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.634145928s)
	I0401 19:31:26.447840   70962 crio.go:469] duration metric: took 2.634257029s to extract the tarball
	I0401 19:31:26.447849   70962 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 19:31:26.488228   70962 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:26.535741   70962 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:31:26.535770   70962 cache_images.go:84] Images are preloaded, skipping loading
	I0401 19:31:26.535780   70962 kubeadm.go:928] updating node { 192.168.61.145 8444 v1.29.3 crio true true} ...
	I0401 19:31:26.535931   70962 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-734648 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-734648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:31:26.536019   70962 ssh_runner.go:195] Run: crio config
	I0401 19:31:26.590211   70962 cni.go:84] Creating CNI manager for ""
	I0401 19:31:26.590239   70962 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:26.590254   70962 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:31:26.590282   70962 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.145 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-734648 NodeName:default-k8s-diff-port-734648 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:31:26.590459   70962 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.145
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-734648"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:31:26.590533   70962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 19:31:26.602186   70962 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:31:26.602264   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:31:26.616193   70962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0401 19:31:26.636634   70962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:31:26.660339   70962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0401 19:31:26.687935   70962 ssh_runner.go:195] Run: grep 192.168.61.145	control-plane.minikube.internal$ /etc/hosts
	I0401 19:31:26.693966   70962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.145	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:26.709876   70962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:26.854990   70962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:31:26.877303   70962 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648 for IP: 192.168.61.145
	I0401 19:31:26.877327   70962 certs.go:194] generating shared ca certs ...
	I0401 19:31:26.877350   70962 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:26.877578   70962 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:31:26.877621   70962 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:31:26.877637   70962 certs.go:256] generating profile certs ...
	I0401 19:31:26.877777   70962 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/client.key
	I0401 19:31:26.877864   70962 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/apiserver.key.e4671486
	I0401 19:31:26.877909   70962 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/proxy-client.key
	I0401 19:31:26.878007   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:31:26.878049   70962 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:31:26.878062   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:31:26.878094   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:31:26.878128   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:31:26.878153   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:31:26.878203   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:26.879101   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:31:26.917600   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:31:26.968606   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:31:27.012527   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:31:27.078525   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 19:31:27.125195   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:31:27.157190   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:31:27.185434   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:31:27.215215   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:31:27.246938   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:31:27.277210   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:31:27.307099   70962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:31:27.326664   70962 ssh_runner.go:195] Run: openssl version
	I0401 19:31:27.333292   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:31:27.344724   70962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:27.350096   70962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:27.350146   70962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:27.356421   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:31:27.368124   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:31:27.379331   70962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:31:27.384465   70962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:31:27.384518   70962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:31:27.391192   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:31:27.403898   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:31:27.418676   70962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:31:27.424254   70962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:31:27.424308   70962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:31:23.163331   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:23.163803   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:23.163838   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:23.163770   71902 retry.go:31] will retry after 737.12898ms: waiting for machine to come up
	I0401 19:31:23.902739   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:23.903192   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:23.903222   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:23.903139   71902 retry.go:31] will retry after 718.826495ms: waiting for machine to come up
	I0401 19:31:24.624169   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:24.624620   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:24.624648   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:24.624574   71902 retry.go:31] will retry after 1.020701715s: waiting for machine to come up
	I0401 19:31:25.647470   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:25.647957   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:25.647988   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:25.647921   71902 retry.go:31] will retry after 1.318891306s: waiting for machine to come up
	I0401 19:31:26.968134   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:26.968588   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:26.968613   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:26.968535   71902 retry.go:31] will retry after 1.465864517s: waiting for machine to come up
	I0401 19:31:27.752110   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:29.827324   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:27.431798   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:31:27.749367   70962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:31:27.757123   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:31:27.768626   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:31:27.778119   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:31:27.786893   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:31:27.797129   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:31:27.804804   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 19:31:27.813194   70962 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-734648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-734648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.145 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:31:27.813274   70962 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:31:27.813325   70962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:27.864565   70962 cri.go:89] found id: ""
	I0401 19:31:27.864637   70962 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:31:27.876745   70962 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:31:27.876789   70962 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:31:27.876797   70962 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:31:27.876862   70962 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:31:27.887494   70962 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:31:27.888632   70962 kubeconfig.go:125] found "default-k8s-diff-port-734648" server: "https://192.168.61.145:8444"
	I0401 19:31:27.890729   70962 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:31:27.900847   70962 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.145
	I0401 19:31:27.900877   70962 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:31:27.900889   70962 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:31:27.900936   70962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:27.952874   70962 cri.go:89] found id: ""
	I0401 19:31:27.952954   70962 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:31:27.971647   70962 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:31:27.982541   70962 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:31:27.982576   70962 kubeadm.go:156] found existing configuration files:
	
	I0401 19:31:27.982612   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0401 19:31:27.992341   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:31:27.992414   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:31:28.002685   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0401 19:31:28.012599   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:31:28.012658   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:31:28.022731   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0401 19:31:28.033584   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:31:28.033661   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:31:28.044940   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0401 19:31:28.055832   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:31:28.055886   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:31:28.066919   70962 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:31:28.078715   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:28.212251   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:29.214190   70962 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.001904972s)
	I0401 19:31:29.214224   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:29.444484   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:29.536112   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:29.664087   70962 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:31:29.664201   70962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:30.165117   70962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:30.664872   70962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:30.707251   70962 api_server.go:72] duration metric: took 1.04316448s to wait for apiserver process to appear ...
	I0401 19:31:30.707280   70962 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:31:30.707297   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:30.707881   70962 api_server.go:269] stopped: https://192.168.61.145:8444/healthz: Get "https://192.168.61.145:8444/healthz": dial tcp 192.168.61.145:8444: connect: connection refused
	I0401 19:31:31.207434   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:28.435890   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:28.436304   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:28.436334   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:28.436255   71902 retry.go:31] will retry after 2.062597688s: waiting for machine to come up
	I0401 19:31:30.500523   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:30.500999   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:30.501027   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:30.500954   71902 retry.go:31] will retry after 2.068480339s: waiting for machine to come up
	I0401 19:31:32.571229   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:32.571603   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:32.571635   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:32.571550   71902 retry.go:31] will retry after 3.355965883s: waiting for machine to come up
	I0401 19:31:33.707613   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:31:33.707647   70962 api_server.go:103] status: https://192.168.61.145:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:31:33.707663   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:33.728509   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:31:33.728582   70962 api_server.go:103] status: https://192.168.61.145:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:31:34.208163   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:34.212754   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:34.212784   70962 api_server.go:103] status: https://192.168.61.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:34.708282   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:34.715268   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:34.715294   70962 api_server.go:103] status: https://192.168.61.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:35.207460   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:35.212542   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 200:
	ok
	I0401 19:31:35.219264   70962 api_server.go:141] control plane version: v1.29.3
	I0401 19:31:35.219287   70962 api_server.go:131] duration metric: took 4.512000334s to wait for apiserver health ...
	I0401 19:31:35.219294   70962 cni.go:84] Creating CNI manager for ""
	I0401 19:31:35.219309   70962 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:35.221080   70962 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:31:31.828694   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:34.325740   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:35.222800   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:31:35.238787   70962 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0401 19:31:35.286002   70962 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:31:35.302379   70962 system_pods.go:59] 8 kube-system pods found
	I0401 19:31:35.302420   70962 system_pods.go:61] "coredns-76f75df574-tdwrh" [c1d3b591-fa81-46dd-847c-ffdfc22937fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:31:35.302437   70962 system_pods.go:61] "etcd-default-k8s-diff-port-734648" [e977793d-ec92-40b8-a0fe-1b2400fb1af6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0401 19:31:35.302447   70962 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-734648" [2d0eae31-35c3-40aa-9d28-a2f51849c15d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0401 19:31:35.302469   70962 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-734648" [cded1171-2e1b-4d70-9f26-d1d3a6558da1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0401 19:31:35.302483   70962 system_pods.go:61] "kube-proxy-mn546" [f9b6366f-7095-418c-ba24-529c0555f438] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:31:35.302493   70962 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-734648" [c1518ece-8cbf-49fe-9091-15b38dc1bd62] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0401 19:31:35.302504   70962 system_pods.go:61] "metrics-server-57f55c9bc5-g7mg2" [d1ede79a-a7e6-42bd-a799-197ffc7c7939] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:31:35.302519   70962 system_pods.go:61] "storage-provisioner" [bd55f9c8-580c-4eb1-adbc-020d5bbedce9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:31:35.302532   70962 system_pods.go:74] duration metric: took 16.508651ms to wait for pod list to return data ...
	I0401 19:31:35.302545   70962 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:31:35.305826   70962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:31:35.305862   70962 node_conditions.go:123] node cpu capacity is 2
	I0401 19:31:35.305876   70962 node_conditions.go:105] duration metric: took 3.322577ms to run NodePressure ...
	I0401 19:31:35.305895   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:35.603225   70962 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0401 19:31:35.608584   70962 kubeadm.go:733] kubelet initialised
	I0401 19:31:35.608611   70962 kubeadm.go:734] duration metric: took 5.361549ms waiting for restarted kubelet to initialise ...
	I0401 19:31:35.608620   70962 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:31:35.615252   70962 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-tdwrh" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.620605   70962 pod_ready.go:97] node "default-k8s-diff-port-734648" hosting pod "coredns-76f75df574-tdwrh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.620627   70962 pod_ready.go:81] duration metric: took 5.353257ms for pod "coredns-76f75df574-tdwrh" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:35.620634   70962 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-734648" hosting pod "coredns-76f75df574-tdwrh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.620641   70962 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.625280   70962 pod_ready.go:97] node "default-k8s-diff-port-734648" hosting pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.625297   70962 pod_ready.go:81] duration metric: took 4.646748ms for pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:35.625311   70962 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-734648" hosting pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.625325   70962 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.630150   70962 pod_ready.go:97] node "default-k8s-diff-port-734648" hosting pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.630170   70962 pod_ready.go:81] duration metric: took 4.83409ms for pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:35.630178   70962 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-734648" hosting pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.630184   70962 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.693865   70962 pod_ready.go:97] node "default-k8s-diff-port-734648" hosting pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.693890   70962 pod_ready.go:81] duration metric: took 63.697397ms for pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:35.693901   70962 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-734648" hosting pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.693908   70962 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mn546" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:36.090904   70962 pod_ready.go:92] pod "kube-proxy-mn546" in "kube-system" namespace has status "Ready":"True"
	I0401 19:31:36.090928   70962 pod_ready.go:81] duration metric: took 397.013717ms for pod "kube-proxy-mn546" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:36.090938   70962 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.929498   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:35.930010   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:35.930042   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:35.929963   71902 retry.go:31] will retry after 3.806123644s: waiting for machine to come up
	I0401 19:31:41.203538   70284 start.go:364] duration metric: took 56.718693538s to acquireMachinesLock for "no-preload-472858"
	I0401 19:31:41.203592   70284 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:31:41.203607   70284 fix.go:54] fixHost starting: 
	I0401 19:31:41.204096   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:31:41.204143   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:31:41.221574   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42471
	I0401 19:31:41.222045   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:31:41.222527   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:31:41.222547   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:31:41.222856   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:31:41.223051   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:31:41.223209   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:31:41.224801   70284 fix.go:112] recreateIfNeeded on no-preload-472858: state=Stopped err=<nil>
	I0401 19:31:41.224827   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	W0401 19:31:41.224979   70284 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:31:41.226937   70284 out.go:177] * Restarting existing kvm2 VM for "no-preload-472858" ...
	I0401 19:31:36.824790   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:38.824976   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:40.827269   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:41.228315   70284 main.go:141] libmachine: (no-preload-472858) Calling .Start
	I0401 19:31:41.228509   70284 main.go:141] libmachine: (no-preload-472858) Ensuring networks are active...
	I0401 19:31:41.229206   70284 main.go:141] libmachine: (no-preload-472858) Ensuring network default is active
	I0401 19:31:41.229603   70284 main.go:141] libmachine: (no-preload-472858) Ensuring network mk-no-preload-472858 is active
	I0401 19:31:41.229999   70284 main.go:141] libmachine: (no-preload-472858) Getting domain xml...
	I0401 19:31:41.230682   70284 main.go:141] libmachine: (no-preload-472858) Creating domain...
	I0401 19:31:38.097417   70962 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:40.098187   70962 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:42.099891   70962 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:39.739700   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.740313   71168 main.go:141] libmachine: (old-k8s-version-163608) Found IP for machine: 192.168.50.106
	I0401 19:31:39.740369   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has current primary IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.740386   71168 main.go:141] libmachine: (old-k8s-version-163608) Reserving static IP address...
	I0401 19:31:39.740767   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "old-k8s-version-163608", mac: "52:54:00:fe:1b:e7", ip: "192.168.50.106"} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.740798   71168 main.go:141] libmachine: (old-k8s-version-163608) Reserved static IP address: 192.168.50.106
	I0401 19:31:39.740818   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | skip adding static IP to network mk-old-k8s-version-163608 - found existing host DHCP lease matching {name: "old-k8s-version-163608", mac: "52:54:00:fe:1b:e7", ip: "192.168.50.106"}
	I0401 19:31:39.740839   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | Getting to WaitForSSH function...
	I0401 19:31:39.740857   71168 main.go:141] libmachine: (old-k8s-version-163608) Waiting for SSH to be available...
	I0401 19:31:39.743023   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.743417   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.743447   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.743589   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | Using SSH client type: external
	I0401 19:31:39.743614   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa (-rw-------)
	I0401 19:31:39.743648   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:31:39.743662   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | About to run SSH command:
	I0401 19:31:39.743676   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | exit 0
	I0401 19:31:39.877699   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | SSH cmd err, output: <nil>: 
	I0401 19:31:39.878044   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetConfigRaw
	I0401 19:31:39.878611   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:39.880733   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.881074   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.881107   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.881352   71168 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/config.json ...
	I0401 19:31:39.881510   71168 machine.go:94] provisionDockerMachine start ...
	I0401 19:31:39.881529   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:39.881766   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:39.883980   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.884318   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.884360   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.884483   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:39.884675   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.884877   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.885029   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:39.885175   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:39.885339   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:39.885349   71168 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:31:39.994935   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:31:39.994971   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:31:39.995213   71168 buildroot.go:166] provisioning hostname "old-k8s-version-163608"
	I0401 19:31:39.995241   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:31:39.995472   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:39.998179   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.998490   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.998525   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.998656   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:39.998805   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.998949   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.999054   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:39.999183   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:39.999372   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:39.999390   71168 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-163608 && echo "old-k8s-version-163608" | sudo tee /etc/hostname
	I0401 19:31:40.128852   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-163608
	
	I0401 19:31:40.128880   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.131508   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.131817   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.131874   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.131987   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.132188   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.132365   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.132503   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.132693   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:40.132890   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:40.132908   71168 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-163608' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-163608/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-163608' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:31:40.252693   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:31:40.252727   71168 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:31:40.252749   71168 buildroot.go:174] setting up certificates
	I0401 19:31:40.252759   71168 provision.go:84] configureAuth start
	I0401 19:31:40.252767   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:31:40.253030   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:40.255827   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.256183   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.256210   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.256418   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.259041   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.259388   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.259418   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.259540   71168 provision.go:143] copyHostCerts
	I0401 19:31:40.259592   71168 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:31:40.259602   71168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:31:40.259654   71168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:31:40.259745   71168 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:31:40.259754   71168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:31:40.259773   71168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:31:40.259822   71168 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:31:40.259830   71168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:31:40.259846   71168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:31:40.259891   71168 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-163608 san=[127.0.0.1 192.168.50.106 localhost minikube old-k8s-version-163608]
	I0401 19:31:40.465177   71168 provision.go:177] copyRemoteCerts
	I0401 19:31:40.465241   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:31:40.465265   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.467676   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.468040   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.468070   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.468272   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.468456   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.468622   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.468767   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:40.557764   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:31:40.585326   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0401 19:31:40.611671   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 19:31:40.639265   71168 provision.go:87] duration metric: took 386.497023ms to configureAuth
	I0401 19:31:40.639296   71168 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:31:40.639521   71168 config.go:182] Loaded profile config "old-k8s-version-163608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 19:31:40.639590   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.642321   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.642733   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.642762   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.642921   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.643122   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.643294   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.643442   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.643647   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:40.643802   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:40.643819   71168 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:31:40.940619   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:31:40.940647   71168 machine.go:97] duration metric: took 1.059122816s to provisionDockerMachine
	I0401 19:31:40.940661   71168 start.go:293] postStartSetup for "old-k8s-version-163608" (driver="kvm2")
	I0401 19:31:40.940672   71168 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:31:40.940687   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:40.940955   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:31:40.940981   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.943787   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.944159   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.944197   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.944347   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.944556   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.944700   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.944834   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:41.035824   71168 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:31:41.040975   71168 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:31:41.041007   71168 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:31:41.041085   71168 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:31:41.041165   71168 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:31:41.041255   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:31:41.052356   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:41.080699   71168 start.go:296] duration metric: took 140.024653ms for postStartSetup
	I0401 19:31:41.080737   71168 fix.go:56] duration metric: took 20.729726297s for fixHost
	I0401 19:31:41.080759   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:41.083664   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.084045   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.084075   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.084202   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:41.084405   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.084599   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.084796   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:41.084971   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:41.085169   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:41.085180   71168 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 19:31:41.203392   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999901.182365994
	
	I0401 19:31:41.203412   71168 fix.go:216] guest clock: 1711999901.182365994
	I0401 19:31:41.203419   71168 fix.go:229] Guest: 2024-04-01 19:31:41.182365994 +0000 UTC Remote: 2024-04-01 19:31:41.080741553 +0000 UTC m=+228.159955492 (delta=101.624441ms)
	I0401 19:31:41.203437   71168 fix.go:200] guest clock delta is within tolerance: 101.624441ms
	I0401 19:31:41.203442   71168 start.go:83] releasing machines lock for "old-k8s-version-163608", held for 20.852486097s
	I0401 19:31:41.203462   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.203744   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:41.206582   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.206952   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.206973   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.207151   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.207701   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.207891   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.207954   71168 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:31:41.207996   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:41.208096   71168 ssh_runner.go:195] Run: cat /version.json
	I0401 19:31:41.208127   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:41.210731   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.210928   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.211107   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.211132   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.211317   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:41.211446   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.211488   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.211491   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.211636   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:41.211692   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:41.211783   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:41.211891   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.212031   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:41.212187   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:41.296330   71168 ssh_runner.go:195] Run: systemctl --version
	I0401 19:31:41.326247   71168 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:31:41.479411   71168 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:31:41.486996   71168 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:31:41.487063   71168 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:31:41.507840   71168 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:31:41.507870   71168 start.go:494] detecting cgroup driver to use...
	I0401 19:31:41.507942   71168 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:31:41.533063   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:31:41.551699   71168 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:31:41.551754   71168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:31:41.568078   71168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:31:41.584278   71168 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:31:41.726884   71168 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:31:41.882514   71168 docker.go:233] disabling docker service ...
	I0401 19:31:41.882587   71168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:31:41.901235   71168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:31:41.919787   71168 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:31:42.082420   71168 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:31:42.248527   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:31:42.266610   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:31:42.295677   71168 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0401 19:31:42.295740   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.313855   71168 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:31:42.313920   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.327176   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.339527   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.351220   71168 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:31:42.363716   71168 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:31:42.379911   71168 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:31:42.379971   71168 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:31:42.395282   71168 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:31:42.407713   71168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:42.579648   71168 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:31:42.764748   71168 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:31:42.764858   71168 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:31:42.771038   71168 start.go:562] Will wait 60s for crictl version
	I0401 19:31:42.771125   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:42.775871   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:31:42.823135   71168 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:31:42.823218   71168 ssh_runner.go:195] Run: crio --version
	I0401 19:31:42.863748   71168 ssh_runner.go:195] Run: crio --version
	I0401 19:31:42.900263   71168 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0401 19:31:42.901631   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:42.904464   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:42.904773   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:42.904812   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:42.905048   71168 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0401 19:31:42.910117   71168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:42.925313   71168 kubeadm.go:877] updating cluster {Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:31:42.925475   71168 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 19:31:42.925542   71168 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:42.828772   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:44.829527   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:42.553437   70284 main.go:141] libmachine: (no-preload-472858) Waiting to get IP...
	I0401 19:31:42.554422   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:42.554810   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:42.554907   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:42.554806   72041 retry.go:31] will retry after 237.823736ms: waiting for machine to come up
	I0401 19:31:42.794546   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:42.795159   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:42.795205   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:42.795117   72041 retry.go:31] will retry after 326.387674ms: waiting for machine to come up
	I0401 19:31:43.123632   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:43.124306   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:43.124342   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:43.124244   72041 retry.go:31] will retry after 455.262949ms: waiting for machine to come up
	I0401 19:31:43.580752   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:43.581420   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:43.581440   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:43.581375   72041 retry.go:31] will retry after 520.307316ms: waiting for machine to come up
	I0401 19:31:44.103924   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:44.104407   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:44.104431   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:44.104361   72041 retry.go:31] will retry after 491.638031ms: waiting for machine to come up
	I0401 19:31:44.598440   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:44.598990   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:44.599015   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:44.598901   72041 retry.go:31] will retry after 652.234963ms: waiting for machine to come up
	I0401 19:31:45.252362   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:45.252901   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:45.252933   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:45.252853   72041 retry.go:31] will retry after 1.047335678s: waiting for machine to come up
	I0401 19:31:46.301894   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:46.302324   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:46.302349   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:46.302281   72041 retry.go:31] will retry after 1.303326069s: waiting for machine to come up
	I0401 19:31:44.101042   70962 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:46.099803   70962 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:31:46.099828   70962 pod_ready.go:81] duration metric: took 10.008882274s for pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:46.099843   70962 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:42.974220   71168 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 19:31:42.974307   71168 ssh_runner.go:195] Run: which lz4
	I0401 19:31:42.979179   71168 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0401 19:31:42.984204   71168 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:31:42.984236   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0401 19:31:45.108131   71168 crio.go:462] duration metric: took 2.128988098s to copy over tarball
	I0401 19:31:45.108232   71168 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 19:31:47.328534   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:49.827306   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:47.606907   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:47.607392   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:47.607419   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:47.607356   72041 retry.go:31] will retry after 1.729010443s: waiting for machine to come up
	I0401 19:31:49.338200   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:49.338722   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:49.338751   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:49.338667   72041 retry.go:31] will retry after 2.069036941s: waiting for machine to come up
	I0401 19:31:51.409458   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:51.409945   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:51.409976   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:51.409894   72041 retry.go:31] will retry after 2.405834741s: waiting for machine to come up
	I0401 19:31:48.108234   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:50.607720   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:48.581824   71168 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.473552916s)
	I0401 19:31:48.581871   71168 crio.go:469] duration metric: took 3.473700991s to extract the tarball
	I0401 19:31:48.581881   71168 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 19:31:48.630609   71168 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:48.673027   71168 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 19:31:48.673048   71168 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 19:31:48.673085   71168 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:31:48.673129   71168 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:48.673155   71168 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:48.673190   71168 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:48.673133   71168 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.673273   71168 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0401 19:31:48.673143   71168 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0401 19:31:48.673336   71168 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:48.675068   71168 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:31:48.675073   71168 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:48.675068   71168 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:48.675093   71168 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0401 19:31:48.675072   71168 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0401 19:31:48.675073   71168 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:48.675115   71168 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:48.675096   71168 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.827947   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.846025   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:48.848769   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:48.858366   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0401 19:31:48.858613   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0401 19:31:48.859241   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:48.862047   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:48.912299   71168 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0401 19:31:48.912346   71168 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.912399   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.030117   71168 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0401 19:31:49.030357   71168 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:49.030122   71168 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0401 19:31:49.030433   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.030460   71168 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:49.030526   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.062211   71168 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0401 19:31:49.062327   71168 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0401 19:31:49.062234   71168 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0401 19:31:49.062415   71168 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0401 19:31:49.062396   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.062461   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.078249   71168 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0401 19:31:49.078308   71168 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:49.078323   71168 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0401 19:31:49.078358   71168 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:49.078379   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:49.078398   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.078426   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:49.078440   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:49.078362   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.078466   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 19:31:49.078494   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 19:31:49.225060   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:49.225137   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0401 19:31:49.225160   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0401 19:31:49.225199   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0401 19:31:49.225250   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0401 19:31:49.225252   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:49.225326   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0401 19:31:49.280782   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0401 19:31:49.281709   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0401 19:31:49.299218   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:31:49.465497   71168 cache_images.go:92] duration metric: took 792.432136ms to LoadCachedImages
	W0401 19:31:49.465595   71168 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0401 19:31:49.465613   71168 kubeadm.go:928] updating node { 192.168.50.106 8443 v1.20.0 crio true true} ...
	I0401 19:31:49.465768   71168 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-163608 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:31:49.465862   71168 ssh_runner.go:195] Run: crio config
	I0401 19:31:49.529730   71168 cni.go:84] Creating CNI manager for ""
	I0401 19:31:49.529757   71168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:49.529771   71168 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:31:49.529799   71168 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.106 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-163608 NodeName:old-k8s-version-163608 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0401 19:31:49.529969   71168 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-163608"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:31:49.530037   71168 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0401 19:31:49.542642   71168 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:31:49.542724   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:31:49.557001   71168 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0401 19:31:49.579568   71168 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:31:49.599692   71168 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0401 19:31:49.619780   71168 ssh_runner.go:195] Run: grep 192.168.50.106	control-plane.minikube.internal$ /etc/hosts
	I0401 19:31:49.625597   71168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:49.643862   71168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:49.791391   71168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:31:49.814470   71168 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608 for IP: 192.168.50.106
	I0401 19:31:49.814497   71168 certs.go:194] generating shared ca certs ...
	I0401 19:31:49.814516   71168 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:49.814680   71168 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:31:49.814736   71168 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:31:49.814745   71168 certs.go:256] generating profile certs ...
	I0401 19:31:49.814852   71168 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/client.key
	I0401 19:31:49.814916   71168 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.key.f2de0982
	I0401 19:31:49.814964   71168 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.key
	I0401 19:31:49.815119   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:31:49.815178   71168 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:31:49.815195   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:31:49.815224   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:31:49.815266   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:31:49.815299   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:31:49.815362   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:49.816196   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:31:49.866842   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:31:49.913788   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:31:49.953223   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:31:50.004313   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 19:31:50.046972   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:31:50.086990   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:31:50.134907   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 19:31:50.163395   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:31:50.191901   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:31:50.221196   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:31:50.253024   71168 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:31:50.275781   71168 ssh_runner.go:195] Run: openssl version
	I0401 19:31:50.282795   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:31:50.296952   71168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:31:50.303868   71168 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:31:50.303950   71168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:31:50.312249   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:31:50.328985   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:31:50.345917   71168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:50.352041   71168 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:50.352103   71168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:50.358752   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:31:50.371702   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:31:50.384633   71168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:31:50.391229   71168 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:31:50.391277   71168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:31:50.397980   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:31:50.412674   71168 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:31:50.418084   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:31:50.425102   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:31:50.431949   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:31:50.438665   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:31:50.446633   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:31:50.454688   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 19:31:50.462805   71168 kubeadm.go:391] StartCluster: {Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:31:50.462922   71168 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:31:50.462956   71168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:50.505702   71168 cri.go:89] found id: ""
	I0401 19:31:50.505788   71168 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:31:50.517916   71168 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:31:50.517934   71168 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:31:50.517940   71168 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:31:50.517995   71168 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:31:50.529459   71168 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:31:50.530408   71168 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-163608" does not appear in /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:31:50.531055   71168 kubeconfig.go:62] /home/jenkins/minikube-integration/18233-10493/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-163608" cluster setting kubeconfig missing "old-k8s-version-163608" context setting]
	I0401 19:31:50.532369   71168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:50.534578   71168 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:31:50.546275   71168 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.106
	I0401 19:31:50.546309   71168 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:31:50.546328   71168 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:31:50.546371   71168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:50.588826   71168 cri.go:89] found id: ""
	I0401 19:31:50.588881   71168 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:31:50.610933   71168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:31:50.622201   71168 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:31:50.622221   71168 kubeadm.go:156] found existing configuration files:
	
	I0401 19:31:50.622266   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:31:50.634006   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:31:50.634071   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:31:50.647891   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:31:50.662548   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:31:50.662596   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:31:50.674627   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:31:50.686739   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:31:50.686825   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:31:50.700400   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:31:50.712952   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:31:50.713014   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:31:50.725616   71168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:31:50.739130   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:50.874552   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:51.568640   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:51.850288   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:52.009607   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:52.122887   71168 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:31:52.122962   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:52.623084   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:51.827968   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:54.325686   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:56.325892   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:53.817748   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:53.818158   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:53.818184   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:53.818122   72041 retry.go:31] will retry after 2.747390243s: waiting for machine to come up
	I0401 19:31:56.567288   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:56.567711   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:56.567742   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:56.567657   72041 retry.go:31] will retry after 3.904473051s: waiting for machine to come up
	I0401 19:31:53.107786   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:55.108974   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:53.123783   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:53.623248   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:54.124004   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:54.623873   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:55.123458   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:55.623923   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:56.123441   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:56.623192   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:57.123012   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:57.624010   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:58.325934   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:00.825343   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:00.476692   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.477192   70284 main.go:141] libmachine: (no-preload-472858) Found IP for machine: 192.168.72.119
	I0401 19:32:00.477217   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has current primary IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.477223   70284 main.go:141] libmachine: (no-preload-472858) Reserving static IP address...
	I0401 19:32:00.477672   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "no-preload-472858", mac: "52:54:00:0a:2e:03", ip: "192.168.72.119"} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.477708   70284 main.go:141] libmachine: (no-preload-472858) DBG | skip adding static IP to network mk-no-preload-472858 - found existing host DHCP lease matching {name: "no-preload-472858", mac: "52:54:00:0a:2e:03", ip: "192.168.72.119"}
	I0401 19:32:00.477726   70284 main.go:141] libmachine: (no-preload-472858) Reserved static IP address: 192.168.72.119
	I0401 19:32:00.477742   70284 main.go:141] libmachine: (no-preload-472858) Waiting for SSH to be available...
	I0401 19:32:00.477770   70284 main.go:141] libmachine: (no-preload-472858) DBG | Getting to WaitForSSH function...
	I0401 19:32:00.479949   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.480306   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.480334   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.480475   70284 main.go:141] libmachine: (no-preload-472858) DBG | Using SSH client type: external
	I0401 19:32:00.480508   70284 main.go:141] libmachine: (no-preload-472858) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa (-rw-------)
	I0401 19:32:00.480538   70284 main.go:141] libmachine: (no-preload-472858) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:32:00.480554   70284 main.go:141] libmachine: (no-preload-472858) DBG | About to run SSH command:
	I0401 19:32:00.480566   70284 main.go:141] libmachine: (no-preload-472858) DBG | exit 0
	I0401 19:32:00.610108   70284 main.go:141] libmachine: (no-preload-472858) DBG | SSH cmd err, output: <nil>: 
	I0401 19:32:00.610458   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetConfigRaw
	I0401 19:32:00.611059   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetIP
	I0401 19:32:00.613496   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.613872   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.613906   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.614179   70284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/config.json ...
	I0401 19:32:00.614363   70284 machine.go:94] provisionDockerMachine start ...
	I0401 19:32:00.614382   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:00.614593   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:00.617019   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.617404   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.617430   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.617585   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:00.617780   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.617953   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.618098   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:00.618260   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:00.618451   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:00.618462   70284 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:32:00.730438   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:32:00.730473   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:32:00.730725   70284 buildroot.go:166] provisioning hostname "no-preload-472858"
	I0401 19:32:00.730754   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:32:00.730994   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:00.733932   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.734274   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.734308   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.734419   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:00.734591   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.734752   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.734918   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:00.735092   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:00.735296   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:00.735313   70284 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-472858 && echo "no-preload-472858" | sudo tee /etc/hostname
	I0401 19:32:00.865664   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-472858
	
	I0401 19:32:00.865702   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:00.868247   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.868619   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.868649   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.868845   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:00.869037   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.869244   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.869420   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:00.869671   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:00.869840   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:00.869859   70284 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-472858' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-472858/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-472858' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:32:00.991430   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:32:00.991460   70284 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:32:00.991484   70284 buildroot.go:174] setting up certificates
	I0401 19:32:00.991493   70284 provision.go:84] configureAuth start
	I0401 19:32:00.991504   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:32:00.991748   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetIP
	I0401 19:32:00.994239   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.994566   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.994596   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.994722   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:00.996735   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.997064   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.997090   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.997212   70284 provision.go:143] copyHostCerts
	I0401 19:32:00.997265   70284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:32:00.997281   70284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:32:00.997346   70284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:32:00.997493   70284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:32:00.997507   70284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:32:00.997533   70284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:32:00.997619   70284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:32:00.997629   70284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:32:00.997667   70284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:32:00.997733   70284 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.no-preload-472858 san=[127.0.0.1 192.168.72.119 localhost minikube no-preload-472858]
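The provision step above issues a server certificate signed by the minikube CA, with the SAN list shown in the log line. As a rough illustration only (this is not minikube's provision.go; the CA below is a throwaway one generated in-process rather than ca.pem/ca-key.pem from .minikube/certs, and mapping org= to the certificate's Organization field is an assumption), the same kind of certificate can be produced with Go's crypto/x509:

// Minimal sketch: issue a server certificate with the SANs logged above,
// signed by a locally generated CA. Error handling is trimmed for brevity.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; the real flow would load the existing CA cert and key.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SAN set from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-472858"}},
		DNSNames:     []string{"localhost", "minikube", "no-preload-472858"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.119")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit the server certificate in PEM form (server.pem in the log).
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}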
	I0401 19:32:01.212397   70284 provision.go:177] copyRemoteCerts
	I0401 19:32:01.212453   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:32:01.212473   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.214810   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.215170   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.215198   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.215398   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.215603   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.215761   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.215903   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:32:01.303113   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 19:32:01.331807   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 19:32:01.358429   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:32:01.384521   70284 provision.go:87] duration metric: took 393.005717ms to configureAuth
	I0401 19:32:01.384559   70284 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:32:01.384748   70284 config.go:182] Loaded profile config "no-preload-472858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0401 19:32:01.384862   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.387446   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.387828   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.387866   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.387966   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.388168   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.388356   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.388509   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.388663   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:01.388847   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:01.388867   70284 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:32:01.692586   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:32:01.692615   70284 machine.go:97] duration metric: took 1.078237975s to provisionDockerMachine
	I0401 19:32:01.692628   70284 start.go:293] postStartSetup for "no-preload-472858" (driver="kvm2")
	I0401 19:32:01.692644   70284 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:32:01.692668   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.692988   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:32:01.693012   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.696033   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.696405   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.696450   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.696603   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.696763   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.696901   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.697089   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:32:01.786626   70284 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:32:01.791703   70284 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:32:01.791726   70284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:32:01.791802   70284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:32:01.791901   70284 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:32:01.791991   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:32:01.803733   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:32:01.831768   70284 start.go:296] duration metric: took 139.126077ms for postStartSetup
	I0401 19:32:01.831804   70284 fix.go:56] duration metric: took 20.628199635s for fixHost
	I0401 19:32:01.831823   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.834218   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.834548   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.834574   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.834725   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.834901   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.835066   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.835188   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.835327   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:01.835544   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:01.835558   70284 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 19:31:57.607923   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:59.608857   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:02.106942   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:58.123200   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:58.624028   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:59.123026   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:59.623993   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:00.123039   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:00.623632   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:01.123204   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:01.623162   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:02.123264   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:02.623788   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:01.947198   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999921.892647753
	
	I0401 19:32:01.947267   70284 fix.go:216] guest clock: 1711999921.892647753
	I0401 19:32:01.947279   70284 fix.go:229] Guest: 2024-04-01 19:32:01.892647753 +0000 UTC Remote: 2024-04-01 19:32:01.831808507 +0000 UTC m=+359.938807685 (delta=60.839246ms)
	I0401 19:32:01.947305   70284 fix.go:200] guest clock delta is within tolerance: 60.839246ms
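The fix.go lines above run `date +%s.%N` on the guest and compare the result against the host clock to decide whether the skew is acceptable. A minimal sketch of that comparison, using the exact values from the log (the 2-second tolerance is an assumed value for illustration, not minikube's constant):

// Sketch: parse the guest's `date +%s.%N` output and compute the absolute
// delta against the host clock, then check it against a tolerance.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	nsec := int64(0)
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64) // %N prints nanoseconds
	}
	d := time.Unix(sec, nsec).Sub(host)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	// Values taken from the log: guest 1711999921.892647753 vs remote 1711999921.831808507.
	d, _ := guestClockDelta("1711999921.892647753", time.Unix(1711999921, 831808507))
	fmt.Println("delta:", d, "within assumed 2s tolerance:", d < 2*time.Second)
}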
	I0401 19:32:01.947317   70284 start.go:83] releasing machines lock for "no-preload-472858", held for 20.743748352s
	I0401 19:32:01.947347   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.947621   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetIP
	I0401 19:32:01.950387   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.950719   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.950750   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.950940   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.951438   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.951631   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.951681   70284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:32:01.951737   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.951854   70284 ssh_runner.go:195] Run: cat /version.json
	I0401 19:32:01.951881   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.954468   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.954603   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.954780   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.954815   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.954932   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.954960   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.954984   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.955193   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.955230   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.955341   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.955388   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.955510   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.955501   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:32:01.955670   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:32:02.035332   70284 ssh_runner.go:195] Run: systemctl --version
	I0401 19:32:02.061178   70284 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:32:02.220309   70284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:32:02.227811   70284 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:32:02.227885   70284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:32:02.247605   70284 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:32:02.247634   70284 start.go:494] detecting cgroup driver to use...
	I0401 19:32:02.247690   70284 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:32:02.265463   70284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:32:02.280175   70284 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:32:02.280246   70284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:32:02.295003   70284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:32:02.315072   70284 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:32:02.449108   70284 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:32:02.627772   70284 docker.go:233] disabling docker service ...
	I0401 19:32:02.627850   70284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:32:02.642924   70284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:32:02.657038   70284 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:32:02.787085   70284 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:32:02.918355   70284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:32:02.934828   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:32:02.955495   70284 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:32:02.955548   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:02.966690   70284 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:32:02.966754   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:02.977812   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:02.989329   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:03.000727   70284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:32:03.012341   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:03.023305   70284 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:03.044213   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:03.055614   70284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:32:03.065880   70284 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:32:03.065927   70284 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:32:03.080514   70284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
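The three commands above follow a check-then-fallback pattern: probing the bridge-netfilter sysctl fails while br_netfilter is not loaded, so the module is loaded and IPv4 forwarding is switched on before the runtime restart. A hedged sketch of that pattern (hypothetical helper for illustration, not minikube's crio.go; it needs root/sudo just like the logged commands):

package main

import (
	"log"
	"os/exec"
)

// ensureNetfilter mirrors the logged fallback: try the sysctl probe, load the
// module if the probe fails, then enable IPv4 forwarding.
func ensureNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		log.Printf("sysctl probe failed (expected before br_netfilter is loaded): %v", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return err
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureNetfilter(); err != nil {
		log.Fatal(err)
	}
}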
	I0401 19:32:03.090798   70284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:32:03.224199   70284 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:32:03.389414   70284 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:32:03.389482   70284 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:32:03.395493   70284 start.go:562] Will wait 60s for crictl version
	I0401 19:32:03.395539   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.399739   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:32:03.441020   70284 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:32:03.441114   70284 ssh_runner.go:195] Run: crio --version
	I0401 19:32:03.474572   70284 ssh_runner.go:195] Run: crio --version
	I0401 19:32:03.511681   70284 out.go:177] * Preparing Kubernetes v1.30.0-rc.0 on CRI-O 1.29.1 ...
	I0401 19:32:02.825628   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:04.825973   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:03.513067   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetIP
	I0401 19:32:03.515901   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:03.516281   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:03.516315   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:03.516523   70284 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0401 19:32:03.521197   70284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:32:03.536333   70284 kubeadm.go:877] updating cluster {Name:no-preload-472858 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0-rc.0 ClusterName:no-preload-472858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:32:03.536459   70284 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0401 19:32:03.536507   70284 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:32:03.582858   70284 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.0". assuming images are not preloaded.
	I0401 19:32:03.582887   70284 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.0 registry.k8s.io/kube-controller-manager:v1.30.0-rc.0 registry.k8s.io/kube-scheduler:v1.30.0-rc.0 registry.k8s.io/kube-proxy:v1.30.0-rc.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 19:32:03.582970   70284 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:03.583026   70284 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:03.583032   70284 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.583071   70284 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.583161   70284 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.582997   70284 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:03.583238   70284 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0401 19:32:03.583388   70284 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:03.584618   70284 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.584626   70284 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:03.584630   70284 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:03.584619   70284 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:03.584640   70284 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0401 19:32:03.584626   70284 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:03.584701   70284 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.584856   70284 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.730086   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.752217   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.765621   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.766526   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:03.770748   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0401 19:32:03.777614   70284 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0401 19:32:03.777672   70284 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.777699   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.840814   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:03.852416   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:03.869889   70284 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" does not exist at hash "e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3" in container runtime
	I0401 19:32:03.869929   70284 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.869979   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.874654   70284 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" does not exist at hash "ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a" in container runtime
	I0401 19:32:03.874693   70284 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.874737   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.899207   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:03.906139   70284 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" does not exist at hash "fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5" in container runtime
	I0401 19:32:03.906182   70284 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:03.906227   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.996916   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.996987   70284 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.0" does not exist at hash "33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652" in container runtime
	I0401 19:32:03.997022   70284 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0401 19:32:03.997045   70284 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:03.997053   70284 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:03.997054   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.997089   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.997128   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.997142   70284 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0401 19:32:03.997090   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.997164   70284 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:03.997194   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.997211   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:04.090272   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:04.090548   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0401 19:32:04.090639   70284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0401 19:32:04.102041   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:04.102130   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0
	I0401 19:32:04.102168   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0
	I0401 19:32:04.102226   70284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0401 19:32:04.102241   70284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0401 19:32:04.102278   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:04.108100   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0
	I0401 19:32:04.108192   70284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0401 19:32:04.182707   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0401 19:32:04.182747   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0401 19:32:04.182759   70284 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0401 19:32:04.182815   70284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0401 19:32:04.182820   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0401 19:32:04.182883   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0
	I0401 19:32:04.182988   70284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0401 19:32:04.186135   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0 (exists)
	I0401 19:32:04.186175   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0 (exists)
	I0401 19:32:04.186221   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0 (exists)
	I0401 19:32:04.186242   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0401 19:32:04.186324   70284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0401 19:32:06.352362   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.169442796s)
	I0401 19:32:06.352398   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0401 19:32:06.352419   70284 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0401 19:32:06.352416   70284 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0: (2.16957379s)
	I0401 19:32:06.352443   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0401 19:32:06.352465   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0401 19:32:06.352465   70284 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0: (2.16945688s)
	I0401 19:32:06.352479   70284 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.166139431s)
	I0401 19:32:06.352490   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0401 19:32:06.352491   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0 (exists)
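The cache_images/ssh_runner lines above decide, per image, whether the cached tarball has to be transferred to the VM: the remote copy is stat'ed with `stat -c "%s %y"` and the copy is skipped when it already matches. A simplified sketch of that decision (hypothetical helper comparing sizes only; the real check also looks at modification times):

package main

import (
	"fmt"
	"os"
)

// needsCopy reports whether the locally cached image tarball still has to be
// transferred, given the size already present on the VM.
func needsCopy(localPath string, remoteSizeBytes int64) (bool, error) {
	info, err := os.Stat(localPath)
	if err != nil {
		return false, err
	}
	// Simplification: size-only comparison stands in for the "%s %y" check.
	return info.Size() != remoteSizeBytes, nil
}

func main() {
	// Placeholder path and size, purely for illustration.
	copyNeeded, err := needsCopy("/tmp/coredns_v1.11.1", 18182961)
	fmt.Println("copy needed:", copyNeeded, "err:", err)
}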
	I0401 19:32:04.109989   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:06.294038   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:03.123452   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:03.623784   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:04.123649   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:04.623076   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:05.123822   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:05.623487   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:06.123635   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:06.623689   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:07.123919   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:07.623237   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:06.826244   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:09.326937   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:09.261547   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0: (2.909056315s)
	I0401 19:32:09.261572   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0 from cache
	I0401 19:32:09.261600   70284 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0401 19:32:09.261668   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0401 19:32:11.739636   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0: (2.477945807s)
	I0401 19:32:11.739667   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0 from cache
	I0401 19:32:11.739702   70284 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0401 19:32:11.739761   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0401 19:32:08.609901   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:11.114752   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:08.123689   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:08.623160   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:09.124002   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:09.623090   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:10.123049   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:10.623111   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:11.123042   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:11.623980   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:12.123074   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:12.623530   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:11.826409   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:13.828437   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:16.326097   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:13.195232   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0: (1.455440816s)
	I0401 19:32:13.195267   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0 from cache
	I0401 19:32:13.195299   70284 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0401 19:32:13.195350   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0401 19:32:13.607042   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:16.107993   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:13.123428   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:13.623899   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:14.123324   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:14.623889   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:15.123496   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:15.623779   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:16.124012   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:16.623620   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:17.123867   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:17.623014   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
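Process 71168 above is polling roughly every 500ms for a kube-apiserver process whose command line mentions "minikube", using pgrep's exit status as the signal. A minimal sketch of such a wait loop (assumed helper for illustration, not minikube's kubeadm.go; the 90-second deadline is likewise an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a matching process appears or the
// deadline passes. pgrep exits 0 as soon as at least one process matches.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(90 * time.Second); err != nil {
		fmt.Println(err)
	}
}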
	I0401 19:32:18.326127   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:20.326575   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:17.202247   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.006869591s)
	I0401 19:32:17.202284   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0401 19:32:17.202315   70284 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0401 19:32:17.202364   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0401 19:32:17.962735   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0401 19:32:17.962785   70284 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0401 19:32:17.962850   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0401 19:32:20.235136   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0: (2.272262595s)
	I0401 19:32:20.235161   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0 from cache
	I0401 19:32:20.235193   70284 cache_images.go:123] Successfully loaded all cached images
	I0401 19:32:20.235197   70284 cache_images.go:92] duration metric: took 16.652290938s to LoadCachedImages
	I0401 19:32:20.235205   70284 kubeadm.go:928] updating node { 192.168.72.119 8443 v1.30.0-rc.0 crio true true} ...
	I0401 19:32:20.235332   70284 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-472858 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-472858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:32:20.235402   70284 ssh_runner.go:195] Run: crio config
	I0401 19:32:20.296015   70284 cni.go:84] Creating CNI manager for ""
	I0401 19:32:20.296039   70284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:32:20.296050   70284 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:32:20.296074   70284 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.119 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-472858 NodeName:no-preload-472858 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:32:20.296217   70284 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-472858"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:32:20.296275   70284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.0
	I0401 19:32:20.307937   70284 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:32:20.308009   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:32:20.318571   70284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0401 19:32:20.339284   70284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0401 19:32:20.358601   70284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0401 19:32:20.379394   70284 ssh_runner.go:195] Run: grep 192.168.72.119	control-plane.minikube.internal$ /etc/hosts
	I0401 19:32:20.383948   70284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
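The bash one-liner above updates /etc/hosts idempotently: it drops any existing line for control-plane.minikube.internal, appends a fresh entry, and copies the result back over the file via a temp file. A rough Go equivalent, purely for illustration (not minikube's code; it has to run as root, just as the logged command relies on sudo cp):

package main

import (
	"os"
	"strings"
)

// upsertHost removes any stale line ending in "<tab><name>" and appends the
// fresh "ip<tab>name" entry, mirroring the grep -v + echo pipeline above.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = upsertHost("/etc/hosts", "192.168.72.119", "control-plane.minikube.internal")
}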
	I0401 19:32:20.397559   70284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:32:20.549147   70284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:32:20.568027   70284 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858 for IP: 192.168.72.119
	I0401 19:32:20.568051   70284 certs.go:194] generating shared ca certs ...
	I0401 19:32:20.568070   70284 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:32:20.568273   70284 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:32:20.568337   70284 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:32:20.568352   70284 certs.go:256] generating profile certs ...
	I0401 19:32:20.568453   70284 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/client.key
	I0401 19:32:20.568534   70284 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/apiserver.key.bfc8ff8f
	I0401 19:32:20.568586   70284 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/proxy-client.key
	I0401 19:32:20.568691   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:32:20.568718   70284 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:32:20.568728   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:32:20.568747   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:32:20.568773   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:32:20.568795   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:32:20.568830   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:32:20.569519   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:32:20.605218   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:32:20.650321   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:32:20.676884   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:32:20.705378   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 19:32:20.733068   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:32:20.767387   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:32:20.793543   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:32:20.820843   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:32:20.848364   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:32:20.877551   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:32:20.904650   70284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:32:20.922876   70284 ssh_runner.go:195] Run: openssl version
	I0401 19:32:20.929441   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:32:20.942496   70284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:32:20.948011   70284 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:32:20.948080   70284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:32:20.954320   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:32:20.968060   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:32:20.981591   70284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:32:20.986660   70284 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:32:20.986706   70284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:32:20.993394   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:32:21.006530   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:32:21.020014   70284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:32:21.025507   70284 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:32:21.025560   70284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:32:21.032433   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
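The block above hashes each CA bundle with `openssl x509 -hash -noout` and then symlinks it into /etc/ssl/certs under the `<hash>.0` name that OpenSSL-based clients use to look up trust anchors. A minimal Go sketch of that same step, assuming openssl is on PATH; the helper name installCACert is illustrative, not minikube's own code:

// Hash a CA certificate with openssl and expose it in /etc/ssl/certs as
// <hash>.0, mirroring the `ln -fs` commands logged above (sketch only).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}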
	I0401 19:32:21.047002   70284 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:32:21.052551   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:32:21.059875   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:32:21.067243   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:32:21.074304   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:32:21.080978   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:32:21.088051   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
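The `openssl x509 -checkend 86400` runs above verify that none of the control-plane certificates expire within the next 24 hours before reusing them. A minimal equivalent sketch in Go using crypto/x509; the function name expiresSoon is illustrative only:

// Load a PEM certificate and report whether it expires within the given
// window, matching what `-checkend 86400` checks (sketch only).
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresSoon(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}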
	I0401 19:32:21.095219   70284 kubeadm.go:391] StartCluster: {Name:no-preload-472858 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0-rc.0 ClusterName:no-preload-472858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:32:21.095325   70284 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:32:21.095403   70284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:32:21.144103   70284 cri.go:89] found id: ""
	I0401 19:32:21.144187   70284 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:32:21.157222   70284 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:32:21.157241   70284 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:32:21.157246   70284 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:32:21.157290   70284 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:32:21.169027   70284 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:32:21.170123   70284 kubeconfig.go:125] found "no-preload-472858" server: "https://192.168.72.119:8443"
	I0401 19:32:21.172523   70284 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:32:21.183801   70284 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.119
	I0401 19:32:21.183838   70284 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:32:21.183847   70284 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:32:21.183892   70284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:32:21.229279   70284 cri.go:89] found id: ""
	I0401 19:32:21.229357   70284 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:32:21.249719   70284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:32:21.261894   70284 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:32:21.261929   70284 kubeadm.go:156] found existing configuration files:
	
	I0401 19:32:21.261984   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:32:21.273961   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:32:21.274026   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:32:21.286746   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:32:21.297920   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:32:21.297986   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:32:21.308793   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:32:21.319612   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:32:21.319658   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:32:21.332730   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:32:21.344752   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:32:21.344810   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:32:21.355821   70284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:32:21.366649   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:21.482208   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:18.607685   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:20.607824   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:18.123795   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:18.623529   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:19.123446   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:19.623223   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:20.123133   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:20.623058   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:21.123302   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:21.623115   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:22.123810   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:22.623878   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:22.826056   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:24.826357   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:22.312148   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:22.533156   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:22.620390   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
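The restart path above does not run a full `kubeadm init`; it replays the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config. A rough Go sketch of that sequence, with the binary path and phase list copied from the log lines and error handling simplified for illustration:

// Re-run the kubeadm init phases logged above, one at a time, against the
// kubeadm.yaml that was copied into place (sketch, not minikube's code).
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.30.0-rc.0/kubeadm"
	config := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", config)
		cmd := exec.Command("sudo", append([]string{kubeadm}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", phase, err)
			return
		}
	}
}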
	I0401 19:32:22.704948   70284 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:32:22.705039   70284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.205114   70284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.706000   70284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.725209   70284 api_server.go:72] duration metric: took 1.020261742s to wait for apiserver process to appear ...
	I0401 19:32:23.725243   70284 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:32:23.725264   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:23.725749   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": dial tcp 192.168.72.119:8443: connect: connection refused
	I0401 19:32:24.226383   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
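The "Checking apiserver healthz" lines poll https://192.168.72.119:8443/healthz and tolerate connection-refused and timeout errors while the apiserver comes back up. A minimal sketch of that kind of probe, assuming a self-signed apiserver certificate (hence skipping TLS verification) and an illustrative retry interval:

// Poll the apiserver /healthz endpoint until it returns 200 (sketch only).
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.119:8443/healthz"
	for {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err)
		} else {
			fmt.Println("status:", resp.StatusCode)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}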
	I0401 19:32:23.107450   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:25.109899   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
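The repeated pod_ready lines poll the metrics-server pod until its Ready condition turns True. A small helper expressing that check against the Kubernetes core API types (k8s.io/api/core/v1); the function name is illustrative, not the one used by the test harness:

// isPodReady reports whether the pod's Ready condition is True (sketch).
package podready

import corev1 "k8s.io/api/core/v1"

func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}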
	I0401 19:32:23.123507   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.623244   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:24.123444   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:24.623346   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:25.123834   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:25.623814   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:26.124028   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:26.623428   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:27.123592   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:27.623451   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:27.327961   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:29.826272   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:29.226831   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:29.226876   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:27.607575   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:29.608427   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:32.106668   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:28.123454   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:28.623502   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:29.123265   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:29.623449   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:30.123525   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:30.623634   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:31.123972   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:31.623023   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:32.123346   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:32.623839   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:32.325638   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:34.325777   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:36.326510   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:34.227668   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:34.227723   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:34.606929   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:36.607515   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:33.123673   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:33.623088   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:34.123230   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:34.623967   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:35.123420   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:35.623499   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:36.123152   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:36.623963   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:37.123682   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:37.623536   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:38.829585   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:41.325607   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:39.228117   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:39.228164   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:39.107473   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:41.607043   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:38.123238   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:38.623831   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:39.123180   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:39.623801   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:40.123478   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:40.623651   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:41.123687   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:41.624016   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:42.123891   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:42.623493   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:43.326457   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:45.827310   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:44.228934   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:44.228982   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:44.259601   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": read tcp 192.168.72.1:37026->192.168.72.119:8443: read: connection reset by peer
	I0401 19:32:44.726186   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:44.726759   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": dial tcp 192.168.72.119:8443: connect: connection refused
	I0401 19:32:45.226347   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:43.607936   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:46.106775   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:43.123504   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:43.623527   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:44.124016   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:44.623931   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:45.123188   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:45.623649   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:46.123570   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:46.623179   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:47.123273   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:47.623842   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:48.325252   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:50.327365   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:50.226859   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:50.226907   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:48.109152   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:50.607327   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:48.123759   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:48.623092   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:49.123174   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:49.623986   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:50.123301   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:50.623694   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:51.123466   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:51.623618   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:52.123073   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:32:52.123172   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:32:52.164635   71168 cri.go:89] found id: ""
	I0401 19:32:52.164656   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.164663   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:32:52.164669   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:32:52.164738   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:32:52.202531   71168 cri.go:89] found id: ""
	I0401 19:32:52.202560   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.202572   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:32:52.202580   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:32:52.202653   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:32:52.247667   71168 cri.go:89] found id: ""
	I0401 19:32:52.247693   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.247703   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:32:52.247714   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:32:52.247774   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:32:52.289029   71168 cri.go:89] found id: ""
	I0401 19:32:52.289054   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.289062   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:32:52.289068   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:32:52.289114   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:32:52.326820   71168 cri.go:89] found id: ""
	I0401 19:32:52.326864   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.326875   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:32:52.326882   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:32:52.326944   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:32:52.362793   71168 cri.go:89] found id: ""
	I0401 19:32:52.362827   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.362838   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:32:52.362845   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:32:52.362950   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:32:52.400174   71168 cri.go:89] found id: ""
	I0401 19:32:52.400204   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.400215   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:32:52.400222   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:32:52.400282   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:32:52.436027   71168 cri.go:89] found id: ""
	I0401 19:32:52.436056   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.436066   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
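Each "listing CRI containers" cycle above asks crictl for container IDs matching a component name and treats an empty result as "no container found". A sketch of that lookup, run via sudo as in the log; findContainers is a hypothetical helper name:

// Query the CRI runtime for containers whose name matches, returning their
// IDs; an empty slice means nothing is running or exited (sketch only).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func findContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	ids, err := findContainers("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	if len(ids) == 0 {
		fmt.Println(`no container was found matching "kube-apiserver"`)
		return
	}
	fmt.Println("found:", ids)
}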
	I0401 19:32:52.436085   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:32:52.436099   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:32:52.477246   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:32:52.477272   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:32:52.529215   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:32:52.529247   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:32:52.544695   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:32:52.544724   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:32:52.677816   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:32:52.677849   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:32:52.677877   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:32:52.825288   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:54.826043   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:55.228105   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:55.228139   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:53.106774   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:55.107668   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:55.241224   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:55.256975   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:32:55.257045   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:32:55.298280   71168 cri.go:89] found id: ""
	I0401 19:32:55.298307   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.298319   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:32:55.298326   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:32:55.298397   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:32:55.337707   71168 cri.go:89] found id: ""
	I0401 19:32:55.337732   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.337739   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:32:55.337745   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:32:55.337791   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:32:55.381455   71168 cri.go:89] found id: ""
	I0401 19:32:55.381479   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.381490   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:32:55.381496   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:32:55.381557   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:32:55.420715   71168 cri.go:89] found id: ""
	I0401 19:32:55.420739   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.420749   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:32:55.420756   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:32:55.420820   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:32:55.459546   71168 cri.go:89] found id: ""
	I0401 19:32:55.459575   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.459583   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:32:55.459588   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:32:55.459634   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:32:55.504240   71168 cri.go:89] found id: ""
	I0401 19:32:55.504267   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.504277   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:32:55.504285   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:32:55.504368   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:32:55.539399   71168 cri.go:89] found id: ""
	I0401 19:32:55.539426   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.539437   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:32:55.539443   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:32:55.539509   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:32:55.583823   71168 cri.go:89] found id: ""
	I0401 19:32:55.583861   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.583872   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:32:55.583881   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:32:55.583895   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:32:55.645489   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:32:55.645523   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:32:55.712883   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:32:55.712920   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:32:55.734890   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:32:55.734923   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:32:55.853068   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:32:55.853089   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:32:55.853102   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:32:57.325965   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:59.827753   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:00.228533   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:33:00.228582   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:57.607203   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:59.610732   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:02.108676   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:58.435925   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:58.450910   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:32:58.450980   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:32:58.487470   71168 cri.go:89] found id: ""
	I0401 19:32:58.487495   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.487506   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:32:58.487514   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:32:58.487562   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:32:58.529513   71168 cri.go:89] found id: ""
	I0401 19:32:58.529534   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.529543   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:32:58.529547   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:32:58.529592   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:32:58.574170   71168 cri.go:89] found id: ""
	I0401 19:32:58.574197   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.574205   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:32:58.574211   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:32:58.574258   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:32:58.615379   71168 cri.go:89] found id: ""
	I0401 19:32:58.615405   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.615414   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:32:58.615419   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:32:58.615468   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:32:58.655496   71168 cri.go:89] found id: ""
	I0401 19:32:58.655523   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.655534   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:32:58.655542   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:32:58.655593   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:32:58.697199   71168 cri.go:89] found id: ""
	I0401 19:32:58.697229   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.697238   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:32:58.697246   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:32:58.697312   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:32:58.735618   71168 cri.go:89] found id: ""
	I0401 19:32:58.735643   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.735651   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:32:58.735656   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:32:58.735701   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:32:58.780583   71168 cri.go:89] found id: ""
	I0401 19:32:58.780613   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.780624   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:32:58.780635   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:32:58.780649   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:32:58.829717   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:32:58.829743   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:32:58.844836   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:32:58.844866   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:32:58.923138   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:32:58.923157   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:32:58.923172   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:32:58.993680   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:32:58.993713   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:01.538920   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:01.556943   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:01.557017   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:01.608397   71168 cri.go:89] found id: ""
	I0401 19:33:01.608417   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.608425   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:01.608430   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:01.608490   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:01.666573   71168 cri.go:89] found id: ""
	I0401 19:33:01.666599   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.666609   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:01.666615   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:01.666674   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:01.726308   71168 cri.go:89] found id: ""
	I0401 19:33:01.726331   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.726341   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:01.726347   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:01.726412   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:01.773095   71168 cri.go:89] found id: ""
	I0401 19:33:01.773118   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.773125   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:01.773131   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:01.773189   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:01.813011   71168 cri.go:89] found id: ""
	I0401 19:33:01.813034   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.813042   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:01.813048   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:01.813096   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:01.859124   71168 cri.go:89] found id: ""
	I0401 19:33:01.859151   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.859161   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:01.859169   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:01.859228   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:01.904491   71168 cri.go:89] found id: ""
	I0401 19:33:01.904519   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.904530   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:01.904537   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:01.904596   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:01.946768   71168 cri.go:89] found id: ""
	I0401 19:33:01.946794   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.946804   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:01.946815   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:01.946829   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:02.026315   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:02.026362   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:02.072861   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:02.072893   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:02.132064   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:02.132105   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:02.151545   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:02.151575   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:02.234059   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:02.325806   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:04.327258   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:03.215901   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:33:03.215933   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:33:03.215947   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:03.264913   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:33:03.264946   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:33:03.264961   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:03.272548   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:33:03.272580   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:33:03.726254   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:03.731022   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:03.731050   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:04.225595   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:04.237757   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:04.237783   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:04.725330   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:04.734019   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:04.734047   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:05.225303   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:05.242774   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:05.242811   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:05.726350   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:05.730775   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:05.730838   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:06.225345   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:06.229749   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:06.229793   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:06.725687   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:06.730607   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:06.730640   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
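The 500s above come from minikube repeatedly probing the apiserver's aggregated health endpoint; each [+]/[-] line is one named check, and the rbac and apiservice-discovery post-start hooks are the ones still failing. A minimal sketch of the same probe, run by hand from inside the node, assuming only the endpoint address taken from this log:

# Probe the verbose healthz endpoint the way api_server.go does in the loop above.
# 192.168.72.119:8443 is the apiserver address from this log; run it via `minikube ssh`.
curl -sk "https://192.168.72.119:8443/healthz?verbose"
# While bootstrap hooks are still failing this returns HTTP 500 and lists every
# check as [+] ok / [-] failed, matching the output captured above. If anonymous
# access is refused (plausible while poststarthook/rbac/bootstrap-roles has not
# completed), add a client certificate, e.g. --cert/--key from
# /var/lib/minikube/certs (path assumed from the usual minikube layout, not shown in this log).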
	I0401 19:33:04.112109   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:06.606160   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:04.734559   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:04.755071   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:04.755130   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:04.798316   71168 cri.go:89] found id: ""
	I0401 19:33:04.798345   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.798358   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:04.798366   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:04.798426   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:04.840011   71168 cri.go:89] found id: ""
	I0401 19:33:04.840032   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.840043   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:04.840050   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:04.840106   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:04.883686   71168 cri.go:89] found id: ""
	I0401 19:33:04.883713   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.883725   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:04.883733   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:04.883795   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:04.933810   71168 cri.go:89] found id: ""
	I0401 19:33:04.933844   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.933855   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:04.933863   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:04.933925   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:04.983118   71168 cri.go:89] found id: ""
	I0401 19:33:04.983139   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.983146   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:04.983151   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:04.983207   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:05.036146   71168 cri.go:89] found id: ""
	I0401 19:33:05.036169   71168 logs.go:276] 0 containers: []
	W0401 19:33:05.036179   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:05.036186   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:05.036242   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:05.086269   71168 cri.go:89] found id: ""
	I0401 19:33:05.086296   71168 logs.go:276] 0 containers: []
	W0401 19:33:05.086308   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:05.086315   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:05.086378   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:05.140893   71168 cri.go:89] found id: ""
	I0401 19:33:05.140914   71168 logs.go:276] 0 containers: []
	W0401 19:33:05.140922   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:05.140931   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:05.140946   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:05.161222   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:05.161249   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:05.262254   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:05.262276   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:05.262289   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:05.352880   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:05.352908   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:05.400720   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:05.400748   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
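When no control-plane containers are found, the harness falls back to the diagnostic sweep above: a per-component crictl listing, then dmesg, the node description, and the CRI-O and kubelet journals. The same sweep can be reproduced by hand on the node; every command below is lifted from the ssh_runner lines in this log:

# Look for each control-plane container; empty output means "no container found".
for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
  echo "== $c =="
  sudo crictl ps -a --quiet --name="$c"
done
# Gather the same logs the harness collects.
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
sudo journalctl -u crio -n 400
sudo journalctl -u kubelet -n 400
sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
  --kubeconfig=/var/lib/minikube/kubeconfig   # refused while the apiserver is down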
	I0401 19:33:07.954227   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:07.225774   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:07.230656   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:07.230684   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:07.726299   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:07.731793   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:07.731830   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:08.225362   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:08.229716   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:08.229755   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:08.725315   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:08.733428   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 200:
	ok
	I0401 19:33:08.739761   70284 api_server.go:141] control plane version: v1.30.0-rc.0
	I0401 19:33:08.739788   70284 api_server.go:131] duration metric: took 45.014537527s to wait for apiserver health ...
	I0401 19:33:08.739796   70284 cni.go:84] Creating CNI manager for ""
	I0401 19:33:08.739802   70284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:33:08.741701   70284 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:33:06.825165   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:08.829987   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:11.327172   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:08.743011   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:33:08.758184   70284 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
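The two lines above create /etc/cni/net.d and write a 457-byte bridge CNI config as 1-k8s.conflist. The file contents are not captured in this log, so the snippet below is only an illustrative bridge conflist of that general shape (plugin list and subnet are assumptions), not the exact bytes minikube generates:

sudo mkdir -p /etc/cni/net.d
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF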
	I0401 19:33:08.778975   70284 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:33:08.789725   70284 system_pods.go:59] 8 kube-system pods found
	I0401 19:33:08.789763   70284 system_pods.go:61] "coredns-7db6d8ff4d-gdml5" [039c8887-dff0-40e5-b8b5-00ef2f4a21cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:33:08.789771   70284 system_pods.go:61] "etcd-no-preload-472858" [09086659-e20f-40da-b01f-3690e110ffeb] Running
	I0401 19:33:08.789781   70284 system_pods.go:61] "kube-apiserver-no-preload-472858" [5139434c-3d23-4736-86ad-28253c89f7da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0401 19:33:08.789794   70284 system_pods.go:61] "kube-controller-manager-no-preload-472858" [965d600a-612e-4625-b883-7105f9166503] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0401 19:33:08.789806   70284 system_pods.go:61] "kube-proxy-7c22p" [903412f5-252c-41f3-81ac-1ae47522b403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:33:08.789820   70284 system_pods.go:61] "kube-scheduler-no-preload-472858" [936981be-fc5e-4865-811c-936fab59f37b] Running
	I0401 19:33:08.789832   70284 system_pods.go:61] "metrics-server-569cc877fc-wlr7k" [14010e9a-9662-46c9-bc46-cc6d19c0cddf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:33:08.789839   70284 system_pods.go:61] "storage-provisioner" [2e5d9f78-e74c-4b3b-8878-e4bd8ce34108] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:33:08.789861   70284 system_pods.go:74] duration metric: took 10.868458ms to wait for pod list to return data ...
	I0401 19:33:08.789874   70284 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:33:08.793853   70284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:33:08.793883   70284 node_conditions.go:123] node cpu capacity is 2
	I0401 19:33:08.793897   70284 node_conditions.go:105] duration metric: took 4.016996ms to run NodePressure ...
	I0401 19:33:08.793916   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:33:09.081698   70284 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0401 19:33:09.085681   70284 kubeadm.go:733] kubelet initialised
	I0401 19:33:09.085699   70284 kubeadm.go:734] duration metric: took 3.976973ms waiting for restarted kubelet to initialise ...
	I0401 19:33:09.085705   70284 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:33:09.090647   70284 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:11.102738   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
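The pod_ready lines above and below are minikube's internal readiness poll (up to 4m0s per system-critical pod). An equivalent manual check, assuming the kubectl context matches the profile name visible in the pod names (no-preload-472858), would be:

# Hand-run equivalent of the pod_ready poll; the context name is an assumption
# derived from the node/profile name in this log.
kubectl --context no-preload-472858 -n kube-system get pod coredns-7db6d8ff4d-gdml5 -o wide
kubectl --context no-preload-472858 -n kube-system wait pod -l k8s-app=kube-dns \
  --for=condition=Ready --timeout=4m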
	I0401 19:33:08.608194   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:11.109659   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:07.970794   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:07.970850   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:08.013694   71168 cri.go:89] found id: ""
	I0401 19:33:08.013719   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.013729   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:08.013737   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:08.013810   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:08.050810   71168 cri.go:89] found id: ""
	I0401 19:33:08.050849   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.050861   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:08.050868   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:08.050932   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:08.092056   71168 cri.go:89] found id: ""
	I0401 19:33:08.092086   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.092096   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:08.092102   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:08.092157   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:08.133171   71168 cri.go:89] found id: ""
	I0401 19:33:08.133195   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.133205   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:08.133212   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:08.133271   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:08.173997   71168 cri.go:89] found id: ""
	I0401 19:33:08.174023   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.174034   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:08.174041   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:08.174102   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:08.212740   71168 cri.go:89] found id: ""
	I0401 19:33:08.212768   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.212778   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:08.212785   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:08.212831   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:08.254815   71168 cri.go:89] found id: ""
	I0401 19:33:08.254837   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.254847   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:08.254854   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:08.254909   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:08.295347   71168 cri.go:89] found id: ""
	I0401 19:33:08.295375   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.295382   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:08.295390   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:08.295402   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:08.311574   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:08.311600   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:08.405437   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:08.405455   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:08.405470   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:08.483687   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:08.483722   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:08.526132   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:08.526158   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:11.076590   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:11.093846   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:11.093983   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:11.146046   71168 cri.go:89] found id: ""
	I0401 19:33:11.146073   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.146083   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:11.146088   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:11.146146   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:11.193751   71168 cri.go:89] found id: ""
	I0401 19:33:11.193782   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.193793   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:11.193801   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:11.193873   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:11.242150   71168 cri.go:89] found id: ""
	I0401 19:33:11.242178   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.242189   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:11.242197   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:11.242271   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:11.294063   71168 cri.go:89] found id: ""
	I0401 19:33:11.294092   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.294103   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:11.294110   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:11.294175   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:11.334764   71168 cri.go:89] found id: ""
	I0401 19:33:11.334784   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.334791   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:11.334797   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:11.334846   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:11.372770   71168 cri.go:89] found id: ""
	I0401 19:33:11.372789   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.372795   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:11.372806   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:11.372871   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:11.413233   71168 cri.go:89] found id: ""
	I0401 19:33:11.413261   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.413271   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:11.413278   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:11.413337   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:11.456044   71168 cri.go:89] found id: ""
	I0401 19:33:11.456073   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.456084   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:11.456093   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:11.456103   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:11.471157   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:11.471183   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:11.550489   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:11.550508   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:11.550523   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:11.635360   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:11.635389   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:11.680683   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:11.680713   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:13.827425   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:16.325563   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:13.104812   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:15.602114   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:13.607926   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:16.107219   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:14.235295   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:14.251513   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:14.251590   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:14.291688   71168 cri.go:89] found id: ""
	I0401 19:33:14.291715   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.291725   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:14.291732   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:14.291792   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:14.332030   71168 cri.go:89] found id: ""
	I0401 19:33:14.332051   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.332060   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:14.332068   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:14.332132   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:14.372098   71168 cri.go:89] found id: ""
	I0401 19:33:14.372122   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.372130   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:14.372137   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:14.372183   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:14.410529   71168 cri.go:89] found id: ""
	I0401 19:33:14.410554   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.410563   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:14.410570   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:14.410624   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:14.451198   71168 cri.go:89] found id: ""
	I0401 19:33:14.451226   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.451238   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:14.451246   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:14.451306   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:14.494588   71168 cri.go:89] found id: ""
	I0401 19:33:14.494616   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.494627   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:14.494635   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:14.494689   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:14.537561   71168 cri.go:89] found id: ""
	I0401 19:33:14.537583   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.537590   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:14.537597   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:14.537674   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:14.580624   71168 cri.go:89] found id: ""
	I0401 19:33:14.580651   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.580662   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:14.580672   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:14.580688   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:14.635769   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:14.635798   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:14.650275   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:14.650304   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:14.742355   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:14.742378   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:14.742394   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:14.827839   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:14.827869   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:17.373408   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:17.390110   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:17.390185   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:17.432355   71168 cri.go:89] found id: ""
	I0401 19:33:17.432384   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.432396   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:17.432409   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:17.432471   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:17.476458   71168 cri.go:89] found id: ""
	I0401 19:33:17.476484   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.476495   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:17.476502   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:17.476587   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:17.519657   71168 cri.go:89] found id: ""
	I0401 19:33:17.519686   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.519694   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:17.519699   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:17.519751   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:17.559962   71168 cri.go:89] found id: ""
	I0401 19:33:17.559985   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.559992   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:17.559997   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:17.560054   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:17.608924   71168 cri.go:89] found id: ""
	I0401 19:33:17.608995   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.609009   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:17.609016   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:17.609075   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:17.648371   71168 cri.go:89] found id: ""
	I0401 19:33:17.648394   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.648401   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:17.648406   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:17.648462   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:17.689217   71168 cri.go:89] found id: ""
	I0401 19:33:17.689239   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.689246   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:17.689252   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:17.689312   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:17.741738   71168 cri.go:89] found id: ""
	I0401 19:33:17.741768   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.741779   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:17.741790   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:17.741805   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:17.839857   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:17.839887   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:17.888684   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:17.888716   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:17.944268   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:17.944298   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:17.959305   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:17.959334   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 19:33:18.327388   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:20.826627   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:18.100065   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:20.100714   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:18.107770   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:20.108880   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	W0401 19:33:18.040820   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:20.541980   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:20.558198   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:20.558270   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:20.596329   71168 cri.go:89] found id: ""
	I0401 19:33:20.596357   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.596366   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:20.596373   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:20.596431   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:20.638611   71168 cri.go:89] found id: ""
	I0401 19:33:20.638639   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.638664   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:20.638672   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:20.638729   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:20.677984   71168 cri.go:89] found id: ""
	I0401 19:33:20.678014   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.678024   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:20.678032   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:20.678080   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:20.718491   71168 cri.go:89] found id: ""
	I0401 19:33:20.718520   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.718530   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:20.718537   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:20.718597   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:20.772147   71168 cri.go:89] found id: ""
	I0401 19:33:20.772174   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.772185   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:20.772199   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:20.772258   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:20.823339   71168 cri.go:89] found id: ""
	I0401 19:33:20.823361   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.823372   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:20.823380   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:20.823463   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:20.884081   71168 cri.go:89] found id: ""
	I0401 19:33:20.884106   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.884117   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:20.884124   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:20.884185   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:20.931679   71168 cri.go:89] found id: ""
	I0401 19:33:20.931703   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.931713   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:20.931722   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:20.931736   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:21.016766   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:21.016797   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:21.067600   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:21.067632   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:21.136989   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:21.137045   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:21.152673   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:21.152706   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:21.250186   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:23.325222   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:25.326919   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:22.597922   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:24.602701   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:22.606659   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:24.606811   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:26.608185   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:23.750565   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:23.768458   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:23.768534   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:23.814489   71168 cri.go:89] found id: ""
	I0401 19:33:23.814534   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.814555   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:23.814565   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:23.814632   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:23.854954   71168 cri.go:89] found id: ""
	I0401 19:33:23.854981   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.854989   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:23.854995   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:23.855060   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:23.896115   71168 cri.go:89] found id: ""
	I0401 19:33:23.896148   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.896159   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:23.896169   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:23.896231   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:23.941300   71168 cri.go:89] found id: ""
	I0401 19:33:23.941324   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.941337   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:23.941344   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:23.941390   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:23.983955   71168 cri.go:89] found id: ""
	I0401 19:33:23.983982   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.983991   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:23.983997   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:23.984056   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:24.020756   71168 cri.go:89] found id: ""
	I0401 19:33:24.020777   71168 logs.go:276] 0 containers: []
	W0401 19:33:24.020784   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:24.020789   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:24.020835   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:24.063426   71168 cri.go:89] found id: ""
	I0401 19:33:24.063454   71168 logs.go:276] 0 containers: []
	W0401 19:33:24.063462   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:24.063467   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:24.063529   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:24.110924   71168 cri.go:89] found id: ""
	I0401 19:33:24.110945   71168 logs.go:276] 0 containers: []
	W0401 19:33:24.110952   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:24.110960   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:24.110969   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:24.179200   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:24.179240   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:24.194880   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:24.194909   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:24.280555   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:24.280588   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:24.280603   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:24.359502   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:24.359534   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:26.909147   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:26.925961   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:26.926028   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:26.969502   71168 cri.go:89] found id: ""
	I0401 19:33:26.969525   71168 logs.go:276] 0 containers: []
	W0401 19:33:26.969536   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:26.969543   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:26.969604   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:27.015205   71168 cri.go:89] found id: ""
	I0401 19:33:27.015232   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.015241   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:27.015246   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:27.015296   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:27.055943   71168 cri.go:89] found id: ""
	I0401 19:33:27.055968   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.055977   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:27.055983   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:27.056039   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:27.095447   71168 cri.go:89] found id: ""
	I0401 19:33:27.095474   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.095485   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:27.095497   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:27.095558   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:27.137912   71168 cri.go:89] found id: ""
	I0401 19:33:27.137941   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.137948   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:27.137954   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:27.138008   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:27.183303   71168 cri.go:89] found id: ""
	I0401 19:33:27.183325   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.183335   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:27.183344   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:27.183403   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:27.225780   71168 cri.go:89] found id: ""
	I0401 19:33:27.225804   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.225814   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:27.225822   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:27.225880   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:27.268136   71168 cri.go:89] found id: ""
	I0401 19:33:27.268159   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.268168   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:27.268191   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:27.268215   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:27.325527   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:27.325557   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:27.341727   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:27.341763   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:27.432369   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:27.432389   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:27.432403   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:27.523104   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:27.523135   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:27.826804   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:30.326279   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:27.099509   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:29.597830   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:31.598325   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:29.107400   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:31.107514   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:30.066147   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:30.079999   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:30.080062   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:30.121887   71168 cri.go:89] found id: ""
	I0401 19:33:30.121911   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.121920   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:30.121929   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:30.121986   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:30.163939   71168 cri.go:89] found id: ""
	I0401 19:33:30.163967   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.163978   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:30.163986   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:30.164051   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:30.203924   71168 cri.go:89] found id: ""
	I0401 19:33:30.203965   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.203977   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:30.203985   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:30.204048   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:30.243771   71168 cri.go:89] found id: ""
	I0401 19:33:30.243798   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.243809   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:30.243816   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:30.243888   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:30.284039   71168 cri.go:89] found id: ""
	I0401 19:33:30.284066   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.284074   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:30.284079   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:30.284127   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:30.327549   71168 cri.go:89] found id: ""
	I0401 19:33:30.327570   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.327577   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:30.327583   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:30.327630   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:30.365258   71168 cri.go:89] found id: ""
	I0401 19:33:30.365281   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.365291   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:30.365297   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:30.365352   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:30.405959   71168 cri.go:89] found id: ""
	I0401 19:33:30.405984   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.405992   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:30.405999   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:30.406011   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:30.480668   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:30.480692   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:30.480706   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:30.566042   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:30.566077   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:30.629250   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:30.629285   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:30.682185   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:30.682213   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:32.824844   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:34.826598   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:33.600555   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:36.100194   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:33.608315   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:36.106573   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:33.199466   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:33.213557   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:33.213630   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:33.255038   71168 cri.go:89] found id: ""
	I0401 19:33:33.255062   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.255072   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:33.255079   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:33.255143   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:33.297724   71168 cri.go:89] found id: ""
	I0401 19:33:33.297751   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.297761   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:33.297767   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:33.297836   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:33.340694   71168 cri.go:89] found id: ""
	I0401 19:33:33.340718   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.340727   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:33.340735   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:33.340794   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:33.388857   71168 cri.go:89] found id: ""
	I0401 19:33:33.388883   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.388891   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:33.388896   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:33.388940   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:33.430875   71168 cri.go:89] found id: ""
	I0401 19:33:33.430899   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.430906   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:33.430911   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:33.430966   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:33.479877   71168 cri.go:89] found id: ""
	I0401 19:33:33.479905   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.479917   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:33.479923   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:33.479968   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:33.522635   71168 cri.go:89] found id: ""
	I0401 19:33:33.522662   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.522672   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:33.522680   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:33.522737   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:33.560497   71168 cri.go:89] found id: ""
	I0401 19:33:33.560519   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.560527   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:33.560534   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:33.560549   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:33.612141   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:33.612170   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:33.665142   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:33.665170   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:33.681076   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:33.681100   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:33.755938   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:33.755966   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:33.755983   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:36.341957   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:36.359519   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:36.359586   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:36.416339   71168 cri.go:89] found id: ""
	I0401 19:33:36.416362   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.416373   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:36.416381   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:36.416442   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:36.473883   71168 cri.go:89] found id: ""
	I0401 19:33:36.473906   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.473918   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:36.473925   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:36.473988   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:36.521532   71168 cri.go:89] found id: ""
	I0401 19:33:36.521558   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.521568   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:36.521575   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:36.521639   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:36.563420   71168 cri.go:89] found id: ""
	I0401 19:33:36.563446   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.563454   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:36.563459   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:36.563520   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:36.605658   71168 cri.go:89] found id: ""
	I0401 19:33:36.605678   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.605689   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:36.605697   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:36.605759   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:36.645611   71168 cri.go:89] found id: ""
	I0401 19:33:36.645631   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.645638   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:36.645656   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:36.645715   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:36.685994   71168 cri.go:89] found id: ""
	I0401 19:33:36.686022   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.686033   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:36.686041   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:36.686099   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:36.725573   71168 cri.go:89] found id: ""
	I0401 19:33:36.725598   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.725608   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:36.725618   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:36.725630   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:36.778854   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:36.778885   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:36.795003   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:36.795036   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:36.872648   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:36.872666   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:36.872678   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:36.956648   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:36.956683   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:36.827745   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:38.830544   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:41.326012   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:38.597991   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:41.097044   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:38.107961   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:40.606475   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:39.502868   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:39.519090   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:39.519161   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:39.562347   71168 cri.go:89] found id: ""
	I0401 19:33:39.562371   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.562379   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:39.562384   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:39.562442   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:39.607250   71168 cri.go:89] found id: ""
	I0401 19:33:39.607276   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.607286   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:39.607293   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:39.607343   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:39.650683   71168 cri.go:89] found id: ""
	I0401 19:33:39.650704   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.650712   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:39.650717   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:39.650764   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:39.694676   71168 cri.go:89] found id: ""
	I0401 19:33:39.694706   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.694718   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:39.694724   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:39.694783   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:39.733873   71168 cri.go:89] found id: ""
	I0401 19:33:39.733901   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.733911   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:39.733919   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:39.733980   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:39.773625   71168 cri.go:89] found id: ""
	I0401 19:33:39.773668   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.773679   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:39.773686   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:39.773735   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:39.815020   71168 cri.go:89] found id: ""
	I0401 19:33:39.815053   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.815064   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:39.815071   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:39.815134   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:39.855575   71168 cri.go:89] found id: ""
	I0401 19:33:39.855606   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.855615   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:39.855626   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:39.855641   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:39.873827   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:39.873857   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:39.948487   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:39.948507   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:39.948521   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:40.034026   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:40.034062   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:40.077798   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:40.077828   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:42.637999   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:42.654991   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:42.655063   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:42.695920   71168 cri.go:89] found id: ""
	I0401 19:33:42.695953   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.695964   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:42.695971   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:42.696030   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:42.737303   71168 cri.go:89] found id: ""
	I0401 19:33:42.737325   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.737333   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:42.737341   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:42.737393   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:42.777922   71168 cri.go:89] found id: ""
	I0401 19:33:42.777953   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.777965   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:42.777972   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:42.778036   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:42.818339   71168 cri.go:89] found id: ""
	I0401 19:33:42.818364   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.818372   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:42.818379   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:42.818435   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:42.859470   71168 cri.go:89] found id: ""
	I0401 19:33:42.859494   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.859502   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:42.859507   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:42.859556   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:42.901950   71168 cri.go:89] found id: ""
	I0401 19:33:42.901980   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.901989   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:42.901996   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:42.902063   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:42.947230   71168 cri.go:89] found id: ""
	I0401 19:33:42.947258   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.947268   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:42.947275   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:42.947351   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:43.827204   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:46.325749   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:43.098252   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:45.098316   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:42.607590   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:44.607666   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:47.107837   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:42.988997   71168 cri.go:89] found id: ""
	I0401 19:33:42.989022   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.989032   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:42.989049   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:42.989066   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:43.075323   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:43.075352   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:43.075363   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:43.164445   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:43.164479   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:43.215852   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:43.215885   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:43.271301   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:43.271334   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:45.786705   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:45.804389   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:45.804445   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:45.849838   71168 cri.go:89] found id: ""
	I0401 19:33:45.849872   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.849883   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:45.849891   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:45.849950   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:45.890603   71168 cri.go:89] found id: ""
	I0401 19:33:45.890625   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.890635   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:45.890642   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:45.890703   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:45.929189   71168 cri.go:89] found id: ""
	I0401 19:33:45.929210   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.929218   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:45.929223   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:45.929268   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:45.968266   71168 cri.go:89] found id: ""
	I0401 19:33:45.968292   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.968303   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:45.968310   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:45.968365   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:46.007114   71168 cri.go:89] found id: ""
	I0401 19:33:46.007135   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.007143   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:46.007148   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:46.007195   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:46.046067   71168 cri.go:89] found id: ""
	I0401 19:33:46.046088   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.046095   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:46.046101   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:46.046186   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:46.083604   71168 cri.go:89] found id: ""
	I0401 19:33:46.083630   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.083644   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:46.083651   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:46.083709   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:46.125435   71168 cri.go:89] found id: ""
	I0401 19:33:46.125457   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.125464   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:46.125472   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:46.125483   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:46.179060   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:46.179092   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:46.195139   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:46.195179   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:46.275876   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:46.275903   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:46.275914   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:46.365430   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:46.365465   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:48.825540   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:50.827204   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:47.099197   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:49.105260   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:51.597808   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:49.108344   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:51.607079   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:48.908390   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:48.924357   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:48.924416   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:48.969325   71168 cri.go:89] found id: ""
	I0401 19:33:48.969351   71168 logs.go:276] 0 containers: []
	W0401 19:33:48.969359   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:48.969364   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:48.969421   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:49.006702   71168 cri.go:89] found id: ""
	I0401 19:33:49.006724   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.006731   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:49.006736   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:49.006785   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:49.051196   71168 cri.go:89] found id: ""
	I0401 19:33:49.051229   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.051241   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:49.051260   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:49.051336   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:49.098123   71168 cri.go:89] found id: ""
	I0401 19:33:49.098150   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.098159   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:49.098166   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:49.098225   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:49.138203   71168 cri.go:89] found id: ""
	I0401 19:33:49.138232   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.138239   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:49.138244   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:49.138290   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:49.185441   71168 cri.go:89] found id: ""
	I0401 19:33:49.185465   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.185473   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:49.185478   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:49.185537   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:49.235649   71168 cri.go:89] found id: ""
	I0401 19:33:49.235670   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.235678   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:49.235683   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:49.235762   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:49.279638   71168 cri.go:89] found id: ""
	I0401 19:33:49.279662   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.279673   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:49.279683   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:49.279699   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:49.340761   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:49.340798   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:49.356552   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:49.356581   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:49.441110   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:49.441129   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:49.441140   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:49.523159   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:49.523189   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:52.067710   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:52.082986   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:52.083046   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:52.128510   71168 cri.go:89] found id: ""
	I0401 19:33:52.128531   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.128538   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:52.128543   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:52.128590   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:52.167767   71168 cri.go:89] found id: ""
	I0401 19:33:52.167792   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.167803   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:52.167810   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:52.167871   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:52.206384   71168 cri.go:89] found id: ""
	I0401 19:33:52.206416   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.206426   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:52.206433   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:52.206493   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:52.245277   71168 cri.go:89] found id: ""
	I0401 19:33:52.245301   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.245309   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:52.245318   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:52.245388   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:52.283925   71168 cri.go:89] found id: ""
	I0401 19:33:52.283954   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.283964   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:52.283971   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:52.284032   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:52.323944   71168 cri.go:89] found id: ""
	I0401 19:33:52.323970   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.323981   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:52.323988   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:52.324045   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:52.364853   71168 cri.go:89] found id: ""
	I0401 19:33:52.364882   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.364893   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:52.364901   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:52.364958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:52.404136   71168 cri.go:89] found id: ""
	I0401 19:33:52.404158   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.404165   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:52.404173   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:52.404184   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:52.459097   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:52.459129   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:52.474392   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:52.474417   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:52.551817   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:52.551843   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:52.551860   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:52.650710   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:52.650750   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:53.326050   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:55.327326   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:52.607062   70284 pod_ready.go:92] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.607082   70284 pod_ready.go:81] duration metric: took 43.516413537s for pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.607091   70284 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.628695   70284 pod_ready.go:92] pod "etcd-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.628725   70284 pod_ready.go:81] duration metric: took 21.625468ms for pod "etcd-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.628739   70284 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.643017   70284 pod_ready.go:92] pod "kube-apiserver-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.643044   70284 pod_ready.go:81] duration metric: took 14.296056ms for pod "kube-apiserver-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.643058   70284 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.649063   70284 pod_ready.go:92] pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.649091   70284 pod_ready.go:81] duration metric: took 6.024238ms for pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.649105   70284 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7c22p" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.654806   70284 pod_ready.go:92] pod "kube-proxy-7c22p" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.654829   70284 pod_ready.go:81] duration metric: took 5.709865ms for pod "kube-proxy-7c22p" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.654840   70284 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.997116   70284 pod_ready.go:92] pod "kube-scheduler-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.997139   70284 pod_ready.go:81] duration metric: took 342.291727ms for pod "kube-scheduler-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.997148   70284 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:55.004130   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:53.608064   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:56.106148   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:55.205689   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:55.222840   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:55.222901   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:55.263783   71168 cri.go:89] found id: ""
	I0401 19:33:55.263813   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.263820   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:55.263828   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:55.263883   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:55.300788   71168 cri.go:89] found id: ""
	I0401 19:33:55.300818   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.300826   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:55.300834   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:55.300888   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:55.343189   71168 cri.go:89] found id: ""
	I0401 19:33:55.343215   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.343223   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:55.343229   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:55.343286   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:55.387560   71168 cri.go:89] found id: ""
	I0401 19:33:55.387587   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.387597   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:55.387604   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:55.387663   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:55.428078   71168 cri.go:89] found id: ""
	I0401 19:33:55.428103   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.428112   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:55.428119   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:55.428181   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:55.472696   71168 cri.go:89] found id: ""
	I0401 19:33:55.472722   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.472734   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:55.472741   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:55.472797   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:55.518071   71168 cri.go:89] found id: ""
	I0401 19:33:55.518115   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.518126   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:55.518136   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:55.518201   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:55.555697   71168 cri.go:89] found id: ""
	I0401 19:33:55.555717   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.555724   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:55.555732   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:55.555747   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:55.637462   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:55.637492   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:55.682353   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:55.682380   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:55.735451   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:55.735484   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:55.750928   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:55.750954   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:55.824610   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:57.328228   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:59.826213   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:57.005395   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:59.505575   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:01.506107   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:58.106643   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:00.606864   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:58.325742   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:58.341022   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:58.341092   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:58.380910   71168 cri.go:89] found id: ""
	I0401 19:33:58.380932   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.380940   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:58.380946   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:58.380990   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:58.420387   71168 cri.go:89] found id: ""
	I0401 19:33:58.420413   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.420425   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:58.420431   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:58.420479   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:58.460470   71168 cri.go:89] found id: ""
	I0401 19:33:58.460501   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.460511   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:58.460520   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:58.460580   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:58.496844   71168 cri.go:89] found id: ""
	I0401 19:33:58.496867   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.496875   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:58.496881   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:58.496930   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:58.535883   71168 cri.go:89] found id: ""
	I0401 19:33:58.535905   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.535915   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:58.535922   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:58.535979   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:58.576833   71168 cri.go:89] found id: ""
	I0401 19:33:58.576855   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.576863   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:58.576869   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:58.576913   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:58.615057   71168 cri.go:89] found id: ""
	I0401 19:33:58.615081   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.615091   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:58.615098   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:58.615156   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:58.657982   71168 cri.go:89] found id: ""
	I0401 19:33:58.658008   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.658018   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:58.658028   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:58.658045   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:58.734579   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:58.734601   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:58.734616   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:58.821779   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:58.821819   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:58.894470   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:58.894506   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:58.949854   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:58.949884   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:01.465820   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:01.481929   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:01.481984   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:01.525371   71168 cri.go:89] found id: ""
	I0401 19:34:01.525397   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.525407   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:01.525415   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:01.525473   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:01.571106   71168 cri.go:89] found id: ""
	I0401 19:34:01.571136   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.571146   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:01.571153   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:01.571214   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:01.617666   71168 cri.go:89] found id: ""
	I0401 19:34:01.617705   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.617717   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:01.617725   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:01.617787   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:01.655286   71168 cri.go:89] found id: ""
	I0401 19:34:01.655311   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.655321   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:01.655328   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:01.655396   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:01.694911   71168 cri.go:89] found id: ""
	I0401 19:34:01.694940   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.694950   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:01.694957   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:01.695040   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:01.734970   71168 cri.go:89] found id: ""
	I0401 19:34:01.734996   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.735007   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:01.735014   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:01.735071   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:01.778846   71168 cri.go:89] found id: ""
	I0401 19:34:01.778871   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.778879   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:01.778885   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:01.778958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:01.821934   71168 cri.go:89] found id: ""
	I0401 19:34:01.821964   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.821975   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:01.821986   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:01.822002   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:01.880123   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:01.880155   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:01.895178   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:01.895200   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:01.972248   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:01.972275   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:01.972290   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:02.056663   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:02.056694   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:02.325323   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:04.326474   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:06.327583   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:04.004061   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:06.004176   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:02.608516   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:05.108477   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:04.603745   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:04.619269   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:04.619344   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:04.658089   71168 cri.go:89] found id: ""
	I0401 19:34:04.658111   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.658118   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:04.658123   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:04.658168   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:04.700596   71168 cri.go:89] found id: ""
	I0401 19:34:04.700622   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.700634   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:04.700641   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:04.700708   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:04.744960   71168 cri.go:89] found id: ""
	I0401 19:34:04.744990   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.744999   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:04.745004   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:04.745052   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:04.788239   71168 cri.go:89] found id: ""
	I0401 19:34:04.788264   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.788272   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:04.788278   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:04.788343   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:04.830788   71168 cri.go:89] found id: ""
	I0401 19:34:04.830812   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.830850   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:04.830859   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:04.830917   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:04.889784   71168 cri.go:89] found id: ""
	I0401 19:34:04.889815   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.889826   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:04.889834   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:04.889902   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:04.931969   71168 cri.go:89] found id: ""
	I0401 19:34:04.931996   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.932004   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:04.932010   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:04.932058   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:04.975668   71168 cri.go:89] found id: ""
	I0401 19:34:04.975689   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.975696   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:04.975704   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:04.975715   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:05.032212   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:05.032246   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:05.047900   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:05.047924   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:05.132371   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:05.132394   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:05.132408   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:05.222591   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:05.222623   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:07.767686   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:07.784473   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:07.784542   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:07.828460   71168 cri.go:89] found id: ""
	I0401 19:34:07.828487   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.828498   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:07.828505   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:07.828564   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:07.872760   71168 cri.go:89] found id: ""
	I0401 19:34:07.872786   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.872797   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:07.872804   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:07.872862   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:07.914241   71168 cri.go:89] found id: ""
	I0401 19:34:07.914263   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.914271   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:07.914276   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:07.914340   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:07.953757   71168 cri.go:89] found id: ""
	I0401 19:34:07.953784   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.953795   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:07.953803   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:07.953869   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:08.825113   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:10.827081   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:08.504038   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:10.508973   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:07.608037   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:10.110321   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:07.994382   71168 cri.go:89] found id: ""
	I0401 19:34:07.994401   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.994409   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:07.994414   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:07.994459   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:08.038178   71168 cri.go:89] found id: ""
	I0401 19:34:08.038202   71168 logs.go:276] 0 containers: []
	W0401 19:34:08.038213   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:08.038220   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:08.038282   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:08.077532   71168 cri.go:89] found id: ""
	I0401 19:34:08.077562   71168 logs.go:276] 0 containers: []
	W0401 19:34:08.077573   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:08.077580   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:08.077657   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:08.119825   71168 cri.go:89] found id: ""
	I0401 19:34:08.119845   71168 logs.go:276] 0 containers: []
	W0401 19:34:08.119855   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:08.119865   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:08.119878   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:08.207688   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:08.207724   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:08.253050   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:08.253085   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:08.309119   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:08.309152   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:08.325675   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:08.325704   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:08.410877   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:10.911211   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:10.925590   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:10.925657   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:10.964180   71168 cri.go:89] found id: ""
	I0401 19:34:10.964205   71168 logs.go:276] 0 containers: []
	W0401 19:34:10.964216   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:10.964224   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:10.964273   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:11.004492   71168 cri.go:89] found id: ""
	I0401 19:34:11.004515   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.004526   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:11.004533   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:11.004588   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:11.048771   71168 cri.go:89] found id: ""
	I0401 19:34:11.048792   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.048804   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:11.048810   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:11.048861   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:11.084956   71168 cri.go:89] found id: ""
	I0401 19:34:11.084982   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.084992   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:11.084999   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:11.085043   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:11.128194   71168 cri.go:89] found id: ""
	I0401 19:34:11.128218   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.128225   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:11.128230   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:11.128274   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:11.169884   71168 cri.go:89] found id: ""
	I0401 19:34:11.169908   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.169918   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:11.169925   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:11.169988   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:11.213032   71168 cri.go:89] found id: ""
	I0401 19:34:11.213066   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.213077   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:11.213084   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:11.213149   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:11.258391   71168 cri.go:89] found id: ""
	I0401 19:34:11.258414   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.258422   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:11.258429   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:11.258445   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:11.341297   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:11.341328   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:11.388628   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:11.388659   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:11.442300   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:11.442326   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:11.457531   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:11.457561   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:11.561556   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:13.324598   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:15.325464   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:13.005005   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:15.505216   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:12.607201   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:14.607580   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:17.107659   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:14.062670   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:14.077384   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:14.077449   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:14.119421   71168 cri.go:89] found id: ""
	I0401 19:34:14.119444   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.119455   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:14.119462   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:14.119518   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:14.158762   71168 cri.go:89] found id: ""
	I0401 19:34:14.158783   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.158798   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:14.158805   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:14.158867   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:14.197024   71168 cri.go:89] found id: ""
	I0401 19:34:14.197052   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.197060   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:14.197065   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:14.197115   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:14.235976   71168 cri.go:89] found id: ""
	I0401 19:34:14.236004   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.236015   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:14.236021   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:14.236085   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:14.280596   71168 cri.go:89] found id: ""
	I0401 19:34:14.280623   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.280635   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:14.280642   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:14.280703   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:14.322196   71168 cri.go:89] found id: ""
	I0401 19:34:14.322219   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.322230   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:14.322239   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:14.322298   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:14.364572   71168 cri.go:89] found id: ""
	I0401 19:34:14.364596   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.364607   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:14.364615   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:14.364662   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:14.406043   71168 cri.go:89] found id: ""
	I0401 19:34:14.406066   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.406072   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:14.406082   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:14.406097   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:14.461841   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:14.461870   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:14.479960   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:14.479990   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:14.557039   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:14.557058   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:14.557070   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:14.641945   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:14.641975   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:17.192681   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:17.207913   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:17.207964   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:17.245596   71168 cri.go:89] found id: ""
	I0401 19:34:17.245618   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.245625   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:17.245630   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:17.245701   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:17.310845   71168 cri.go:89] found id: ""
	I0401 19:34:17.310875   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.310887   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:17.310894   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:17.310958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:17.367726   71168 cri.go:89] found id: ""
	I0401 19:34:17.367753   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.367764   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:17.367770   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:17.367833   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:17.410807   71168 cri.go:89] found id: ""
	I0401 19:34:17.410834   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.410842   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:17.410847   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:17.410892   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:17.448242   71168 cri.go:89] found id: ""
	I0401 19:34:17.448268   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.448278   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:17.448285   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:17.448337   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:17.486552   71168 cri.go:89] found id: ""
	I0401 19:34:17.486580   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.486590   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:17.486595   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:17.486644   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:17.529947   71168 cri.go:89] found id: ""
	I0401 19:34:17.529975   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.529986   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:17.529993   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:17.530052   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:17.571617   71168 cri.go:89] found id: ""
	I0401 19:34:17.571640   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.571648   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:17.571656   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:17.571673   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:17.627326   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:17.627354   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:17.643409   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:17.643431   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:17.723772   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:17.723798   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:17.723811   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:17.803383   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:17.803414   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:17.325836   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:19.328447   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:17.509486   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:20.004341   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:19.606840   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:21.607646   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:20.348949   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:20.363311   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:20.363385   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:20.401558   71168 cri.go:89] found id: ""
	I0401 19:34:20.401585   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.401595   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:20.401603   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:20.401686   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:20.445979   71168 cri.go:89] found id: ""
	I0401 19:34:20.446004   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.446011   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:20.446016   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:20.446060   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:20.487819   71168 cri.go:89] found id: ""
	I0401 19:34:20.487844   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.487854   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:20.487862   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:20.487921   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:20.532107   71168 cri.go:89] found id: ""
	I0401 19:34:20.532131   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.532154   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:20.532186   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:20.532247   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:20.577727   71168 cri.go:89] found id: ""
	I0401 19:34:20.577749   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.577756   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:20.577762   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:20.577841   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:20.616774   71168 cri.go:89] found id: ""
	I0401 19:34:20.616805   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.616816   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:20.616824   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:20.616887   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:20.656122   71168 cri.go:89] found id: ""
	I0401 19:34:20.656150   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.656160   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:20.656167   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:20.656226   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:20.701249   71168 cri.go:89] found id: ""
	I0401 19:34:20.701274   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.701285   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:20.701295   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:20.701310   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:20.746979   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:20.747003   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:20.799197   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:20.799226   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:20.815771   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:20.815808   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:20.895179   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:20.895202   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:20.895218   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:21.826671   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:24.325896   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:26.326569   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:22.503727   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:24.503877   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:26.506643   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:24.107702   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:26.607285   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:23.481911   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:23.496820   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:23.496889   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:23.538292   71168 cri.go:89] found id: ""
	I0401 19:34:23.538314   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.538322   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:23.538327   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:23.538372   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:23.579171   71168 cri.go:89] found id: ""
	I0401 19:34:23.579200   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.579209   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:23.579214   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:23.579269   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:23.620377   71168 cri.go:89] found id: ""
	I0401 19:34:23.620399   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.620410   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:23.620417   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:23.620477   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:23.663309   71168 cri.go:89] found id: ""
	I0401 19:34:23.663329   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.663337   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:23.663342   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:23.663392   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:23.702724   71168 cri.go:89] found id: ""
	I0401 19:34:23.702755   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.702772   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:23.702778   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:23.702836   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:23.742797   71168 cri.go:89] found id: ""
	I0401 19:34:23.742827   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.742837   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:23.742845   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:23.742913   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:23.781299   71168 cri.go:89] found id: ""
	I0401 19:34:23.781350   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.781367   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:23.781375   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:23.781440   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:23.828244   71168 cri.go:89] found id: ""
	I0401 19:34:23.828270   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.828277   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:23.828284   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:23.828298   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:23.914758   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:23.914782   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:23.914797   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:23.993300   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:23.993332   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:24.037388   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:24.037424   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:24.090157   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:24.090198   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:26.609062   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:26.624241   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:26.624309   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:26.665813   71168 cri.go:89] found id: ""
	I0401 19:34:26.665840   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.665848   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:26.665857   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:26.665917   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:26.709571   71168 cri.go:89] found id: ""
	I0401 19:34:26.709593   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.709600   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:26.709606   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:26.709680   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:26.757286   71168 cri.go:89] found id: ""
	I0401 19:34:26.757309   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.757319   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:26.757325   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:26.757386   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:26.795715   71168 cri.go:89] found id: ""
	I0401 19:34:26.795768   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.795781   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:26.795788   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:26.795839   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:26.835985   71168 cri.go:89] found id: ""
	I0401 19:34:26.836011   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.836022   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:26.836029   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:26.836094   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:26.878890   71168 cri.go:89] found id: ""
	I0401 19:34:26.878918   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.878929   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:26.878936   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:26.878991   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:26.920161   71168 cri.go:89] found id: ""
	I0401 19:34:26.920189   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.920199   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:26.920206   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:26.920262   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:26.961597   71168 cri.go:89] found id: ""
	I0401 19:34:26.961626   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.961637   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:26.961663   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:26.961679   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:27.019814   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:27.019847   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:27.035535   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:27.035564   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:27.111755   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:27.111776   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:27.111790   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:27.194932   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:27.194964   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:28.827702   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:31.325488   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:29.005830   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:31.007294   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:29.107097   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:31.109807   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:29.738592   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:29.752851   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:29.752913   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:29.791808   71168 cri.go:89] found id: ""
	I0401 19:34:29.791863   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.791875   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:29.791883   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:29.791944   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:29.836113   71168 cri.go:89] found id: ""
	I0401 19:34:29.836132   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.836139   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:29.836144   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:29.836200   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:29.879005   71168 cri.go:89] found id: ""
	I0401 19:34:29.879039   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.879050   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:29.879059   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:29.879122   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:29.919349   71168 cri.go:89] found id: ""
	I0401 19:34:29.919383   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.919394   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:29.919400   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:29.919454   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:29.957252   71168 cri.go:89] found id: ""
	I0401 19:34:29.957275   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.957287   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:29.957294   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:29.957354   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:30.003220   71168 cri.go:89] found id: ""
	I0401 19:34:30.003245   71168 logs.go:276] 0 containers: []
	W0401 19:34:30.003256   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:30.003263   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:30.003311   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:30.043873   71168 cri.go:89] found id: ""
	I0401 19:34:30.043900   71168 logs.go:276] 0 containers: []
	W0401 19:34:30.043921   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:30.043928   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:30.043989   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:30.082215   71168 cri.go:89] found id: ""
	I0401 19:34:30.082242   71168 logs.go:276] 0 containers: []
	W0401 19:34:30.082253   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:30.082263   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:30.082277   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:30.098676   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:30.098701   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:30.180857   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:30.180879   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:30.180897   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:30.269982   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:30.270016   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:30.317933   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:30.317967   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:32.874312   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:32.888687   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:32.888742   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:32.926222   71168 cri.go:89] found id: ""
	I0401 19:34:32.926244   71168 logs.go:276] 0 containers: []
	W0401 19:34:32.926252   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:32.926257   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:32.926307   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:32.964838   71168 cri.go:89] found id: ""
	I0401 19:34:32.964858   71168 logs.go:276] 0 containers: []
	W0401 19:34:32.964865   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:32.964870   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:32.964914   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:33.327670   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:35.826387   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:33.504338   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:36.005240   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:33.606596   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:35.607014   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:33.006903   71168 cri.go:89] found id: ""
	I0401 19:34:33.006920   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.006927   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:33.006933   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:33.006983   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:33.045663   71168 cri.go:89] found id: ""
	I0401 19:34:33.045691   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.045701   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:33.045709   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:33.045770   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:33.086262   71168 cri.go:89] found id: ""
	I0401 19:34:33.086290   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.086298   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:33.086303   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:33.086368   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:33.128302   71168 cri.go:89] found id: ""
	I0401 19:34:33.128327   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.128335   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:33.128341   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:33.128402   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:33.171155   71168 cri.go:89] found id: ""
	I0401 19:34:33.171189   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.171200   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:33.171207   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:33.171270   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:33.210793   71168 cri.go:89] found id: ""
	I0401 19:34:33.210820   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.210838   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:33.210848   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:33.210870   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:33.295035   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:33.295072   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:33.345381   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:33.345417   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:33.401082   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:33.401120   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:33.417029   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:33.417055   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:33.497027   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:35.997632   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:36.013106   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:36.013161   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:36.053013   71168 cri.go:89] found id: ""
	I0401 19:34:36.053040   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.053050   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:36.053059   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:36.053116   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:36.092268   71168 cri.go:89] found id: ""
	I0401 19:34:36.092297   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.092308   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:36.092315   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:36.092389   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:36.131347   71168 cri.go:89] found id: ""
	I0401 19:34:36.131391   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.131402   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:36.131409   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:36.131468   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:36.171402   71168 cri.go:89] found id: ""
	I0401 19:34:36.171432   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.171443   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:36.171449   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:36.171511   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:36.211239   71168 cri.go:89] found id: ""
	I0401 19:34:36.211272   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.211283   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:36.211290   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:36.211354   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:36.251246   71168 cri.go:89] found id: ""
	I0401 19:34:36.251275   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.251287   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:36.251294   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:36.251354   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:36.293140   71168 cri.go:89] found id: ""
	I0401 19:34:36.293162   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.293169   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:36.293174   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:36.293231   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:36.330281   71168 cri.go:89] found id: ""
	I0401 19:34:36.330308   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.330318   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:36.330328   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:36.330342   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:36.421753   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:36.421790   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:36.467555   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:36.467581   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:36.524747   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:36.524778   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:36.540946   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:36.540976   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:36.622452   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:38.326341   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:40.327267   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:38.503641   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:40.504555   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:38.107732   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:40.608535   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:39.122969   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:39.139092   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:39.139157   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:39.177337   71168 cri.go:89] found id: ""
	I0401 19:34:39.177368   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.177379   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:39.177387   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:39.177449   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:39.216471   71168 cri.go:89] found id: ""
	I0401 19:34:39.216498   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.216507   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:39.216512   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:39.216558   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:39.255526   71168 cri.go:89] found id: ""
	I0401 19:34:39.255550   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.255557   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:39.255563   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:39.255623   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:39.294682   71168 cri.go:89] found id: ""
	I0401 19:34:39.294711   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.294723   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:39.294735   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:39.294798   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:39.337416   71168 cri.go:89] found id: ""
	I0401 19:34:39.337437   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.337444   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:39.337449   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:39.337510   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:39.384560   71168 cri.go:89] found id: ""
	I0401 19:34:39.384586   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.384598   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:39.384608   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:39.384671   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:39.421459   71168 cri.go:89] found id: ""
	I0401 19:34:39.421480   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.421488   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:39.421493   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:39.421540   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:39.460221   71168 cri.go:89] found id: ""
	I0401 19:34:39.460246   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.460256   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:39.460264   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:39.460275   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:39.543800   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:39.543835   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:39.591012   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:39.591038   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:39.645994   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:39.646025   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:39.662223   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:39.662250   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:39.741574   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:42.242541   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:42.256933   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:42.257006   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:42.294268   71168 cri.go:89] found id: ""
	I0401 19:34:42.294297   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.294308   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:42.294315   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:42.294370   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:42.331978   71168 cri.go:89] found id: ""
	I0401 19:34:42.331999   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.332005   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:42.332013   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:42.332078   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:42.369858   71168 cri.go:89] found id: ""
	I0401 19:34:42.369885   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.369895   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:42.369903   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:42.369989   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:42.412688   71168 cri.go:89] found id: ""
	I0401 19:34:42.412708   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.412715   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:42.412720   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:42.412776   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:42.449180   71168 cri.go:89] found id: ""
	I0401 19:34:42.449209   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.449217   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:42.449225   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:42.449283   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:42.488582   71168 cri.go:89] found id: ""
	I0401 19:34:42.488606   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.488613   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:42.488618   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:42.488665   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:42.527883   71168 cri.go:89] found id: ""
	I0401 19:34:42.527915   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.527924   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:42.527931   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:42.527993   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:42.564372   71168 cri.go:89] found id: ""
	I0401 19:34:42.564394   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.564401   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:42.564408   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:42.564419   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:42.646940   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:42.646974   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:42.689323   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:42.689354   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:42.744996   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:42.745024   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:42.761404   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:42.761429   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:42.836643   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:42.825895   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:45.325856   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:42.504642   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:45.004315   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:43.110114   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:45.607093   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:45.337809   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:45.352936   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:45.353029   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:45.395073   71168 cri.go:89] found id: ""
	I0401 19:34:45.395098   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.395106   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:45.395112   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:45.395160   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:45.433537   71168 cri.go:89] found id: ""
	I0401 19:34:45.433567   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.433578   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:45.433586   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:45.433658   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:45.477108   71168 cri.go:89] found id: ""
	I0401 19:34:45.477138   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.477150   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:45.477157   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:45.477217   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:45.520350   71168 cri.go:89] found id: ""
	I0401 19:34:45.520389   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.520401   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:45.520408   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:45.520466   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:45.562871   71168 cri.go:89] found id: ""
	I0401 19:34:45.562901   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.562911   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:45.562918   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:45.562988   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:45.619214   71168 cri.go:89] found id: ""
	I0401 19:34:45.619237   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.619248   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:45.619255   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:45.619317   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:45.664361   71168 cri.go:89] found id: ""
	I0401 19:34:45.664387   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.664398   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:45.664405   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:45.664463   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:45.701087   71168 cri.go:89] found id: ""
	I0401 19:34:45.701110   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.701120   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:45.701128   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:45.701139   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:45.716839   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:45.716863   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:45.794609   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:45.794630   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:45.794642   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:45.883428   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:45.883464   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:45.934342   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:45.934374   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:47.825597   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:50.326528   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:47.505036   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:49.505287   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:51.505884   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:47.609038   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:50.106705   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:52.107802   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:48.492128   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:48.508674   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:48.508746   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:48.549522   71168 cri.go:89] found id: ""
	I0401 19:34:48.549545   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.549555   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:48.549561   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:48.549619   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:48.587014   71168 cri.go:89] found id: ""
	I0401 19:34:48.587037   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.587045   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:48.587051   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:48.587108   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:48.629591   71168 cri.go:89] found id: ""
	I0401 19:34:48.629620   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.629630   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:48.629636   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:48.629707   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:48.669335   71168 cri.go:89] found id: ""
	I0401 19:34:48.669363   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.669383   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:48.669400   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:48.669455   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:48.708322   71168 cri.go:89] found id: ""
	I0401 19:34:48.708350   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.708356   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:48.708362   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:48.708407   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:48.750680   71168 cri.go:89] found id: ""
	I0401 19:34:48.750708   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.750718   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:48.750726   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:48.750791   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:48.790946   71168 cri.go:89] found id: ""
	I0401 19:34:48.790974   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.790984   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:48.790998   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:48.791055   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:48.828849   71168 cri.go:89] found id: ""
	I0401 19:34:48.828871   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.828880   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:48.828889   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:48.828904   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:48.909182   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:48.909212   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:48.954285   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:48.954315   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:49.010340   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:49.010372   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:49.026493   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:49.026516   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:49.099662   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:51.599905   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:51.618094   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:51.618168   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:51.657003   71168 cri.go:89] found id: ""
	I0401 19:34:51.657028   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.657038   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:51.657046   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:51.657104   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:51.696415   71168 cri.go:89] found id: ""
	I0401 19:34:51.696441   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.696451   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:51.696456   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:51.696515   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:51.734416   71168 cri.go:89] found id: ""
	I0401 19:34:51.734445   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.734457   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:51.734465   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:51.734523   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:51.774895   71168 cri.go:89] found id: ""
	I0401 19:34:51.774918   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.774925   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:51.774931   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:51.774980   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:51.814602   71168 cri.go:89] found id: ""
	I0401 19:34:51.814623   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.814631   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:51.814637   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:51.814687   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:51.856035   71168 cri.go:89] found id: ""
	I0401 19:34:51.856061   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.856071   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:51.856078   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:51.856132   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:51.897415   71168 cri.go:89] found id: ""
	I0401 19:34:51.897440   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.897451   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:51.897457   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:51.897516   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:51.937406   71168 cri.go:89] found id: ""
	I0401 19:34:51.937428   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.937436   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:51.937443   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:51.937456   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:51.981508   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:51.981535   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:52.039956   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:52.039995   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:52.066403   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:52.066429   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:52.172509   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:52.172530   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:52.172541   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:52.827950   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:55.331369   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:54.004625   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:56.503197   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:54.607359   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:57.108257   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:54.761459   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:54.776972   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:54.777030   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:54.822945   71168 cri.go:89] found id: ""
	I0401 19:34:54.822983   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.822996   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:54.823004   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:54.823066   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:54.861602   71168 cri.go:89] found id: ""
	I0401 19:34:54.861629   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.861639   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:54.861662   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:54.861727   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:54.901283   71168 cri.go:89] found id: ""
	I0401 19:34:54.901309   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.901319   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:54.901327   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:54.901385   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:54.940071   71168 cri.go:89] found id: ""
	I0401 19:34:54.940103   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.940114   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:54.940121   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:54.940179   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:54.978447   71168 cri.go:89] found id: ""
	I0401 19:34:54.978474   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.978485   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:54.978493   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:54.978563   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:55.021786   71168 cri.go:89] found id: ""
	I0401 19:34:55.021810   71168 logs.go:276] 0 containers: []
	W0401 19:34:55.021819   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:55.021827   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:55.021886   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:55.059861   71168 cri.go:89] found id: ""
	I0401 19:34:55.059889   71168 logs.go:276] 0 containers: []
	W0401 19:34:55.059899   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:55.059907   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:55.059963   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:55.104484   71168 cri.go:89] found id: ""
	I0401 19:34:55.104516   71168 logs.go:276] 0 containers: []
	W0401 19:34:55.104527   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:55.104537   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:55.104551   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:55.152197   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:55.152221   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:55.203900   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:55.203942   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:55.221553   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:55.221580   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:55.299651   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:55.299668   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:55.299680   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:57.877382   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:57.899186   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:57.899260   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:57.948146   71168 cri.go:89] found id: ""
	I0401 19:34:57.948182   71168 logs.go:276] 0 containers: []
	W0401 19:34:57.948192   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:57.948203   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:57.948270   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:57.826282   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:59.826598   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:58.504492   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:01.003480   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:59.607646   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:02.107162   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:58.017121   71168 cri.go:89] found id: ""
	I0401 19:34:58.017150   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.017161   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:58.017168   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:58.017230   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:58.073881   71168 cri.go:89] found id: ""
	I0401 19:34:58.073905   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.073916   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:58.073923   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:58.073979   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:58.115410   71168 cri.go:89] found id: ""
	I0401 19:34:58.115435   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.115445   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:58.115452   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:58.115512   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:58.155452   71168 cri.go:89] found id: ""
	I0401 19:34:58.155481   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.155492   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:58.155500   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:58.155562   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:58.197335   71168 cri.go:89] found id: ""
	I0401 19:34:58.197376   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.197397   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:58.197407   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:58.197469   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:58.239782   71168 cri.go:89] found id: ""
	I0401 19:34:58.239808   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.239815   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:58.239820   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:58.239870   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:58.280936   71168 cri.go:89] found id: ""
	I0401 19:34:58.280961   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.280971   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:58.280982   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:58.280998   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:58.368357   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:58.368401   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:58.415104   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:58.415132   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:58.474719   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:58.474749   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:58.491004   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:58.491031   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:58.573999   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:01.074865   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:01.091751   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:01.091822   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:01.140053   71168 cri.go:89] found id: ""
	I0401 19:35:01.140079   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.140089   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:01.140096   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:01.140154   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:01.184046   71168 cri.go:89] found id: ""
	I0401 19:35:01.184078   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.184089   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:01.184096   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:01.184161   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:01.225962   71168 cri.go:89] found id: ""
	I0401 19:35:01.225989   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.225999   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:01.226006   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:01.226072   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:01.267212   71168 cri.go:89] found id: ""
	I0401 19:35:01.267234   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.267242   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:01.267247   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:01.267308   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:01.307039   71168 cri.go:89] found id: ""
	I0401 19:35:01.307066   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.307074   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:01.307080   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:01.307132   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:01.347856   71168 cri.go:89] found id: ""
	I0401 19:35:01.347886   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.347898   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:01.347905   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:01.347962   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:01.385893   71168 cri.go:89] found id: ""
	I0401 19:35:01.385923   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.385933   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:01.385940   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:01.385999   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:01.422983   71168 cri.go:89] found id: ""
	I0401 19:35:01.423012   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.423022   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:01.423033   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:01.423048   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:01.469842   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:01.469875   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:01.527536   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:01.527566   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:01.542332   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:01.542357   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:01.617252   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:01.617270   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:01.617284   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:02.325502   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:04.326603   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:06.328115   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:03.005979   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:05.504470   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:04.107681   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:06.607619   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:04.195171   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:04.211963   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:04.212015   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:04.252298   71168 cri.go:89] found id: ""
	I0401 19:35:04.252324   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.252334   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:04.252342   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:04.252396   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:04.299619   71168 cri.go:89] found id: ""
	I0401 19:35:04.299649   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.299659   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:04.299667   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:04.299725   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:04.347386   71168 cri.go:89] found id: ""
	I0401 19:35:04.347409   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.347416   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:04.347426   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:04.347473   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:04.385902   71168 cri.go:89] found id: ""
	I0401 19:35:04.385929   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.385937   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:04.385943   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:04.385993   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:04.425235   71168 cri.go:89] found id: ""
	I0401 19:35:04.425258   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.425266   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:04.425271   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:04.425325   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:04.463849   71168 cri.go:89] found id: ""
	I0401 19:35:04.463881   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.463891   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:04.463899   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:04.463974   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:04.501983   71168 cri.go:89] found id: ""
	I0401 19:35:04.502003   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.502010   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:04.502016   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:04.502072   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:04.544082   71168 cri.go:89] found id: ""
	I0401 19:35:04.544103   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.544113   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:04.544124   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:04.544141   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:04.600545   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:04.600578   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:04.617049   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:04.617075   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:04.696927   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:04.696945   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:04.696957   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:04.780024   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:04.780056   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:07.323161   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:07.339368   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:07.339432   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:07.379407   71168 cri.go:89] found id: ""
	I0401 19:35:07.379429   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.379440   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:07.379452   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:07.379497   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:07.418700   71168 cri.go:89] found id: ""
	I0401 19:35:07.418728   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.418737   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:07.418743   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:07.418788   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:07.457580   71168 cri.go:89] found id: ""
	I0401 19:35:07.457606   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.457617   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:07.457624   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:07.457696   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:07.498211   71168 cri.go:89] found id: ""
	I0401 19:35:07.498240   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.498249   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:07.498256   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:07.498318   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:07.539659   71168 cri.go:89] found id: ""
	I0401 19:35:07.539681   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.539692   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:07.539699   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:07.539759   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:07.577414   71168 cri.go:89] found id: ""
	I0401 19:35:07.577440   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.577450   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:07.577456   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:07.577520   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:07.623318   71168 cri.go:89] found id: ""
	I0401 19:35:07.623340   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.623352   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:07.623358   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:07.623416   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:07.664791   71168 cri.go:89] found id: ""
	I0401 19:35:07.664823   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.664834   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:07.664842   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:07.664854   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:07.722158   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:07.722186   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:07.737838   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:07.737876   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:07.813694   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:07.813717   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:07.813728   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:07.899698   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:07.899740   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:08.825778   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:10.825935   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:07.505933   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:10.003529   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:09.107076   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:11.108917   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:10.446184   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:10.460860   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:10.460927   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:10.505656   71168 cri.go:89] found id: ""
	I0401 19:35:10.505685   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.505692   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:10.505698   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:10.505742   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:10.547771   71168 cri.go:89] found id: ""
	I0401 19:35:10.547796   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.547814   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:10.547820   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:10.547876   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:10.584625   71168 cri.go:89] found id: ""
	I0401 19:35:10.584652   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.584664   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:10.584671   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:10.584737   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:10.625512   71168 cri.go:89] found id: ""
	I0401 19:35:10.625541   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.625552   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:10.625559   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:10.625618   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:10.664905   71168 cri.go:89] found id: ""
	I0401 19:35:10.664936   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.664949   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:10.664955   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:10.665015   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:10.703043   71168 cri.go:89] found id: ""
	I0401 19:35:10.703071   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.703082   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:10.703090   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:10.703149   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:10.747750   71168 cri.go:89] found id: ""
	I0401 19:35:10.747777   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.747790   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:10.747796   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:10.747841   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:10.792944   71168 cri.go:89] found id: ""
	I0401 19:35:10.792970   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.792980   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:10.792989   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:10.793004   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:10.854029   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:10.854058   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:10.868968   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:10.868991   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:10.940537   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:10.940564   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:10.940579   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:11.018201   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:11.018231   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:12.826117   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:14.826387   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:12.003995   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:14.503258   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:16.504686   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:13.608777   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:16.108992   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:13.562139   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:13.579370   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:13.579435   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:13.620811   71168 cri.go:89] found id: ""
	I0401 19:35:13.620838   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.620847   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:13.620859   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:13.620919   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:13.661377   71168 cri.go:89] found id: ""
	I0401 19:35:13.661408   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.661419   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:13.661427   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:13.661489   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:13.702413   71168 cri.go:89] found id: ""
	I0401 19:35:13.702436   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.702445   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:13.702453   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:13.702519   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:13.748760   71168 cri.go:89] found id: ""
	I0401 19:35:13.748788   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.748796   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:13.748803   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:13.748874   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:13.795438   71168 cri.go:89] found id: ""
	I0401 19:35:13.795460   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.795472   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:13.795479   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:13.795537   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:13.835572   71168 cri.go:89] found id: ""
	I0401 19:35:13.835601   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.835612   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:13.835619   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:13.835677   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:13.874301   71168 cri.go:89] found id: ""
	I0401 19:35:13.874327   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.874336   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:13.874342   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:13.874387   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:13.914847   71168 cri.go:89] found id: ""
	I0401 19:35:13.914876   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.914883   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:13.914891   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:13.914904   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:13.929329   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:13.929355   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:14.004332   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:14.004358   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:14.004373   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:14.084901   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:14.084935   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:14.134471   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:14.134500   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:16.693432   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:16.710258   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:16.710332   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:16.757213   71168 cri.go:89] found id: ""
	I0401 19:35:16.757243   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.757254   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:16.757261   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:16.757320   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:16.797134   71168 cri.go:89] found id: ""
	I0401 19:35:16.797174   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.797182   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:16.797188   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:16.797233   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:16.839502   71168 cri.go:89] found id: ""
	I0401 19:35:16.839530   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.839541   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:16.839549   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:16.839609   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:16.881380   71168 cri.go:89] found id: ""
	I0401 19:35:16.881406   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.881413   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:16.881419   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:16.881472   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:16.922968   71168 cri.go:89] found id: ""
	I0401 19:35:16.922991   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.923002   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:16.923009   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:16.923069   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:16.961262   71168 cri.go:89] found id: ""
	I0401 19:35:16.961290   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.961301   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:16.961310   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:16.961369   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:16.996901   71168 cri.go:89] found id: ""
	I0401 19:35:16.996929   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.996940   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:16.996947   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:16.997004   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:17.038447   71168 cri.go:89] found id: ""
	I0401 19:35:17.038473   71168 logs.go:276] 0 containers: []
	W0401 19:35:17.038481   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:17.038489   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:17.038500   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:17.079979   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:17.080013   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:17.136973   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:17.137010   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:17.153083   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:17.153108   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:17.232055   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:17.232078   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:17.232096   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:17.326246   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:19.326903   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:20.818889   70687 pod_ready.go:81] duration metric: took 4m0.000381983s for pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace to be "Ready" ...
	E0401 19:35:20.818918   70687 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace to be "Ready" (will not retry!)
	I0401 19:35:20.818938   70687 pod_ready.go:38] duration metric: took 4m5.525170808s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:35:20.818967   70687 kubeadm.go:591] duration metric: took 4m13.404699267s to restartPrimaryControlPlane
	W0401 19:35:20.819026   70687 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:35:20.819059   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:35:19.004932   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:21.504514   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:18.607067   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:20.609619   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:19.813327   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:19.830168   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:19.830229   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:19.875502   71168 cri.go:89] found id: ""
	I0401 19:35:19.875524   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.875532   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:19.875537   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:19.875591   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:19.916084   71168 cri.go:89] found id: ""
	I0401 19:35:19.916107   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.916117   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:19.916125   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:19.916188   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:19.960673   71168 cri.go:89] found id: ""
	I0401 19:35:19.960699   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.960710   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:19.960717   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:19.960796   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:19.998736   71168 cri.go:89] found id: ""
	I0401 19:35:19.998760   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.998768   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:19.998776   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:19.998840   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:20.043382   71168 cri.go:89] found id: ""
	I0401 19:35:20.043408   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.043418   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:20.043425   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:20.043492   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:20.086132   71168 cri.go:89] found id: ""
	I0401 19:35:20.086158   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.086171   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:20.086178   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:20.086239   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:20.131052   71168 cri.go:89] found id: ""
	I0401 19:35:20.131074   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.131081   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:20.131091   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:20.131151   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:20.174668   71168 cri.go:89] found id: ""
	I0401 19:35:20.174693   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.174699   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:20.174707   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:20.174718   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:20.266503   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:20.266521   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:20.266534   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:20.351555   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:20.351586   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:20.400261   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:20.400289   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:20.455149   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:20.455183   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
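The block above is one complete pass of the diagnostic loop that the profile pinned to Kubernetes v1.20.0 (process 71168) keeps repeating for the rest of this excerpt: enumerate CRI containers for every control-plane component, find none, then collect describe-nodes, CRI-O, container-status, kubelet and dmesg output. Because no kube-apiserver container exists, the describe-nodes step is refused on localhost:8443 every time. The same checks can be run by hand on the node; a sketch using the exact commands from the log:

    sudo crictl ps -a --quiet --name=kube-apiserver    # returns nothing: the apiserver container was never recreated
    sudo journalctl -u kubelet -n 400                  # kubelet unit log
    sudo journalctl -u crio -n 400                     # CRI-O unit log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    # the last command fails with "connection to the server localhost:8443 was refused" while the apiserver is down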
	I0401 19:35:23.510048   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:26.005267   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:23.109720   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:25.608633   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:22.972675   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:22.987481   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:22.987555   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:23.032429   71168 cri.go:89] found id: ""
	I0401 19:35:23.032453   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.032461   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:23.032467   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:23.032522   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:23.073286   71168 cri.go:89] found id: ""
	I0401 19:35:23.073313   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.073322   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:23.073330   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:23.073397   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:23.115424   71168 cri.go:89] found id: ""
	I0401 19:35:23.115447   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.115454   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:23.115459   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:23.115506   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:23.164883   71168 cri.go:89] found id: ""
	I0401 19:35:23.164908   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.164918   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:23.164925   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:23.164985   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:23.213617   71168 cri.go:89] found id: ""
	I0401 19:35:23.213656   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.213668   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:23.213675   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:23.213787   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:23.264846   71168 cri.go:89] found id: ""
	I0401 19:35:23.264874   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.264886   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:23.264893   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:23.264958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:23.306467   71168 cri.go:89] found id: ""
	I0401 19:35:23.306495   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.306506   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:23.306514   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:23.306566   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:23.358574   71168 cri.go:89] found id: ""
	I0401 19:35:23.358597   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.358608   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:23.358619   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:23.358634   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:23.437486   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:23.437510   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:23.437525   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:23.555307   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:23.555350   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:23.601776   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:23.601808   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:23.666654   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:23.666688   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:26.184503   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:26.199924   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:26.199997   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:26.252151   71168 cri.go:89] found id: ""
	I0401 19:35:26.252181   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.252192   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:26.252199   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:26.252266   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:26.299094   71168 cri.go:89] found id: ""
	I0401 19:35:26.299126   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.299134   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:26.299139   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:26.299194   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:26.340483   71168 cri.go:89] found id: ""
	I0401 19:35:26.340516   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.340533   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:26.340540   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:26.340599   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:26.387153   71168 cri.go:89] found id: ""
	I0401 19:35:26.387180   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.387188   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:26.387194   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:26.387261   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:26.430746   71168 cri.go:89] found id: ""
	I0401 19:35:26.430773   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.430781   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:26.430787   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:26.430854   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:26.478412   71168 cri.go:89] found id: ""
	I0401 19:35:26.478440   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.478451   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:26.478458   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:26.478523   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:26.521120   71168 cri.go:89] found id: ""
	I0401 19:35:26.521150   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.521161   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:26.521168   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:26.521229   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:26.564678   71168 cri.go:89] found id: ""
	I0401 19:35:26.564721   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.564731   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:26.564742   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:26.564757   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:26.625271   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:26.625308   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:26.640505   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:26.640529   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:26.722753   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:26.722777   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:26.722795   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:26.830507   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:26.830551   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:28.505100   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:31.004387   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:28.107396   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:30.108080   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:29.386655   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:29.401232   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:29.401308   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:29.440479   71168 cri.go:89] found id: ""
	I0401 19:35:29.440511   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.440522   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:29.440530   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:29.440590   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:29.479022   71168 cri.go:89] found id: ""
	I0401 19:35:29.479049   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.479057   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:29.479062   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:29.479119   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:29.518179   71168 cri.go:89] found id: ""
	I0401 19:35:29.518208   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.518216   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:29.518222   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:29.518281   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:29.556654   71168 cri.go:89] found id: ""
	I0401 19:35:29.556682   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.556692   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:29.556712   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:29.556772   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:29.593258   71168 cri.go:89] found id: ""
	I0401 19:35:29.593287   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.593295   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:29.593301   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:29.593349   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:29.637215   71168 cri.go:89] found id: ""
	I0401 19:35:29.637243   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.637253   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:29.637261   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:29.637321   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:29.683052   71168 cri.go:89] found id: ""
	I0401 19:35:29.683090   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.683100   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:29.683108   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:29.683164   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:29.730948   71168 cri.go:89] found id: ""
	I0401 19:35:29.730979   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.730991   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:29.731001   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:29.731014   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:29.781969   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:29.782001   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:29.800700   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:29.800729   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:29.877200   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:29.877225   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:29.877244   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:29.958110   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:29.958144   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:32.501060   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:32.519551   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:32.519619   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:32.579776   71168 cri.go:89] found id: ""
	I0401 19:35:32.579802   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.579813   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:32.579824   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:32.579886   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:32.643271   71168 cri.go:89] found id: ""
	I0401 19:35:32.643300   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.643312   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:32.643322   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:32.643387   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:32.688576   71168 cri.go:89] found id: ""
	I0401 19:35:32.688605   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.688614   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:32.688619   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:32.688678   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:32.729867   71168 cri.go:89] found id: ""
	I0401 19:35:32.729890   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.729898   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:32.729906   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:32.729962   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:32.771485   71168 cri.go:89] found id: ""
	I0401 19:35:32.771508   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.771515   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:32.771521   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:32.771574   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:32.809362   71168 cri.go:89] found id: ""
	I0401 19:35:32.809385   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.809393   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:32.809398   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:32.809458   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:32.844916   71168 cri.go:89] found id: ""
	I0401 19:35:32.844941   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.844950   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:32.844955   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:32.845000   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:32.884638   71168 cri.go:89] found id: ""
	I0401 19:35:32.884660   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.884670   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:32.884680   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:32.884695   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:32.937462   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:32.937489   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:32.952842   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:32.952871   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 19:35:33.005516   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:35.504755   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:32.608051   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:35.106708   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:37.108135   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	W0401 19:35:33.035254   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:33.035278   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:33.035294   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:33.114963   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:33.114994   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:35.662190   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:35.675960   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:35.676016   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:35.717300   71168 cri.go:89] found id: ""
	I0401 19:35:35.717329   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.717340   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:35.717347   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:35.717409   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:35.756687   71168 cri.go:89] found id: ""
	I0401 19:35:35.756713   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.756723   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:35.756730   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:35.756788   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:35.796995   71168 cri.go:89] found id: ""
	I0401 19:35:35.797017   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.797025   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:35.797030   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:35.797083   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:35.840419   71168 cri.go:89] found id: ""
	I0401 19:35:35.840444   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.840455   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:35.840462   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:35.840523   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:35.880059   71168 cri.go:89] found id: ""
	I0401 19:35:35.880093   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.880107   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:35.880113   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:35.880171   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:35.929491   71168 cri.go:89] found id: ""
	I0401 19:35:35.929515   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.929523   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:35.929530   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:35.929584   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:35.968745   71168 cri.go:89] found id: ""
	I0401 19:35:35.968771   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.968778   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:35.968784   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:35.968833   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:36.014294   71168 cri.go:89] found id: ""
	I0401 19:35:36.014318   71168 logs.go:276] 0 containers: []
	W0401 19:35:36.014328   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:36.014338   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:36.014359   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:36.068418   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:36.068450   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:36.086343   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:36.086367   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:36.172027   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:36.172053   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:36.172067   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:36.250046   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:36.250080   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:38.004007   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:40.004138   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:39.607714   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:42.107775   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:38.794261   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:38.809535   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:38.809597   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:38.849139   71168 cri.go:89] found id: ""
	I0401 19:35:38.849167   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.849176   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:38.849181   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:38.849238   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:38.886787   71168 cri.go:89] found id: ""
	I0401 19:35:38.886811   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.886821   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:38.886828   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:38.886891   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:38.923388   71168 cri.go:89] found id: ""
	I0401 19:35:38.923419   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.923431   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:38.923438   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:38.923497   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:38.959583   71168 cri.go:89] found id: ""
	I0401 19:35:38.959608   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.959619   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:38.959626   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:38.959682   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:38.998201   71168 cri.go:89] found id: ""
	I0401 19:35:38.998226   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.998233   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:38.998238   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:38.998294   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:39.039669   71168 cri.go:89] found id: ""
	I0401 19:35:39.039692   71168 logs.go:276] 0 containers: []
	W0401 19:35:39.039703   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:39.039710   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:39.039767   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:39.077331   71168 cri.go:89] found id: ""
	I0401 19:35:39.077358   71168 logs.go:276] 0 containers: []
	W0401 19:35:39.077366   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:39.077371   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:39.077423   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:39.125999   71168 cri.go:89] found id: ""
	I0401 19:35:39.126021   71168 logs.go:276] 0 containers: []
	W0401 19:35:39.126031   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:39.126041   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:39.126054   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:39.183579   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:39.183612   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:39.201200   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:39.201227   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:39.282262   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:39.282280   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:39.282291   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:39.365340   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:39.365370   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:41.914909   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:41.929243   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:41.929317   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:41.975594   71168 cri.go:89] found id: ""
	I0401 19:35:41.975622   71168 logs.go:276] 0 containers: []
	W0401 19:35:41.975632   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:41.975639   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:41.975701   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:42.023558   71168 cri.go:89] found id: ""
	I0401 19:35:42.023585   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.023596   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:42.023602   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:42.023662   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:42.074242   71168 cri.go:89] found id: ""
	I0401 19:35:42.074266   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.074276   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:42.074283   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:42.074340   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:42.123327   71168 cri.go:89] found id: ""
	I0401 19:35:42.123358   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.123370   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:42.123378   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:42.123452   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:42.168931   71168 cri.go:89] found id: ""
	I0401 19:35:42.168961   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.168972   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:42.168980   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:42.169037   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:42.211747   71168 cri.go:89] found id: ""
	I0401 19:35:42.211774   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.211784   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:42.211793   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:42.211849   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:42.251809   71168 cri.go:89] found id: ""
	I0401 19:35:42.251830   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.251841   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:42.251849   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:42.251908   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:42.293266   71168 cri.go:89] found id: ""
	I0401 19:35:42.293361   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.293377   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:42.293388   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:42.293405   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:42.364502   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:42.364553   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:42.381147   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:42.381180   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:42.464219   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:42.464238   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:42.464249   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:42.544564   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:42.544594   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:42.006061   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:44.504700   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:46.505615   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:44.606915   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:46.100004   70962 pod_ready.go:81] duration metric: took 4m0.000146584s for pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace to be "Ready" ...
	E0401 19:35:46.100029   70962 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0401 19:35:46.100044   70962 pod_ready.go:38] duration metric: took 4m10.491414096s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:35:46.100088   70962 kubeadm.go:591] duration metric: took 4m18.223285856s to restartPrimaryControlPlane
	W0401 19:35:46.100141   70962 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:35:46.100164   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
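The profile behind process 70962 reaches the same outcome roughly 25 seconds later: its metrics-server pod never turns Ready within the 4m0s WaitExtra window, so the control-plane restart is abandoned and the same kubeadm reset fallback begins. The readiness condition being polled is roughly what the following kubectl invocation would check; this is a sketch for manual reproduction only (minikube polls the pod object directly rather than shelling out to kubectl wait), assuming the bundled kubectl lives under /var/lib/minikube/binaries/v1.29.3/ as the PATH override in the log suggests, with the pod name and kubeconfig path taken from the log:

    sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system wait --for=condition=Ready pod/metrics-server-57f55c9bc5-g7mg2 --timeout=4m0s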
	I0401 19:35:45.105777   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:45.119911   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:45.119976   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:45.161871   71168 cri.go:89] found id: ""
	I0401 19:35:45.161890   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.161897   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:45.161902   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:45.161949   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:45.198677   71168 cri.go:89] found id: ""
	I0401 19:35:45.198702   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.198710   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:45.198715   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:45.198776   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:45.236938   71168 cri.go:89] found id: ""
	I0401 19:35:45.236972   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.236983   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:45.236990   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:45.237052   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:45.280621   71168 cri.go:89] found id: ""
	I0401 19:35:45.280650   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.280661   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:45.280668   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:45.280727   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:45.326794   71168 cri.go:89] found id: ""
	I0401 19:35:45.326818   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.326827   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:45.326834   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:45.326892   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:45.369405   71168 cri.go:89] found id: ""
	I0401 19:35:45.369431   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.369441   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:45.369446   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:45.369501   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:45.407609   71168 cri.go:89] found id: ""
	I0401 19:35:45.407635   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.407643   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:45.407648   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:45.407720   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:45.444848   71168 cri.go:89] found id: ""
	I0401 19:35:45.444871   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.444881   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:45.444891   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:45.444911   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:45.531938   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:45.531957   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:45.531972   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:45.617109   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:45.617141   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:45.663559   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:45.663591   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:45.717622   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:45.717670   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:49.004037   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:51.004650   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:48.234834   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:48.250543   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:48.250606   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:48.294396   71168 cri.go:89] found id: ""
	I0401 19:35:48.294423   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.294432   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:48.294439   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:48.294504   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:48.336866   71168 cri.go:89] found id: ""
	I0401 19:35:48.336892   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.336902   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:48.336908   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:48.336965   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:48.376031   71168 cri.go:89] found id: ""
	I0401 19:35:48.376065   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.376076   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:48.376084   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:48.376142   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:48.414975   71168 cri.go:89] found id: ""
	I0401 19:35:48.414995   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.415003   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:48.415008   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:48.415058   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:48.453484   71168 cri.go:89] found id: ""
	I0401 19:35:48.453513   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.453524   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:48.453532   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:48.453593   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:48.487712   71168 cri.go:89] found id: ""
	I0401 19:35:48.487739   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.487749   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:48.487757   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:48.487815   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:48.533331   71168 cri.go:89] found id: ""
	I0401 19:35:48.533364   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.533375   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:48.533383   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:48.533442   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:48.574103   71168 cri.go:89] found id: ""
	I0401 19:35:48.574131   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.574139   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:48.574147   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:48.574160   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:48.632068   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:48.632098   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:48.649342   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:48.649369   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:48.721799   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:48.721822   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:48.721836   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:48.821549   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:48.821584   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:51.364852   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:51.380281   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:51.380362   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:51.423383   71168 cri.go:89] found id: ""
	I0401 19:35:51.423412   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.423422   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:51.423430   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:51.423490   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:51.470331   71168 cri.go:89] found id: ""
	I0401 19:35:51.470359   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.470370   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:51.470378   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:51.470441   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:51.520310   71168 cri.go:89] found id: ""
	I0401 19:35:51.520339   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.520350   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:51.520358   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:51.520414   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:51.568681   71168 cri.go:89] found id: ""
	I0401 19:35:51.568706   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.568716   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:51.568724   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:51.568843   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:51.615146   71168 cri.go:89] found id: ""
	I0401 19:35:51.615174   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.615185   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:51.615193   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:51.615256   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:51.658678   71168 cri.go:89] found id: ""
	I0401 19:35:51.658703   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.658712   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:51.658720   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:51.658791   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:51.700071   71168 cri.go:89] found id: ""
	I0401 19:35:51.700097   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.700108   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:51.700114   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:51.700177   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:51.746772   71168 cri.go:89] found id: ""
	I0401 19:35:51.746798   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.746809   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:51.746826   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:51.746849   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:51.762321   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:51.762350   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:51.843300   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:51.843322   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:51.843337   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:51.919059   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:51.919090   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:51.965899   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:51.965925   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:53.564613   70687 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.745530657s)
	I0401 19:35:53.564696   70687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:35:53.582161   70687 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:35:53.593313   70687 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:35:53.604441   70687 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:35:53.604460   70687 kubeadm.go:156] found existing configuration files:
	
	I0401 19:35:53.604502   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:35:53.615367   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:35:53.615426   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:35:53.626375   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:35:53.636924   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:35:53.636975   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:35:53.647493   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:35:53.657319   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:35:53.657373   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:35:53.667422   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:35:53.677235   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:35:53.677308   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:35:53.688043   70687 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:35:53.894204   70687 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:35:53.504486   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:55.505966   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:54.523484   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:54.542004   71168 kubeadm.go:591] duration metric: took 4m4.024054342s to restartPrimaryControlPlane
	W0401 19:35:54.542067   71168 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:35:54.542088   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:35:55.179619   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:35:55.196424   71168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:35:55.209517   71168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:35:55.222643   71168 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:35:55.222664   71168 kubeadm.go:156] found existing configuration files:
	
	I0401 19:35:55.222714   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:35:55.234756   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:35:55.234813   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:35:55.246725   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:35:55.258440   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:35:55.258499   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:35:55.270106   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:35:55.280724   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:35:55.280776   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:35:55.293630   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:35:55.305588   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:35:55.305660   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:35:55.318308   71168 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:35:55.574896   71168 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:35:58.004494   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:00.505168   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:02.622337   70687 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 19:36:02.622433   70687 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:36:02.622548   70687 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:36:02.622659   70687 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:36:02.622794   70687 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:36:02.622883   70687 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:36:02.624550   70687 out.go:204]   - Generating certificates and keys ...
	I0401 19:36:02.624640   70687 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:36:02.624734   70687 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:36:02.624861   70687 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:36:02.624952   70687 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:36:02.625042   70687 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:36:02.625114   70687 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:36:02.625206   70687 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:36:02.625271   70687 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:36:02.625337   70687 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:36:02.625398   70687 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:36:02.625430   70687 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:36:02.625475   70687 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:36:02.625519   70687 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:36:02.625567   70687 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 19:36:02.625630   70687 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:36:02.625744   70687 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:36:02.625825   70687 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:36:02.625938   70687 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:36:02.626041   70687 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:36:02.627616   70687 out.go:204]   - Booting up control plane ...
	I0401 19:36:02.627744   70687 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:36:02.627812   70687 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:36:02.627878   70687 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:36:02.627976   70687 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:36:02.628046   70687 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:36:02.628098   70687 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:36:02.628273   70687 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:36:02.628354   70687 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.502318 seconds
	I0401 19:36:02.628467   70687 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 19:36:02.628587   70687 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 19:36:02.628642   70687 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 19:36:02.628800   70687 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-882095 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 19:36:02.628849   70687 kubeadm.go:309] [bootstrap-token] Using token: 821cxx.fac41nwqi8u5mwgu
	I0401 19:36:02.630202   70687 out.go:204]   - Configuring RBAC rules ...
	I0401 19:36:02.630328   70687 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 19:36:02.630413   70687 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 19:36:02.630593   70687 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 19:36:02.630794   70687 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 19:36:02.630941   70687 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 19:36:02.631049   70687 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 19:36:02.631205   70687 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 19:36:02.631255   70687 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 19:36:02.631318   70687 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 19:36:02.631326   70687 kubeadm.go:309] 
	I0401 19:36:02.631412   70687 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 19:36:02.631421   70687 kubeadm.go:309] 
	I0401 19:36:02.631527   70687 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 19:36:02.631534   70687 kubeadm.go:309] 
	I0401 19:36:02.631560   70687 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 19:36:02.631649   70687 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 19:36:02.631721   70687 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 19:36:02.631731   70687 kubeadm.go:309] 
	I0401 19:36:02.631810   70687 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 19:36:02.631822   70687 kubeadm.go:309] 
	I0401 19:36:02.631896   70687 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 19:36:02.631910   70687 kubeadm.go:309] 
	I0401 19:36:02.631986   70687 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 19:36:02.632088   70687 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 19:36:02.632181   70687 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 19:36:02.632190   70687 kubeadm.go:309] 
	I0401 19:36:02.632319   70687 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 19:36:02.632427   70687 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 19:36:02.632437   70687 kubeadm.go:309] 
	I0401 19:36:02.632532   70687 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 821cxx.fac41nwqi8u5mwgu \
	I0401 19:36:02.632695   70687 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 \
	I0401 19:36:02.632726   70687 kubeadm.go:309] 	--control-plane 
	I0401 19:36:02.632736   70687 kubeadm.go:309] 
	I0401 19:36:02.632860   70687 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 19:36:02.632875   70687 kubeadm.go:309] 
	I0401 19:36:02.632983   70687 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 821cxx.fac41nwqi8u5mwgu \
	I0401 19:36:02.633118   70687 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 
	I0401 19:36:02.633132   70687 cni.go:84] Creating CNI manager for ""
	I0401 19:36:02.633138   70687 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:36:02.634595   70687 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:36:02.635812   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:36:02.671750   70687 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0401 19:36:02.705562   70687 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 19:36:02.705657   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:02.705671   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-882095 minikube.k8s.io/updated_at=2024_04_01T19_36_02_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=embed-certs-882095 minikube.k8s.io/primary=true
	I0401 19:36:02.762626   70687 ops.go:34] apiserver oom_adj: -16
	I0401 19:36:03.065957   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:03.566513   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:04.066178   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:04.566321   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:05.066798   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:05.566877   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:06.066520   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:03.004878   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:05.505057   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:06.566982   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:07.066931   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:07.566107   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:08.066843   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:08.566186   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:09.066550   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:09.566205   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:10.066287   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:10.566902   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:11.066656   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:08.005380   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:10.504026   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:11.566894   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:12.066235   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:12.566599   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:13.066132   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:13.566865   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:14.066759   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:14.566435   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:15.066907   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:15.566851   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:16.066880   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:16.158125   70687 kubeadm.go:1107] duration metric: took 13.452541301s to wait for elevateKubeSystemPrivileges
	W0401 19:36:16.158168   70687 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 19:36:16.158176   70687 kubeadm.go:393] duration metric: took 5m8.800288084s to StartCluster
	I0401 19:36:16.158195   70687 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:36:16.158268   70687 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:36:16.159976   70687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:36:16.160254   70687 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:36:16.162239   70687 out.go:177] * Verifying Kubernetes components...
	I0401 19:36:16.160346   70687 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 19:36:16.162276   70687 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-882095"
	I0401 19:36:16.162311   70687 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-882095"
	W0401 19:36:16.162320   70687 addons.go:243] addon storage-provisioner should already be in state true
	I0401 19:36:16.162339   70687 addons.go:69] Setting default-storageclass=true in profile "embed-certs-882095"
	I0401 19:36:16.162348   70687 addons.go:69] Setting metrics-server=true in profile "embed-certs-882095"
	I0401 19:36:16.162363   70687 addons.go:234] Setting addon metrics-server=true in "embed-certs-882095"
	W0401 19:36:16.162371   70687 addons.go:243] addon metrics-server should already be in state true
	I0401 19:36:16.162377   70687 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-882095"
	I0401 19:36:16.162384   70687 host.go:66] Checking if "embed-certs-882095" exists ...
	I0401 19:36:16.162345   70687 host.go:66] Checking if "embed-certs-882095" exists ...
	I0401 19:36:16.163767   70687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:36:16.160484   70687 config.go:182] Loaded profile config "embed-certs-882095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:36:16.162673   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.162687   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.163886   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.163900   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.162704   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.163963   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.180743   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41647
	I0401 19:36:16.180759   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46707
	I0401 19:36:16.180746   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44419
	I0401 19:36:16.181334   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.181342   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.181369   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.181830   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.181848   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.181973   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.181991   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.182001   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.182007   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.182187   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.182360   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.182393   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.182592   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:36:16.182726   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.182753   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.182829   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.182871   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.186198   70687 addons.go:234] Setting addon default-storageclass=true in "embed-certs-882095"
	W0401 19:36:16.186226   70687 addons.go:243] addon default-storageclass should already be in state true
	I0401 19:36:16.186258   70687 host.go:66] Checking if "embed-certs-882095" exists ...
	I0401 19:36:16.186603   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.186636   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.198494   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I0401 19:36:16.198862   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.199298   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.199315   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.199777   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.200056   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:36:16.201955   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39769
	I0401 19:36:16.202167   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:36:16.202416   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.204728   70687 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:36:16.202891   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.205309   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35751
	I0401 19:36:16.207964   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.208022   70687 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:36:16.208038   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 19:36:16.208057   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:36:16.208345   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.208482   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.208550   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:36:16.209106   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.209121   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.209764   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.210220   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.210258   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.211015   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:36:16.213549   70687 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 19:36:16.212105   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.215606   70687 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 19:36:16.213577   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:36:16.215625   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 19:36:16.215632   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.212867   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:36:16.215647   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:36:16.215791   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:36:16.215913   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:36:16.216028   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:36:16.218302   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.218924   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:36:16.218948   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.219174   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:36:16.219340   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:36:16.219496   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:36:16.219818   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:36:16.227813   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0401 19:36:16.228198   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.228612   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.228635   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.228989   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.229159   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:36:16.230712   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:36:16.230969   70687 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 19:36:16.230987   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 19:36:16.231003   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:36:16.233712   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.234102   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:36:16.234126   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.234273   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:36:16.234435   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:36:16.234593   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:36:16.234753   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:36:16.332504   70687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:36:16.354423   70687 node_ready.go:35] waiting up to 6m0s for node "embed-certs-882095" to be "Ready" ...
	I0401 19:36:16.363527   70687 node_ready.go:49] node "embed-certs-882095" has status "Ready":"True"
	I0401 19:36:16.363555   70687 node_ready.go:38] duration metric: took 9.10669ms for node "embed-certs-882095" to be "Ready" ...
	I0401 19:36:16.363567   70687 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:36:16.369606   70687 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-fx6hf" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:16.435769   70687 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 19:36:16.435793   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 19:36:16.450934   70687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:36:16.468137   70687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 19:36:16.474209   70687 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 19:36:16.474233   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 19:36:13.003028   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:15.004924   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:16.530201   70687 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:36:16.530222   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 19:36:16.607557   70687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:36:17.044156   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.044183   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.044165   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.044244   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.044569   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.044606   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.044617   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.044624   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.044630   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.044639   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.044656   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.044657   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Closing plugin on server side
	I0401 19:36:17.044670   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.044616   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Closing plugin on server side
	I0401 19:36:17.044947   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.044963   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.044964   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.044973   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.045019   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Closing plugin on server side
	I0401 19:36:17.058441   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.058469   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.058718   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.058735   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.276263   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.276283   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.276548   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.276562   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.276571   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.276584   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.276823   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.276837   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.276852   70687 addons.go:470] Verifying addon metrics-server=true in "embed-certs-882095"
	I0401 19:36:17.278536   70687 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0401 19:36:17.279740   70687 addons.go:505] duration metric: took 1.119396s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0401 19:36:18.412746   70687 pod_ready.go:102] pod "coredns-76f75df574-fx6hf" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:19.378799   70687 pod_ready.go:92] pod "coredns-76f75df574-fx6hf" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.378819   70687 pod_ready.go:81] duration metric: took 3.009189982s for pod "coredns-76f75df574-fx6hf" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.378828   70687 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hwbw6" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.384482   70687 pod_ready.go:92] pod "coredns-76f75df574-hwbw6" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.384498   70687 pod_ready.go:81] duration metric: took 5.664781ms for pod "coredns-76f75df574-hwbw6" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.384507   70687 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.390258   70687 pod_ready.go:92] pod "etcd-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.390274   70687 pod_ready.go:81] duration metric: took 5.761319ms for pod "etcd-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.390281   70687 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.395592   70687 pod_ready.go:92] pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.395611   70687 pod_ready.go:81] duration metric: took 5.323181ms for pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.395622   70687 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.400979   70687 pod_ready.go:92] pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.400994   70687 pod_ready.go:81] duration metric: took 5.365282ms for pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.401002   70687 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mbs4m" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.775009   70687 pod_ready.go:92] pod "kube-proxy-mbs4m" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.775036   70687 pod_ready.go:81] duration metric: took 374.027521ms for pod "kube-proxy-mbs4m" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.775047   70687 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:20.174962   70687 pod_ready.go:92] pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:20.174986   70687 pod_ready.go:81] duration metric: took 399.930828ms for pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:20.174994   70687 pod_ready.go:38] duration metric: took 3.811414774s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:36:20.175006   70687 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:36:20.175064   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:36:20.191452   70687 api_server.go:72] duration metric: took 4.031156406s to wait for apiserver process to appear ...
	I0401 19:36:20.191477   70687 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:36:20.191498   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:36:20.196706   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I0401 19:36:20.197772   70687 api_server.go:141] control plane version: v1.29.3
	I0401 19:36:20.197791   70687 api_server.go:131] duration metric: took 6.308074ms to wait for apiserver health ...
	I0401 19:36:20.197799   70687 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:36:20.380616   70687 system_pods.go:59] 9 kube-system pods found
	I0401 19:36:20.380645   70687 system_pods.go:61] "coredns-76f75df574-fx6hf" [1c07b740-3374-4a54-a786-784b23ec6b83] Running
	I0401 19:36:20.380651   70687 system_pods.go:61] "coredns-76f75df574-hwbw6" [7b12145a-2689-47e9-9724-d80790ed079c] Running
	I0401 19:36:20.380657   70687 system_pods.go:61] "etcd-embed-certs-882095" [3848d128-2fde-42f5-9543-b8d0343ba15b] Running
	I0401 19:36:20.380663   70687 system_pods.go:61] "kube-apiserver-embed-certs-882095" [116c5cd1-2d04-4a85-96e9-bd1e6af4cba4] Running
	I0401 19:36:20.380668   70687 system_pods.go:61] "kube-controller-manager-embed-certs-882095" [8a2282cf-2a87-4cee-a482-355e92048642] Running
	I0401 19:36:20.380672   70687 system_pods.go:61] "kube-proxy-mbs4m" [ffccbae0-7538-4a75-a6ce-afce49865f07] Running
	I0401 19:36:20.380676   70687 system_pods.go:61] "kube-scheduler-embed-certs-882095" [d2554007-1c9c-4238-809a-72aae1fb7de3] Running
	I0401 19:36:20.380684   70687 system_pods.go:61] "metrics-server-57f55c9bc5-dktr6" [c6adfcab-c746-4ad8-abe2-8b300389a4f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:20.380689   70687 system_pods.go:61] "storage-provisioner" [bcff0d1d-a555-4b25-9aa5-7ab1188c21fd] Running
	I0401 19:36:20.380700   70687 system_pods.go:74] duration metric: took 182.895079ms to wait for pod list to return data ...
	I0401 19:36:20.380711   70687 default_sa.go:34] waiting for default service account to be created ...
	I0401 19:36:20.574739   70687 default_sa.go:45] found service account: "default"
	I0401 19:36:20.574771   70687 default_sa.go:55] duration metric: took 194.049249ms for default service account to be created ...
	I0401 19:36:20.574785   70687 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 19:36:20.781600   70687 system_pods.go:86] 9 kube-system pods found
	I0401 19:36:20.781630   70687 system_pods.go:89] "coredns-76f75df574-fx6hf" [1c07b740-3374-4a54-a786-784b23ec6b83] Running
	I0401 19:36:20.781638   70687 system_pods.go:89] "coredns-76f75df574-hwbw6" [7b12145a-2689-47e9-9724-d80790ed079c] Running
	I0401 19:36:20.781658   70687 system_pods.go:89] "etcd-embed-certs-882095" [3848d128-2fde-42f5-9543-b8d0343ba15b] Running
	I0401 19:36:20.781664   70687 system_pods.go:89] "kube-apiserver-embed-certs-882095" [116c5cd1-2d04-4a85-96e9-bd1e6af4cba4] Running
	I0401 19:36:20.781672   70687 system_pods.go:89] "kube-controller-manager-embed-certs-882095" [8a2282cf-2a87-4cee-a482-355e92048642] Running
	I0401 19:36:20.781678   70687 system_pods.go:89] "kube-proxy-mbs4m" [ffccbae0-7538-4a75-a6ce-afce49865f07] Running
	I0401 19:36:20.781686   70687 system_pods.go:89] "kube-scheduler-embed-certs-882095" [d2554007-1c9c-4238-809a-72aae1fb7de3] Running
	I0401 19:36:20.781695   70687 system_pods.go:89] "metrics-server-57f55c9bc5-dktr6" [c6adfcab-c746-4ad8-abe2-8b300389a4f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:20.781705   70687 system_pods.go:89] "storage-provisioner" [bcff0d1d-a555-4b25-9aa5-7ab1188c21fd] Running
	I0401 19:36:20.781722   70687 system_pods.go:126] duration metric: took 206.928658ms to wait for k8s-apps to be running ...
	I0401 19:36:20.781738   70687 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 19:36:20.781789   70687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:36:20.798910   70687 system_svc.go:56] duration metric: took 17.163227ms WaitForService to wait for kubelet
	I0401 19:36:20.798940   70687 kubeadm.go:576] duration metric: took 4.638649198s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:36:20.798962   70687 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:36:20.975011   70687 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:36:20.975034   70687 node_conditions.go:123] node cpu capacity is 2
	I0401 19:36:20.975045   70687 node_conditions.go:105] duration metric: took 176.077669ms to run NodePressure ...
	I0401 19:36:20.975055   70687 start.go:240] waiting for startup goroutines ...
	I0401 19:36:20.975061   70687 start.go:245] waiting for cluster config update ...
	I0401 19:36:20.975070   70687 start.go:254] writing updated cluster config ...
	I0401 19:36:20.975313   70687 ssh_runner.go:195] Run: rm -f paused
	I0401 19:36:21.024261   70687 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0401 19:36:21.026583   70687 out.go:177] * Done! kubectl is now configured to use "embed-certs-882095" cluster and "default" namespace by default
	I0401 19:36:17.504621   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:20.003964   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:18.623277   70962 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.523094705s)
	I0401 19:36:18.623344   70962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:36:18.640939   70962 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:36:18.653983   70962 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:36:18.666162   70962 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:36:18.666182   70962 kubeadm.go:156] found existing configuration files:
	
	I0401 19:36:18.666233   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0401 19:36:18.679043   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:36:18.679092   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:36:18.690185   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0401 19:36:18.703017   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:36:18.703078   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:36:18.714986   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0401 19:36:18.727138   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:36:18.727188   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:36:18.737886   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0401 19:36:18.748013   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:36:18.748064   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
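The grep/rm sequence above is minikube clearing out kubeconfig files that no longer reference the expected control-plane endpoint before it re-runs kubeadm init. Condensed into a single loop, and run locally here only for illustration (minikube issues the same commands over SSH through ssh_runner), the check looks roughly like this:

    # Drop any kubeconfig that no longer points at the expected API endpoint,
    # so `kubeadm init` regenerates it instead of reusing a stale file.
    endpoint="https://control-plane.minikube.internal:8444"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done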
	I0401 19:36:18.758552   70962 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:36:18.988309   70962 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:36:22.004400   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:24.004510   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:26.504264   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:28.053408   70962 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 19:36:28.053478   70962 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:36:28.053544   70962 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:36:28.053677   70962 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:36:28.053837   70962 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:36:28.053953   70962 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:36:28.055426   70962 out.go:204]   - Generating certificates and keys ...
	I0401 19:36:28.055513   70962 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:36:28.055614   70962 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:36:28.055742   70962 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:36:28.055834   70962 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:36:28.055942   70962 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:36:28.056022   70962 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:36:28.056104   70962 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:36:28.056167   70962 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:36:28.056250   70962 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:36:28.056331   70962 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:36:28.056371   70962 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:36:28.056449   70962 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:36:28.056531   70962 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:36:28.056600   70962 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 19:36:28.056677   70962 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:36:28.056772   70962 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:36:28.056870   70962 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:36:28.057006   70962 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:36:28.057100   70962 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:36:28.058575   70962 out.go:204]   - Booting up control plane ...
	I0401 19:36:28.058693   70962 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:36:28.058773   70962 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:36:28.058830   70962 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:36:28.058923   70962 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:36:28.058998   70962 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:36:28.059032   70962 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:36:28.059201   70962 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:36:28.059307   70962 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003148 seconds
	I0401 19:36:28.059432   70962 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 19:36:28.059592   70962 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 19:36:28.059665   70962 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 19:36:28.059892   70962 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-734648 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 19:36:28.059966   70962 kubeadm.go:309] [bootstrap-token] Using token: x76swh.zbuhmc8jrh5hodf9
	I0401 19:36:28.061321   70962 out.go:204]   - Configuring RBAC rules ...
	I0401 19:36:28.061450   70962 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 19:36:28.061577   70962 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 19:36:28.061803   70962 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 19:36:28.061993   70962 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 19:36:28.062153   70962 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 19:36:28.062252   70962 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 19:36:28.062363   70962 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 19:36:28.062422   70962 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 19:36:28.062481   70962 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 19:36:28.062493   70962 kubeadm.go:309] 
	I0401 19:36:28.062556   70962 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 19:36:28.062569   70962 kubeadm.go:309] 
	I0401 19:36:28.062686   70962 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 19:36:28.062697   70962 kubeadm.go:309] 
	I0401 19:36:28.062727   70962 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 19:36:28.062805   70962 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 19:36:28.062872   70962 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 19:36:28.062886   70962 kubeadm.go:309] 
	I0401 19:36:28.062959   70962 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 19:36:28.062969   70962 kubeadm.go:309] 
	I0401 19:36:28.063050   70962 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 19:36:28.063061   70962 kubeadm.go:309] 
	I0401 19:36:28.063103   70962 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 19:36:28.063172   70962 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 19:36:28.063234   70962 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 19:36:28.063240   70962 kubeadm.go:309] 
	I0401 19:36:28.063337   70962 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 19:36:28.063440   70962 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 19:36:28.063453   70962 kubeadm.go:309] 
	I0401 19:36:28.063559   70962 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token x76swh.zbuhmc8jrh5hodf9 \
	I0401 19:36:28.063676   70962 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 \
	I0401 19:36:28.063725   70962 kubeadm.go:309] 	--control-plane 
	I0401 19:36:28.063734   70962 kubeadm.go:309] 
	I0401 19:36:28.063835   70962 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 19:36:28.063844   70962 kubeadm.go:309] 
	I0401 19:36:28.063955   70962 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token x76swh.zbuhmc8jrh5hodf9 \
	I0401 19:36:28.064092   70962 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 
	I0401 19:36:28.064105   70962 cni.go:84] Creating CNI manager for ""
	I0401 19:36:28.064114   70962 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:36:28.065560   70962 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:36:28.505029   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:31.005436   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:28.066823   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:36:28.089595   70962 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
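The 457-byte conflist itself is not reproduced in the log. The snippet below is only an illustrative bridge CNI configuration of the general shape used for the kvm2 + crio combination; the pod subnet, plugin list, and field values are assumptions, not the contents of this run's file:

    # Illustrative only: a minimal bridge CNI config with host-local IPAM.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF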
	I0401 19:36:28.150074   70962 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 19:36:28.150195   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:28.150206   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-734648 minikube.k8s.io/updated_at=2024_04_01T19_36_28_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=default-k8s-diff-port-734648 minikube.k8s.io/primary=true
	I0401 19:36:28.494391   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:28.529148   70962 ops.go:34] apiserver oom_adj: -16
	I0401 19:36:28.994780   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:29.494976   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:29.994627   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:30.495192   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:30.995334   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:31.494861   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:31.994576   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:33.505264   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:35.506298   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:32.495185   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:32.995090   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:33.494755   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:33.994758   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:34.494609   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:34.995423   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:35.495219   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:35.994557   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:36.495175   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:36.994857   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:37.494725   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:37.994846   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:38.494687   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:38.994615   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:39.494929   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:39.994514   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:40.494838   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:40.994846   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:41.105036   70962 kubeadm.go:1107] duration metric: took 12.954907711s to wait for elevateKubeSystemPrivileges
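The burst of `kubectl get sa default` calls above, repeated roughly every half second, is the elevateKubeSystemPrivileges wait: minikube creates the minikube-rbac ClusterRoleBinding and then polls until kube-controller-manager has created the default ServiceAccount. A rough shell equivalent, assuming a working kubeconfig on the node:

    # Grant cluster-admin to kube-system:default (the minikube-rbac binding),
    # then wait for the default ServiceAccount to exist before continuing.
    kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin \
      --serviceaccount=kube-system:default
    until kubectl get sa default >/dev/null 2>&1; do
      sleep 0.5
    done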
	W0401 19:36:41.105072   70962 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 19:36:41.105080   70962 kubeadm.go:393] duration metric: took 5m13.291890816s to StartCluster
	I0401 19:36:41.105098   70962 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:36:41.105193   70962 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:36:41.107226   70962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:36:41.107451   70962 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.145 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:36:41.109245   70962 out.go:177] * Verifying Kubernetes components...
	I0401 19:36:41.107543   70962 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 19:36:41.107682   70962 config.go:182] Loaded profile config "default-k8s-diff-port-734648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:36:41.110583   70962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:36:41.110596   70962 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-734648"
	I0401 19:36:41.110621   70962 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-734648"
	I0401 19:36:41.110620   70962 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-734648"
	I0401 19:36:41.110652   70962 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-734648"
	I0401 19:36:41.110588   70962 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-734648"
	W0401 19:36:41.110665   70962 addons.go:243] addon metrics-server should already be in state true
	I0401 19:36:41.110685   70962 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-734648"
	W0401 19:36:41.110699   70962 addons.go:243] addon storage-provisioner should already be in state true
	I0401 19:36:41.110700   70962 host.go:66] Checking if "default-k8s-diff-port-734648" exists ...
	I0401 19:36:41.110727   70962 host.go:66] Checking if "default-k8s-diff-port-734648" exists ...
	I0401 19:36:41.111032   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.111039   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.111062   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.111098   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.111126   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.111158   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.129376   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46657
	I0401 19:36:41.130833   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38623
	I0401 19:36:41.131158   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.131258   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.131761   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.131786   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.132119   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.132313   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.132437   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.132477   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:36:41.133129   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36213
	I0401 19:36:41.133449   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.133456   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.133871   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.133894   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.133990   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.134021   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.134159   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.134572   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.134609   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.143808   70962 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-734648"
	W0401 19:36:41.143829   70962 addons.go:243] addon default-storageclass should already be in state true
	I0401 19:36:41.143858   70962 host.go:66] Checking if "default-k8s-diff-port-734648" exists ...
	I0401 19:36:41.144202   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.144241   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.154009   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38703
	I0401 19:36:41.156112   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45449
	I0401 19:36:41.156579   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.157085   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.157112   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.157458   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.157631   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:36:41.157891   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.158593   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.158615   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.158924   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.159123   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:36:41.160683   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:36:41.162801   70962 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 19:36:41.164275   70962 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 19:36:41.164292   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 19:36:41.164310   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:36:41.162762   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:36:41.163321   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39643
	I0401 19:36:41.166161   70962 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:36:38.004666   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:40.005118   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:41.164866   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.167473   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.167806   70962 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:36:41.167833   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 19:36:41.167850   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:36:41.168056   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.168074   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.168145   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:36:41.168163   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.168194   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:36:41.168353   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:36:41.168429   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.168583   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:36:41.168723   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:36:41.169323   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.169374   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.170857   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.171269   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:36:41.171323   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.171412   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:36:41.171576   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:36:41.171723   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:36:41.171860   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:36:41.191280   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42133
	I0401 19:36:41.191576   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.192122   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.192152   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.192511   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.192673   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:36:41.194286   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:36:41.194528   70962 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 19:36:41.194546   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 19:36:41.194564   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:36:41.197639   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.198235   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:36:41.198259   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.198296   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:36:41.198491   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:36:41.198670   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:36:41.198857   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:36:41.308472   70962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:36:41.334121   70962 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-734648" to be "Ready" ...
	I0401 19:36:41.343898   70962 node_ready.go:49] node "default-k8s-diff-port-734648" has status "Ready":"True"
	I0401 19:36:41.343943   70962 node_ready.go:38] duration metric: took 9.780821ms for node "default-k8s-diff-port-734648" to be "Ready" ...
	I0401 19:36:41.343952   70962 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:36:41.352294   70962 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.362318   70962 pod_ready.go:92] pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:41.362345   70962 pod_ready.go:81] duration metric: took 10.020335ms for pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.362358   70962 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.367338   70962 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:41.367356   70962 pod_ready.go:81] duration metric: took 4.990987ms for pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.367364   70962 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.372379   70962 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:41.372401   70962 pod_ready.go:81] duration metric: took 5.030239ms for pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.372412   70962 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.377862   70962 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:41.377881   70962 pod_ready.go:81] duration metric: took 5.460968ms for pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.377891   70962 pod_ready.go:38] duration metric: took 33.929349ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:36:41.377915   70962 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:36:41.377965   70962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:36:41.396518   70962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:36:41.407024   70962 api_server.go:72] duration metric: took 299.545156ms to wait for apiserver process to appear ...
	I0401 19:36:41.407049   70962 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:36:41.407068   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:36:41.411429   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 200:
	ok
	I0401 19:36:41.412620   70962 api_server.go:141] control plane version: v1.29.3
	I0401 19:36:41.412640   70962 api_server.go:131] duration metric: took 5.58478ms to wait for apiserver health ...
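The health probe above is a plain HTTPS GET against the apiserver's /healthz endpoint on this profile's non-default port 8444. An equivalent manual check from the host looks like the following (certificate verification is skipped here only for a quick manual check):

    # A 200 response with body "ok" means the control plane is healthy.
    curl -sk https://192.168.61.145:8444/healthz
    # ok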
	I0401 19:36:41.412646   70962 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:36:41.426474   70962 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 19:36:41.426500   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 19:36:41.447003   70962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 19:36:41.470135   70962 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 19:36:41.470153   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 19:36:41.526684   70962 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:36:41.526710   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 19:36:41.540871   70962 system_pods.go:59] 4 kube-system pods found
	I0401 19:36:41.540894   70962 system_pods.go:61] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:41.540900   70962 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:41.540905   70962 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:41.540908   70962 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:41.540914   70962 system_pods.go:74] duration metric: took 128.262683ms to wait for pod list to return data ...
	I0401 19:36:41.540920   70962 default_sa.go:34] waiting for default service account to be created ...
	I0401 19:36:41.625507   70962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:36:41.750232   70962 default_sa.go:45] found service account: "default"
	I0401 19:36:41.750261   70962 default_sa.go:55] duration metric: took 209.334562ms for default service account to be created ...
	I0401 19:36:41.750273   70962 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 19:36:41.968623   70962 system_pods.go:86] 7 kube-system pods found
	I0401 19:36:41.968651   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Pending
	I0401 19:36:41.968657   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Pending
	I0401 19:36:41.968663   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:41.968669   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:41.968675   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:41.968683   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:36:41.968690   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:41.968712   70962 retry.go:31] will retry after 288.42332ms: missing components: kube-dns, kube-proxy
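From here the log shows retry.go re-listing kube-system pods with a growing backoff until no required component is missing. A simplified, hand-rolled version of that wait is sketched below; it checks only the two components reported missing here rather than minikube's full list:

    # Simplified wait: loop until every coredns and kube-proxy pod reports Running.
    # Assumes the pods have already been scheduled into kube-system.
    until kubectl -n kube-system get pods \
        | awk '/coredns|kube-proxy/ && $3 != "Running" {bad=1} END {exit bad}'; do
      sleep 1
    done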
	I0401 19:36:42.231814   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.231848   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.231904   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.231925   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.232160   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Closing plugin on server side
	I0401 19:36:42.232161   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.232179   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.232187   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.232191   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Closing plugin on server side
	I0401 19:36:42.232199   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.232223   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.232235   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.232244   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.232255   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.232431   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.232478   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.232578   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Closing plugin on server side
	I0401 19:36:42.232612   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.232629   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.251515   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.251538   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.251795   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.251809   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.267102   70962 system_pods.go:86] 8 kube-system pods found
	I0401 19:36:42.267135   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:42.267148   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:42.267163   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:42.267181   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:42.267187   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:42.267196   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:36:42.267204   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:42.267222   70962 system_pods.go:89] "storage-provisioner" [8509e661-1b53-4018-b6b0-b6a5e242768d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:36:42.267244   70962 retry.go:31] will retry after 336.906399ms: missing components: kube-dns, kube-proxy
	I0401 19:36:42.632180   70962 system_pods.go:86] 9 kube-system pods found
	I0401 19:36:42.632212   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:42.632223   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:42.632232   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:42.632240   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:42.632247   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:42.632257   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:36:42.632264   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:42.632275   70962 system_pods.go:89] "metrics-server-57f55c9bc5-fj5x5" [e25fa51c-d80e-4ddc-898f-3b9903746537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:42.632289   70962 system_pods.go:89] "storage-provisioner" [8509e661-1b53-4018-b6b0-b6a5e242768d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:36:42.632313   70962 retry.go:31] will retry after 406.571029ms: missing components: kube-dns, kube-proxy
	I0401 19:36:42.739308   70962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.113759645s)
	I0401 19:36:42.739364   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.739383   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.739822   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.739842   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.739859   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Closing plugin on server side
	I0401 19:36:42.739867   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.739890   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.740171   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.740186   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.740198   70962 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-734648"
	I0401 19:36:42.742233   70962 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0401 19:36:42.743265   70962 addons.go:505] duration metric: took 1.635721448s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
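The metrics-server manifests were applied above, but its pod is still reported Pending in the surrounding pod lists. A generic way to confirm the addon actually becomes serviceable (not something the test harness runs) is to wait for the Deployment rollout and then query the metrics API:

    # Wait for the Deployment to roll out, then confirm the metrics API answers.
    kubectl -n kube-system rollout status deployment/metrics-server --timeout=120s
    kubectl top nodes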
	I0401 19:36:43.053149   70962 system_pods.go:86] 9 kube-system pods found
	I0401 19:36:43.053183   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:43.053195   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:43.053205   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:43.053215   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:43.053223   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:43.053235   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:36:43.053240   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:43.053249   70962 system_pods.go:89] "metrics-server-57f55c9bc5-fj5x5" [e25fa51c-d80e-4ddc-898f-3b9903746537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:43.053258   70962 system_pods.go:89] "storage-provisioner" [8509e661-1b53-4018-b6b0-b6a5e242768d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:36:43.053275   70962 retry.go:31] will retry after 524.250739ms: missing components: kube-dns, kube-proxy
	I0401 19:36:43.591419   70962 system_pods.go:86] 9 kube-system pods found
	I0401 19:36:43.591451   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Running
	I0401 19:36:43.591463   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:43.591471   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:43.591480   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:43.591487   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:43.591493   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Running
	I0401 19:36:43.591498   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:43.591508   70962 system_pods.go:89] "metrics-server-57f55c9bc5-fj5x5" [e25fa51c-d80e-4ddc-898f-3b9903746537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:43.591517   70962 system_pods.go:89] "storage-provisioner" [8509e661-1b53-4018-b6b0-b6a5e242768d] Running
	I0401 19:36:43.591529   70962 system_pods.go:126] duration metric: took 1.841248999s to wait for k8s-apps to be running ...
	I0401 19:36:43.591561   70962 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 19:36:43.591613   70962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:36:43.611873   70962 system_svc.go:56] duration metric: took 20.296001ms WaitForService to wait for kubelet
	I0401 19:36:43.611907   70962 kubeadm.go:576] duration metric: took 2.504430824s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:36:43.611930   70962 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:36:43.617697   70962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:36:43.617720   70962 node_conditions.go:123] node cpu capacity is 2
	I0401 19:36:43.617732   70962 node_conditions.go:105] duration metric: took 5.796357ms to run NodePressure ...
	I0401 19:36:43.617745   70962 start.go:240] waiting for startup goroutines ...
	I0401 19:36:43.617754   70962 start.go:245] waiting for cluster config update ...
	I0401 19:36:43.617765   70962 start.go:254] writing updated cluster config ...
	I0401 19:36:43.618023   70962 ssh_runner.go:195] Run: rm -f paused
	I0401 19:36:43.666581   70962 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0401 19:36:43.668685   70962 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-734648" cluster and "default" namespace by default
	I0401 19:36:42.505149   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:45.003855   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:47.004247   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:49.504898   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:51.505403   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:54.005163   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:56.503395   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:58.503791   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:00.504001   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:02.504193   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:05.003540   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:07.003582   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:09.503975   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:12.005037   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:14.503460   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:16.504630   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:19.004307   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:21.004909   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:23.503286   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:25.503469   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:27.503520   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:30.004792   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:32.503693   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:35.005137   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:37.504848   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:39.504961   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:41.510644   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:44.004680   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:46.005118   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:51.561231   71168 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 19:37:51.561356   71168 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0401 19:37:51.563350   71168 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0401 19:37:51.563417   71168 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:37:51.563497   71168 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:37:51.563596   71168 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:37:51.563711   71168 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:37:51.563797   71168 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:37:51.565710   71168 out.go:204]   - Generating certificates and keys ...
	I0401 19:37:51.565809   71168 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:37:51.565908   71168 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:37:51.566051   71168 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:37:51.566136   71168 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:37:51.566230   71168 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:37:51.566325   71168 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:37:51.566402   71168 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:37:51.566464   71168 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:37:51.566580   71168 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:37:51.566688   71168 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:37:51.566727   71168 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:37:51.566774   71168 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:37:51.566822   71168 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:37:51.566917   71168 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:37:51.567001   71168 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:37:51.567068   71168 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:37:51.567210   71168 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:37:51.567314   71168 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:37:51.567371   71168 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:37:51.567473   71168 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:37:48.504708   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:51.005355   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:51.569285   71168 out.go:204]   - Booting up control plane ...
	I0401 19:37:51.569394   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:37:51.569498   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:37:51.569568   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:37:51.569661   71168 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:37:51.569802   71168 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:37:51.569866   71168 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0401 19:37:51.569957   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.570195   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.570287   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.570514   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.570589   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.570769   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.570859   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.571033   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.571134   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.571342   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.571351   71168 kubeadm.go:309] 
	I0401 19:37:51.571394   71168 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0401 19:37:51.571453   71168 kubeadm.go:309] 		timed out waiting for the condition
	I0401 19:37:51.571475   71168 kubeadm.go:309] 
	I0401 19:37:51.571521   71168 kubeadm.go:309] 	This error is likely caused by:
	I0401 19:37:51.571558   71168 kubeadm.go:309] 		- The kubelet is not running
	I0401 19:37:51.571676   71168 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 19:37:51.571687   71168 kubeadm.go:309] 
	I0401 19:37:51.571824   71168 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 19:37:51.571880   71168 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0401 19:37:51.571921   71168 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0401 19:37:51.571931   71168 kubeadm.go:309] 
	I0401 19:37:51.572077   71168 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 19:37:51.572198   71168 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 19:37:51.572209   71168 kubeadm.go:309] 
	I0401 19:37:51.572359   71168 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 19:37:51.572477   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 19:37:51.572576   71168 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0401 19:37:51.572676   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 19:37:51.572731   71168 kubeadm.go:309] 
	W0401 19:37:51.572793   71168 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
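	The failed wait-control-plane phase above comes down to the kubelet never answering its health endpoint. The commands below are the same ones the kubeadm output suggests, collected into one sequence that could be run inside the node to tell a dead kubelet apart from a crashed control-plane container (the crio socket path is taken from the log; the ordering and CONTAINERID placeholder are a sketch, not something the test executed):

	    # is the kubelet unit running at all?
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet | tail -n 50
	    # the health endpoint kubeadm polls during [kubelet-check]
	    curl -sSL http://localhost:10248/healthz
	    # if the kubelet is up, look for a crashed control-plane container
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID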
	
	I0401 19:37:51.572851   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:37:52.428554   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:37:52.445151   71168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:37:52.456989   71168 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:37:52.457010   71168 kubeadm.go:156] found existing configuration files:
	
	I0401 19:37:52.457053   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:37:52.468305   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:37:52.468375   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:37:52.479305   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:37:52.489703   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:37:52.489753   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:37:52.501023   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:37:52.512418   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:37:52.512480   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:37:52.523850   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:37:52.534358   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:37:52.534425   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:37:52.546135   71168 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:37:52.779427   71168 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:37:52.997253   70284 pod_ready.go:81] duration metric: took 4m0.000092266s for pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace to be "Ready" ...
	E0401 19:37:52.997287   70284 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace to be "Ready" (will not retry!)
	I0401 19:37:52.997309   70284 pod_ready.go:38] duration metric: took 4m43.911595731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:37:52.997333   70284 kubeadm.go:591] duration metric: took 5m31.840082505s to restartPrimaryControlPlane
	W0401 19:37:52.997393   70284 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:37:52.997421   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:38:25.458760   70284 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.46129187s)
	I0401 19:38:25.458845   70284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:38:25.476633   70284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:38:25.487615   70284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:38:25.498590   70284 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:38:25.498616   70284 kubeadm.go:156] found existing configuration files:
	
	I0401 19:38:25.498701   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:38:25.509063   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:38:25.509128   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:38:25.519806   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:38:25.530433   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:38:25.530488   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:38:25.540979   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:38:25.550786   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:38:25.550847   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:38:25.561979   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:38:25.571832   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:38:25.571898   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
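	The four grep/rm pairs above implement a simple stale-config check: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init runs again. A compact sketch of the same logic as a shell loop (file names and endpoint copied from the log; the loop itself is illustrative, since minikube performs these steps from Go over SSH):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q 'https://control-plane.minikube.internal:8443' /etc/kubernetes/$f \
	        || sudo rm -f /etc/kubernetes/$f
	    done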
	I0401 19:38:25.582501   70284 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:38:25.646956   70284 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.0
	I0401 19:38:25.647046   70284 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:38:25.825328   70284 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:38:25.825459   70284 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:38:25.825574   70284 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:38:26.066201   70284 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:38:26.069071   70284 out.go:204]   - Generating certificates and keys ...
	I0401 19:38:26.069170   70284 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:38:26.069260   70284 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:38:26.069402   70284 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:38:26.069493   70284 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:38:26.069588   70284 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:38:26.069703   70284 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:38:26.069765   70284 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:38:26.069822   70284 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:38:26.069986   70284 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:38:26.070644   70284 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:38:26.071149   70284 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:38:26.071308   70284 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:38:26.204651   70284 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:38:26.368926   70284 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 19:38:26.586004   70284 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:38:26.710851   70284 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:38:26.858015   70284 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:38:26.858741   70284 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:38:26.863879   70284 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:38:26.865794   70284 out.go:204]   - Booting up control plane ...
	I0401 19:38:26.865898   70284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:38:26.865984   70284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:38:26.866081   70284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:38:26.886171   70284 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:38:26.887118   70284 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:38:26.887177   70284 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:38:27.021053   70284 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 19:38:27.021142   70284 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0401 19:38:28.023462   70284 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002303634s
	I0401 19:38:28.023549   70284 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 19:38:34.026967   70284 kubeadm.go:309] [api-check] The API server is healthy after 6.003391014s
	I0401 19:38:34.044095   70284 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 19:38:34.061716   70284 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 19:38:34.092708   70284 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 19:38:34.093037   70284 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-472858 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 19:38:34.111758   70284 kubeadm.go:309] [bootstrap-token] Using token: 45cmca.rj16278sw3ueq3us
	I0401 19:38:34.113211   70284 out.go:204]   - Configuring RBAC rules ...
	I0401 19:38:34.113333   70284 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 19:38:34.122292   70284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 19:38:34.133114   70284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 19:38:34.138441   70284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 19:38:34.143964   70284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 19:38:34.148675   70284 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 19:38:34.438167   70284 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 19:38:34.885250   70284 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 19:38:35.439990   70284 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 19:38:35.441439   70284 kubeadm.go:309] 
	I0401 19:38:35.441532   70284 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 19:38:35.441545   70284 kubeadm.go:309] 
	I0401 19:38:35.441659   70284 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 19:38:35.441690   70284 kubeadm.go:309] 
	I0401 19:38:35.441752   70284 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 19:38:35.441845   70284 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 19:38:35.441930   70284 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 19:38:35.441938   70284 kubeadm.go:309] 
	I0401 19:38:35.442014   70284 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 19:38:35.442028   70284 kubeadm.go:309] 
	I0401 19:38:35.442067   70284 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 19:38:35.442073   70284 kubeadm.go:309] 
	I0401 19:38:35.442120   70284 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 19:38:35.442186   70284 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 19:38:35.442295   70284 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 19:38:35.442307   70284 kubeadm.go:309] 
	I0401 19:38:35.442426   70284 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 19:38:35.442552   70284 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 19:38:35.442565   70284 kubeadm.go:309] 
	I0401 19:38:35.442643   70284 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 45cmca.rj16278sw3ueq3us \
	I0401 19:38:35.442766   70284 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 \
	I0401 19:38:35.442803   70284 kubeadm.go:309] 	--control-plane 
	I0401 19:38:35.442813   70284 kubeadm.go:309] 
	I0401 19:38:35.442922   70284 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 19:38:35.442936   70284 kubeadm.go:309] 
	I0401 19:38:35.443008   70284 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 45cmca.rj16278sw3ueq3us \
	I0401 19:38:35.443097   70284 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 
	I0401 19:38:35.443436   70284 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:38:35.443530   70284 cni.go:84] Creating CNI manager for ""
	I0401 19:38:35.443546   70284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:38:35.445089   70284 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:38:35.446328   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:38:35.459788   70284 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
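	The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI config minikube generates for the kvm2 + crio combination. The log does not show its contents; the heredoc below writes an illustrative bridge/portmap conflist of the same general shape to a scratch path so the structure is visible (every field value here, including the pod subnet, is an assumption rather than the file the test produced):

	    cat <<'EOF' | sudo tee /tmp/1-k8s.conflist.example >/dev/null
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF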
	I0401 19:38:35.486202   70284 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 19:38:35.486300   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:35.486308   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-472858 minikube.k8s.io/updated_at=2024_04_01T19_38_35_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=no-preload-472858 minikube.k8s.io/primary=true
	I0401 19:38:35.700677   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:35.731567   70284 ops.go:34] apiserver oom_adj: -16
	I0401 19:38:36.200955   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:36.701003   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:37.201632   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:37.700719   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:38.201316   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:38.701334   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:39.201609   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:39.701034   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:40.201771   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:40.700786   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:41.201750   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:41.701709   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:42.201682   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:42.700838   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:43.201123   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:43.701587   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:44.200860   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:44.700795   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:45.200850   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:45.701273   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:46.201701   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:46.701450   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:47.201496   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:47.701351   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:47.800239   70284 kubeadm.go:1107] duration metric: took 12.313994383s to wait for elevateKubeSystemPrivileges
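	The repeated `kubectl get sa default` calls above appear to be a poll for the default service account to exist in the freshly initialized cluster, which is what the elevateKubeSystemPrivileges duration metric measures; the minikube-rbac clusterrolebinding created earlier in the log targets that account. Roughly the same wait, expressed as a shell loop against the binary and kubeconfig paths from the log (the loop form and sleep interval are illustrative):

	    until sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done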
	W0401 19:38:47.800287   70284 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 19:38:47.800298   70284 kubeadm.go:393] duration metric: took 6m26.705086714s to StartCluster
	I0401 19:38:47.800320   70284 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:38:47.800410   70284 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:38:47.802818   70284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:38:47.803132   70284 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:38:47.805445   70284 out.go:177] * Verifying Kubernetes components...
	I0401 19:38:47.803273   70284 config.go:182] Loaded profile config "no-preload-472858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0401 19:38:47.803252   70284 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 19:38:47.806734   70284 addons.go:69] Setting storage-provisioner=true in profile "no-preload-472858"
	I0401 19:38:47.806761   70284 addons.go:69] Setting default-storageclass=true in profile "no-preload-472858"
	I0401 19:38:47.806774   70284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:38:47.806777   70284 addons.go:69] Setting metrics-server=true in profile "no-preload-472858"
	I0401 19:38:47.806802   70284 addons.go:234] Setting addon metrics-server=true in "no-preload-472858"
	W0401 19:38:47.806815   70284 addons.go:243] addon metrics-server should already be in state true
	I0401 19:38:47.806850   70284 host.go:66] Checking if "no-preload-472858" exists ...
	I0401 19:38:47.806802   70284 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-472858"
	I0401 19:38:47.806768   70284 addons.go:234] Setting addon storage-provisioner=true in "no-preload-472858"
	W0401 19:38:47.807229   70284 addons.go:243] addon storage-provisioner should already be in state true
	I0401 19:38:47.807257   70284 host.go:66] Checking if "no-preload-472858" exists ...
	I0401 19:38:47.807289   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.807332   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.807340   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.807366   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.807620   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.807690   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.823665   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38305
	I0401 19:38:47.823684   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35487
	I0401 19:38:47.824174   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.824205   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.824709   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.824732   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.824838   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.824867   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.825094   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.825276   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.825700   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.825746   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.825844   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.825866   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.826415   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38845
	I0401 19:38:47.826845   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.827305   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.827330   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.827800   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.828004   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:38:47.831735   70284 addons.go:234] Setting addon default-storageclass=true in "no-preload-472858"
	W0401 19:38:47.831760   70284 addons.go:243] addon default-storageclass should already be in state true
	I0401 19:38:47.831791   70284 host.go:66] Checking if "no-preload-472858" exists ...
	I0401 19:38:47.832170   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.832218   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.842050   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42037
	I0401 19:38:47.842479   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.842963   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.842983   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.843354   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.843513   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:38:47.845360   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:38:47.845430   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33357
	I0401 19:38:47.847622   70284 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:38:47.845959   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.847568   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38785
	I0401 19:38:47.849255   70284 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:38:47.849283   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 19:38:47.849303   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:38:47.849356   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.849524   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.849536   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.850173   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.850228   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.850238   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.850362   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:38:47.851206   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.851773   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.851803   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.852404   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:38:47.854167   70284 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 19:38:47.853141   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.853926   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:38:47.855729   70284 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 19:38:47.855746   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 19:38:47.855763   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:38:47.855728   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:38:47.855809   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.855854   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:38:47.856000   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:38:47.856160   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:38:47.858726   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.859782   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:38:47.859826   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.859948   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:38:47.860138   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:38:47.860310   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:38:47.860593   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:38:47.870182   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34517
	I0401 19:38:47.870616   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.871182   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.871203   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.871561   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.871947   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:38:47.873606   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:38:47.873931   70284 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 19:38:47.873949   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 19:38:47.873967   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:38:47.876826   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.877259   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:38:47.877286   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.877389   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:38:47.877672   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:38:47.877816   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:38:47.877974   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:38:48.053731   70284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:38:48.081160   70284 node_ready.go:35] waiting up to 6m0s for node "no-preload-472858" to be "Ready" ...
	I0401 19:38:48.107976   70284 node_ready.go:49] node "no-preload-472858" has status "Ready":"True"
	I0401 19:38:48.107998   70284 node_ready.go:38] duration metric: took 26.793115ms for node "no-preload-472858" to be "Ready" ...
	I0401 19:38:48.108009   70284 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:38:48.115968   70284 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.158349   70284 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 19:38:48.158383   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 19:38:48.166047   70284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 19:38:48.181902   70284 pod_ready.go:92] pod "etcd-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:38:48.181922   70284 pod_ready.go:81] duration metric: took 65.920299ms for pod "etcd-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.181935   70284 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.199372   70284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:38:48.232110   70284 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 19:38:48.232140   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 19:38:48.251891   70284 pod_ready.go:92] pod "kube-apiserver-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:38:48.251914   70284 pod_ready.go:81] duration metric: took 69.970077ms for pod "kube-apiserver-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.251929   70284 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.309605   70284 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:38:48.309627   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 19:38:48.325907   70284 pod_ready.go:92] pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:38:48.325928   70284 pod_ready.go:81] duration metric: took 73.991711ms for pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.325938   70284 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.373418   70284 pod_ready.go:92] pod "kube-scheduler-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:38:48.373448   70284 pod_ready.go:81] duration metric: took 47.503272ms for pod "kube-scheduler-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.373456   70284 pod_ready.go:38] duration metric: took 265.436317ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:38:48.373479   70284 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:38:48.373543   70284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:38:48.396444   70284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:38:48.564838   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.564860   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.565180   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:48.565197   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.565227   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.565247   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.565258   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.565489   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.565506   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.579332   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.579355   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.579599   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:48.579637   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.579645   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.884887   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.884920   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.884938   70284 api_server.go:72] duration metric: took 1.08176251s to wait for apiserver process to appear ...
	I0401 19:38:48.884958   70284 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:38:48.885018   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:38:48.885232   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.885252   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.885260   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.885269   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.885236   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:48.885519   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.887182   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.885555   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:48.895737   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 200:
	ok
	I0401 19:38:48.899521   70284 api_server.go:141] control plane version: v1.30.0-rc.0
	I0401 19:38:48.899539   70284 api_server.go:131] duration metric: took 14.574989ms to wait for apiserver health ...
	I0401 19:38:48.899547   70284 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:38:48.914064   70284 system_pods.go:59] 8 kube-system pods found
	I0401 19:38:48.914090   70284 system_pods.go:61] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:48.914106   70284 system_pods.go:61] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:48.914112   70284 system_pods.go:61] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:48.914117   70284 system_pods.go:61] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:48.914122   70284 system_pods.go:61] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:48.914126   70284 system_pods.go:61] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:48.914134   70284 system_pods.go:61] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:48.914138   70284 system_pods.go:61] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending
	I0401 19:38:48.914146   70284 system_pods.go:74] duration metric: took 14.594359ms to wait for pod list to return data ...
	I0401 19:38:48.914156   70284 default_sa.go:34] waiting for default service account to be created ...
	I0401 19:38:48.924790   70284 default_sa.go:45] found service account: "default"
	I0401 19:38:48.924814   70284 default_sa.go:55] duration metric: took 10.649887ms for default service account to be created ...
	I0401 19:38:48.924825   70284 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 19:38:48.930993   70284 system_pods.go:86] 8 kube-system pods found
	I0401 19:38:48.931020   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:48.931037   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:48.931047   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:48.931056   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:48.931066   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:48.931074   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:48.931089   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:48.931098   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:48.931117   70284 retry.go:31] will retry after 297.45527ms: missing components: kube-dns, kube-proxy
	I0401 19:38:49.123999   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:49.124019   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:49.124344   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:49.124394   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:49.124406   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:49.124414   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:49.124356   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:49.124627   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:49.124661   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:49.124677   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:49.124690   70284 addons.go:470] Verifying addon metrics-server=true in "no-preload-472858"
	I0401 19:38:49.127415   70284 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0401 19:38:49.129047   70284 addons.go:505] duration metric: took 1.325796036s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0401 19:38:49.236094   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:49.236127   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.236136   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.236145   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:49.236152   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:49.236159   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:49.236168   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:49.236175   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:49.236185   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:49.236198   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:49.236218   70284 retry.go:31] will retry after 287.299528ms: missing components: kube-dns, kube-proxy
	I0401 19:38:49.530606   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:49.530643   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.530654   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.530663   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:49.530670   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:49.530678   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:49.530687   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:49.530697   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:49.530711   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:49.530721   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:49.530744   70284 retry.go:31] will retry after 435.286919ms: missing components: kube-dns, kube-proxy
	I0401 19:38:49.974049   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:49.974090   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.974103   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.974113   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:49.974121   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:49.974128   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:49.974142   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:49.974153   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:49.974168   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:49.974181   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:49.974203   70284 retry.go:31] will retry after 577.959209ms: missing components: kube-dns, kube-proxy
	I0401 19:38:50.558750   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:50.558780   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:50.558787   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:50.558795   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:50.558805   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:50.558812   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:50.558820   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:50.558833   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:50.558840   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:50.558846   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:50.558863   70284 retry.go:31] will retry after 723.380101ms: missing components: kube-dns, kube-proxy
	I0401 19:38:51.291450   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:51.291487   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:51.291498   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Running
	I0401 19:38:51.291508   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:51.291514   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:51.291521   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:51.291527   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Running
	I0401 19:38:51.291532   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:51.291543   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:51.291551   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Running
	I0401 19:38:51.291559   70284 system_pods.go:126] duration metric: took 2.366728733s to wait for k8s-apps to be running ...
	I0401 19:38:51.291576   70284 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 19:38:51.291622   70284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:38:51.310224   70284 system_svc.go:56] duration metric: took 18.63923ms WaitForService to wait for kubelet
	I0401 19:38:51.310250   70284 kubeadm.go:576] duration metric: took 3.50708191s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:38:51.310269   70284 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:38:51.312899   70284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:38:51.312919   70284 node_conditions.go:123] node cpu capacity is 2
	I0401 19:38:51.312930   70284 node_conditions.go:105] duration metric: took 2.654739ms to run NodePressure ...
	I0401 19:38:51.312945   70284 start.go:240] waiting for startup goroutines ...
	I0401 19:38:51.312958   70284 start.go:245] waiting for cluster config update ...
	I0401 19:38:51.312985   70284 start.go:254] writing updated cluster config ...
	I0401 19:38:51.313269   70284 ssh_runner.go:195] Run: rm -f paused
	I0401 19:38:51.365041   70284 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.0 (minor skew: 1)
	I0401 19:38:51.367173   70284 out.go:177] * Done! kubectl is now configured to use "no-preload-472858" cluster and "default" namespace by default
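The no-preload start above ends with the default-storageclass, storage-provisioner and metrics-server addons enabled and kubectl switched to the new context. A quick manual spot-check of that state (a sketch outside the test run, assuming the kubectl 1.29.3 noted in the version line is on the PATH and that the metrics-server addon keeps its default Deployment name):

	kubectl --context no-preload-472858 get pods -n kube-system
	kubectl --context no-preload-472858 get deployment metrics-server -n kube-system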
	I0401 19:39:48.856665   71168 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 19:39:48.856779   71168 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0401 19:39:48.858840   71168 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0401 19:39:48.858896   71168 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:39:48.858987   71168 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:39:48.859122   71168 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:39:48.859222   71168 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:39:48.859314   71168 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:39:48.861104   71168 out.go:204]   - Generating certificates and keys ...
	I0401 19:39:48.861202   71168 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:39:48.861277   71168 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:39:48.861381   71168 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:39:48.861492   71168 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:39:48.861596   71168 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:39:48.861699   71168 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:39:48.861791   71168 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:39:48.861897   71168 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:39:48.862009   71168 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:39:48.862118   71168 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:39:48.862176   71168 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:39:48.862260   71168 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:39:48.862338   71168 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:39:48.862420   71168 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:39:48.862480   71168 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:39:48.862527   71168 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:39:48.862618   71168 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:39:48.862693   71168 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:39:48.862734   71168 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:39:48.862804   71168 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:39:48.864199   71168 out.go:204]   - Booting up control plane ...
	I0401 19:39:48.864291   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:39:48.864359   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:39:48.864420   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:39:48.864504   71168 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:39:48.864712   71168 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:39:48.864788   71168 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0401 19:39:48.864871   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865069   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.865153   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865344   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.865453   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865674   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.865755   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865989   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.866095   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.866269   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.866285   71168 kubeadm.go:309] 
	I0401 19:39:48.866343   71168 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0401 19:39:48.866402   71168 kubeadm.go:309] 		timed out waiting for the condition
	I0401 19:39:48.866414   71168 kubeadm.go:309] 
	I0401 19:39:48.866458   71168 kubeadm.go:309] 	This error is likely caused by:
	I0401 19:39:48.866506   71168 kubeadm.go:309] 		- The kubelet is not running
	I0401 19:39:48.866651   71168 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 19:39:48.866665   71168 kubeadm.go:309] 
	I0401 19:39:48.866816   71168 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 19:39:48.866865   71168 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0401 19:39:48.866895   71168 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0401 19:39:48.866901   71168 kubeadm.go:309] 
	I0401 19:39:48.866989   71168 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 19:39:48.867061   71168 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 19:39:48.867070   71168 kubeadm.go:309] 
	I0401 19:39:48.867194   71168 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 19:39:48.867327   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 19:39:48.867417   71168 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0401 19:39:48.867526   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 19:39:48.867555   71168 kubeadm.go:309] 
	I0401 19:39:48.867633   71168 kubeadm.go:393] duration metric: took 7m58.404831893s to StartCluster
	I0401 19:39:48.867702   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:39:48.867764   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:39:48.922329   71168 cri.go:89] found id: ""
	I0401 19:39:48.922359   71168 logs.go:276] 0 containers: []
	W0401 19:39:48.922369   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:39:48.922377   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:39:48.922435   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:39:48.966212   71168 cri.go:89] found id: ""
	I0401 19:39:48.966235   71168 logs.go:276] 0 containers: []
	W0401 19:39:48.966243   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:39:48.966248   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:39:48.966309   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:39:49.015141   71168 cri.go:89] found id: ""
	I0401 19:39:49.015171   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.015182   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:39:49.015189   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:39:49.015249   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:39:49.053042   71168 cri.go:89] found id: ""
	I0401 19:39:49.053067   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.053077   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:39:49.053085   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:39:49.053144   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:39:49.093880   71168 cri.go:89] found id: ""
	I0401 19:39:49.093906   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.093914   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:39:49.093923   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:39:49.093976   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:39:49.129730   71168 cri.go:89] found id: ""
	I0401 19:39:49.129752   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.129760   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:39:49.129766   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:39:49.129818   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:39:49.171075   71168 cri.go:89] found id: ""
	I0401 19:39:49.171107   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.171118   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:39:49.171125   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:39:49.171204   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:39:49.208279   71168 cri.go:89] found id: ""
	I0401 19:39:49.208308   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.208319   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:39:49.208330   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:39:49.208345   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:39:49.294128   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:39:49.294148   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:39:49.294162   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:39:49.400930   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:39:49.400963   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:39:49.443111   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:39:49.443140   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:39:49.501382   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:39:49.501417   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0401 19:39:49.516418   71168 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0401 19:39:49.516461   71168 out.go:239] * 
	W0401 19:39:49.516521   71168 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 19:39:49.516591   71168 out.go:239] * 
	W0401 19:39:49.517377   71168 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 19:39:49.520389   71168 out.go:177] 
	W0401 19:39:49.521593   71168 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 19:39:49.521639   71168 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0401 19:39:49.521686   71168 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0401 19:39:49.523181   71168 out.go:177] 
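The old-k8s-version (v1.20.0) start above fails in wait-control-plane because the kubelet never answers on :10248, and minikube's closing suggestion is to retry with a kubelet cgroup-driver override. A minimal sketch of acting on that suggestion — the profile name is a hypothetical placeholder, since it is not shown in this excerpt; the flag and Kubernetes version come from the log itself:

	# collect logs for an issue report, as the box above asks (<profile> is hypothetical)
	minikube logs --file=logs.txt -p <profile>
	# retry the start with the suggested kubelet cgroup-driver override
	minikube start -p <profile> --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd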
	
	
	==> CRI-O <==
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.135810340Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712000723135787334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9f08d65a-cbd5-402e-8470-a67fea9cc5b9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.136311627Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a338b383-18ff-4881-9178-5292dc86e6b1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.136392606Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a338b383-18ff-4881-9178-5292dc86e6b1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.136573486Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b8e68f339de5aabe221346199d822c1b5ddea21d7db127a33649a98290d7828,PodSandboxId:05a1ecbece859bb687f1f7c87b81d94bcc34d8c4cfc2ce964a1af6767cac0980,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000178213437449,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-fx6hf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c07b740-3374-4a54-a786-784b23ec6b83,},Annotations:map[string]string{io.kubernetes.container.hash: 4d9197d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48da0daaed361d9b2ac31516ceefe1d139fefed8bd29120857dbb518cd0b37c,PodSandboxId:7efef89ca0ece46a2d45c5f3e7b1fbbc0b0b1c7bc7165d5b391eb8c6ca6160eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000178113220801,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hwbw6,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 7b12145a-2689-47e9-9724-d80790ed079c,},Annotations:map[string]string{io.kubernetes.container.hash: 16a80cfd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0843b91761c5daeaab1269368aaf342feaccd94a7c047a6e1a440c82a308249f,PodSandboxId:eb961299b2b5969ecf7d07ffcee4669a43569f76212f1b554ad7365a69bd200f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNI
NG,CreatedAt:1712000177901575392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mbs4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffccbae0-7538-4a75-a6ce-afce49865f07,},Annotations:map[string]string{io.kubernetes.container.hash: 5cb0570b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3945afdef36c0a42c6c2597baed81cb27663f89912da17b7c026add868d0b02e,PodSandboxId:b10c1f3540bdb9f2555f329c4806c77af88fe248106bc9ab2f5e036d610f0d20,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17120001774
49418688,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcff0d1d-a555-4b25-9aa5-7ab1188c21fd,},Annotations:map[string]string{io.kubernetes.container.hash: b997ae06,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ca30006b9112dfd948bf271564137d17fdc5584ca52aa74b709acffa7651b9,PodSandboxId:55c2751e91071735f77e489fc672e2c953faa6474ab08045e6a8bc00dd36745f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712000156698353392,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cc3c8c20214dafbf32ab81b034b1d9,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af00be29964479e217c8d9c6a3de0ed6a2b2ca3f03344c9b1ef869b474f8161,PodSandboxId:0774ff5dbf1c87860057ec0b08579f55d5a695f3cdf274366d9574195abae87f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712000156695296711,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e55eef5d459380400f8def1b6fef235c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc65a14cc9d3896ed4a0aab8e1ef8215bf34c52e9af1d0b381a685d67ba785b6,PodSandboxId:64afbc2eccd701da86afdf0443707ab70c3710cd95c3fd6a9452cd8c2f580a8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712000156711892515,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21d2ec505bee6951e4280a8eb9da666,},Annotations:map[string]string{io.kubernetes.container.hash: f62b4a34,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:025ab47445c0ce9c1bbf3521e04360d8e449f5a9cc3b9cfc32faadb0b088b625,PodSandboxId:8ce2d8941be452051ac31da325ca6913ada7cc1a63bba38f822adefe1ae158ab,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712000156580774809,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48e9bdc35fa015990becffe532986ba,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8fd1d4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a338b383-18ff-4881-9178-5292dc86e6b1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.182105480Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b1b85479-c9f8-46bb-8541-2faf56657324 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.182212605Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b1b85479-c9f8-46bb-8541-2faf56657324 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.183490841Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e247698-6048-49c0-bf3c-5bc32f609086 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.184174617Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712000723184143585,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e247698-6048-49c0-bf3c-5bc32f609086 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.184892752Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50676c84-fe2c-4951-82d0-e4e6f37c42a5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.184944731Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50676c84-fe2c-4951-82d0-e4e6f37c42a5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.185132535Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b8e68f339de5aabe221346199d822c1b5ddea21d7db127a33649a98290d7828,PodSandboxId:05a1ecbece859bb687f1f7c87b81d94bcc34d8c4cfc2ce964a1af6767cac0980,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000178213437449,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-fx6hf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c07b740-3374-4a54-a786-784b23ec6b83,},Annotations:map[string]string{io.kubernetes.container.hash: 4d9197d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48da0daaed361d9b2ac31516ceefe1d139fefed8bd29120857dbb518cd0b37c,PodSandboxId:7efef89ca0ece46a2d45c5f3e7b1fbbc0b0b1c7bc7165d5b391eb8c6ca6160eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000178113220801,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hwbw6,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 7b12145a-2689-47e9-9724-d80790ed079c,},Annotations:map[string]string{io.kubernetes.container.hash: 16a80cfd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0843b91761c5daeaab1269368aaf342feaccd94a7c047a6e1a440c82a308249f,PodSandboxId:eb961299b2b5969ecf7d07ffcee4669a43569f76212f1b554ad7365a69bd200f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNI
NG,CreatedAt:1712000177901575392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mbs4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffccbae0-7538-4a75-a6ce-afce49865f07,},Annotations:map[string]string{io.kubernetes.container.hash: 5cb0570b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3945afdef36c0a42c6c2597baed81cb27663f89912da17b7c026add868d0b02e,PodSandboxId:b10c1f3540bdb9f2555f329c4806c77af88fe248106bc9ab2f5e036d610f0d20,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17120001774
49418688,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcff0d1d-a555-4b25-9aa5-7ab1188c21fd,},Annotations:map[string]string{io.kubernetes.container.hash: b997ae06,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ca30006b9112dfd948bf271564137d17fdc5584ca52aa74b709acffa7651b9,PodSandboxId:55c2751e91071735f77e489fc672e2c953faa6474ab08045e6a8bc00dd36745f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712000156698353392,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cc3c8c20214dafbf32ab81b034b1d9,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af00be29964479e217c8d9c6a3de0ed6a2b2ca3f03344c9b1ef869b474f8161,PodSandboxId:0774ff5dbf1c87860057ec0b08579f55d5a695f3cdf274366d9574195abae87f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712000156695296711,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e55eef5d459380400f8def1b6fef235c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc65a14cc9d3896ed4a0aab8e1ef8215bf34c52e9af1d0b381a685d67ba785b6,PodSandboxId:64afbc2eccd701da86afdf0443707ab70c3710cd95c3fd6a9452cd8c2f580a8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712000156711892515,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21d2ec505bee6951e4280a8eb9da666,},Annotations:map[string]string{io.kubernetes.container.hash: f62b4a34,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:025ab47445c0ce9c1bbf3521e04360d8e449f5a9cc3b9cfc32faadb0b088b625,PodSandboxId:8ce2d8941be452051ac31da325ca6913ada7cc1a63bba38f822adefe1ae158ab,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712000156580774809,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48e9bdc35fa015990becffe532986ba,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8fd1d4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50676c84-fe2c-4951-82d0-e4e6f37c42a5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.234383119Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3790733f-9351-40e2-9cac-b444c7a714d0 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.234456306Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3790733f-9351-40e2-9cac-b444c7a714d0 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.236934630Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c96aa398-0678-4f73-b56f-bf0117495136 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.237556043Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712000723237518180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c96aa398-0678-4f73-b56f-bf0117495136 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.238568135Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2918198-5ed8-43aa-9ac6-03c4d900f7f6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.238640881Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2918198-5ed8-43aa-9ac6-03c4d900f7f6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.241086082Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b8e68f339de5aabe221346199d822c1b5ddea21d7db127a33649a98290d7828,PodSandboxId:05a1ecbece859bb687f1f7c87b81d94bcc34d8c4cfc2ce964a1af6767cac0980,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000178213437449,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-fx6hf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c07b740-3374-4a54-a786-784b23ec6b83,},Annotations:map[string]string{io.kubernetes.container.hash: 4d9197d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48da0daaed361d9b2ac31516ceefe1d139fefed8bd29120857dbb518cd0b37c,PodSandboxId:7efef89ca0ece46a2d45c5f3e7b1fbbc0b0b1c7bc7165d5b391eb8c6ca6160eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000178113220801,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hwbw6,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 7b12145a-2689-47e9-9724-d80790ed079c,},Annotations:map[string]string{io.kubernetes.container.hash: 16a80cfd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0843b91761c5daeaab1269368aaf342feaccd94a7c047a6e1a440c82a308249f,PodSandboxId:eb961299b2b5969ecf7d07ffcee4669a43569f76212f1b554ad7365a69bd200f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNI
NG,CreatedAt:1712000177901575392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mbs4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffccbae0-7538-4a75-a6ce-afce49865f07,},Annotations:map[string]string{io.kubernetes.container.hash: 5cb0570b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3945afdef36c0a42c6c2597baed81cb27663f89912da17b7c026add868d0b02e,PodSandboxId:b10c1f3540bdb9f2555f329c4806c77af88fe248106bc9ab2f5e036d610f0d20,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17120001774
49418688,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcff0d1d-a555-4b25-9aa5-7ab1188c21fd,},Annotations:map[string]string{io.kubernetes.container.hash: b997ae06,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ca30006b9112dfd948bf271564137d17fdc5584ca52aa74b709acffa7651b9,PodSandboxId:55c2751e91071735f77e489fc672e2c953faa6474ab08045e6a8bc00dd36745f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712000156698353392,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cc3c8c20214dafbf32ab81b034b1d9,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af00be29964479e217c8d9c6a3de0ed6a2b2ca3f03344c9b1ef869b474f8161,PodSandboxId:0774ff5dbf1c87860057ec0b08579f55d5a695f3cdf274366d9574195abae87f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712000156695296711,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e55eef5d459380400f8def1b6fef235c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc65a14cc9d3896ed4a0aab8e1ef8215bf34c52e9af1d0b381a685d67ba785b6,PodSandboxId:64afbc2eccd701da86afdf0443707ab70c3710cd95c3fd6a9452cd8c2f580a8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712000156711892515,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21d2ec505bee6951e4280a8eb9da666,},Annotations:map[string]string{io.kubernetes.container.hash: f62b4a34,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:025ab47445c0ce9c1bbf3521e04360d8e449f5a9cc3b9cfc32faadb0b088b625,PodSandboxId:8ce2d8941be452051ac31da325ca6913ada7cc1a63bba38f822adefe1ae158ab,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712000156580774809,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48e9bdc35fa015990becffe532986ba,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8fd1d4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2918198-5ed8-43aa-9ac6-03c4d900f7f6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.286262457Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=316fe3fb-81d9-4326-89e4-bf06a7125b6e name=/runtime.v1.RuntimeService/Version
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.286384580Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=316fe3fb-81d9-4326-89e4-bf06a7125b6e name=/runtime.v1.RuntimeService/Version
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.288019548Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e8d98a37-1026-43b0-9ecd-c560993c85bd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.288466309Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712000723288430888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e8d98a37-1026-43b0-9ecd-c560993c85bd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.289151712Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=78edef59-1c85-4e26-97d4-112e78d680d0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.289217762Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=78edef59-1c85-4e26-97d4-112e78d680d0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:45:23 embed-certs-882095 crio[694]: time="2024-04-01 19:45:23.289420050Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b8e68f339de5aabe221346199d822c1b5ddea21d7db127a33649a98290d7828,PodSandboxId:05a1ecbece859bb687f1f7c87b81d94bcc34d8c4cfc2ce964a1af6767cac0980,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000178213437449,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-fx6hf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c07b740-3374-4a54-a786-784b23ec6b83,},Annotations:map[string]string{io.kubernetes.container.hash: 4d9197d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48da0daaed361d9b2ac31516ceefe1d139fefed8bd29120857dbb518cd0b37c,PodSandboxId:7efef89ca0ece46a2d45c5f3e7b1fbbc0b0b1c7bc7165d5b391eb8c6ca6160eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000178113220801,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hwbw6,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 7b12145a-2689-47e9-9724-d80790ed079c,},Annotations:map[string]string{io.kubernetes.container.hash: 16a80cfd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0843b91761c5daeaab1269368aaf342feaccd94a7c047a6e1a440c82a308249f,PodSandboxId:eb961299b2b5969ecf7d07ffcee4669a43569f76212f1b554ad7365a69bd200f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNI
NG,CreatedAt:1712000177901575392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mbs4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffccbae0-7538-4a75-a6ce-afce49865f07,},Annotations:map[string]string{io.kubernetes.container.hash: 5cb0570b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3945afdef36c0a42c6c2597baed81cb27663f89912da17b7c026add868d0b02e,PodSandboxId:b10c1f3540bdb9f2555f329c4806c77af88fe248106bc9ab2f5e036d610f0d20,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17120001774
49418688,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcff0d1d-a555-4b25-9aa5-7ab1188c21fd,},Annotations:map[string]string{io.kubernetes.container.hash: b997ae06,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ca30006b9112dfd948bf271564137d17fdc5584ca52aa74b709acffa7651b9,PodSandboxId:55c2751e91071735f77e489fc672e2c953faa6474ab08045e6a8bc00dd36745f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712000156698353392,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cc3c8c20214dafbf32ab81b034b1d9,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af00be29964479e217c8d9c6a3de0ed6a2b2ca3f03344c9b1ef869b474f8161,PodSandboxId:0774ff5dbf1c87860057ec0b08579f55d5a695f3cdf274366d9574195abae87f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712000156695296711,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e55eef5d459380400f8def1b6fef235c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc65a14cc9d3896ed4a0aab8e1ef8215bf34c52e9af1d0b381a685d67ba785b6,PodSandboxId:64afbc2eccd701da86afdf0443707ab70c3710cd95c3fd6a9452cd8c2f580a8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712000156711892515,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21d2ec505bee6951e4280a8eb9da666,},Annotations:map[string]string{io.kubernetes.container.hash: f62b4a34,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:025ab47445c0ce9c1bbf3521e04360d8e449f5a9cc3b9cfc32faadb0b088b625,PodSandboxId:8ce2d8941be452051ac31da325ca6913ada7cc1a63bba38f822adefe1ae158ab,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712000156580774809,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48e9bdc35fa015990becffe532986ba,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8fd1d4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=78edef59-1c85-4e26-97d4-112e78d680d0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6b8e68f339de5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   05a1ecbece859       coredns-76f75df574-fx6hf
	b48da0daaed36       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   7efef89ca0ece       coredns-76f75df574-hwbw6
	0843b91761c5d       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   9 minutes ago       Running             kube-proxy                0                   eb961299b2b59       kube-proxy-mbs4m
	3945afdef36c0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   b10c1f3540bdb       storage-provisioner
	cc65a14cc9d38       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   9 minutes ago       Running             kube-apiserver            2                   64afbc2eccd70       kube-apiserver-embed-certs-882095
	68ca30006b911       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   9 minutes ago       Running             kube-scheduler            2                   55c2751e91071       kube-scheduler-embed-certs-882095
	6af00be299644       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   9 minutes ago       Running             kube-controller-manager   2                   0774ff5dbf1c8       kube-controller-manager-embed-certs-882095
	025ab47445c0c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   8ce2d8941be45       etcd-embed-certs-882095
	
	
	==> coredns [6b8e68f339de5aabe221346199d822c1b5ddea21d7db127a33649a98290d7828] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b48da0daaed361d9b2ac31516ceefe1d139fefed8bd29120857dbb518cd0b37c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-882095
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-882095
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=embed-certs-882095
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_01T19_36_02_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 19:35:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-882095
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 19:45:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 19:41:29 +0000   Mon, 01 Apr 2024 19:35:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 19:41:29 +0000   Mon, 01 Apr 2024 19:35:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 19:41:29 +0000   Mon, 01 Apr 2024 19:35:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 19:41:29 +0000   Mon, 01 Apr 2024 19:36:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.190
	  Hostname:    embed-certs-882095
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 806f0049a29b4e6f9f7bd026d87d4347
	  System UUID:                806f0049-a29b-4e6f-9f7b-d026d87d4347
	  Boot ID:                    fb23a3fc-e023-4508-a5f4-6fc43a813270
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-fx6hf                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 coredns-76f75df574-hwbw6                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-embed-certs-882095                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-embed-certs-882095             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-embed-certs-882095    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-mbs4m                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-embed-certs-882095             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-57f55c9bc5-dktr6               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m5s   kube-proxy       
	  Normal  Starting                 9m21s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m21s  kubelet          Node embed-certs-882095 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s  kubelet          Node embed-certs-882095 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s  kubelet          Node embed-certs-882095 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m21s  kubelet          Node embed-certs-882095 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m21s  kubelet          Node embed-certs-882095 status is now: NodeReady
	  Normal  RegisteredNode           9m8s   node-controller  Node embed-certs-882095 event: Registered Node embed-certs-882095 in Controller
	
	
	==> dmesg <==
	[  +0.051843] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042462] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.592134] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.500482] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.684030] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.944730] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.058870] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071296] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[Apr 1 19:31] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.173357] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.329537] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +5.034549] systemd-fstab-generator[776]: Ignoring "noauto" option for root device
	[  +0.059087] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.384042] systemd-fstab-generator[901]: Ignoring "noauto" option for root device
	[  +4.670677] kauditd_printk_skb: 97 callbacks suppressed
	[  +8.380208] kauditd_printk_skb: 74 callbacks suppressed
	[Apr 1 19:35] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.956839] systemd-fstab-generator[3440]: Ignoring "noauto" option for root device
	[  +4.520898] kauditd_printk_skb: 53 callbacks suppressed
	[Apr 1 19:36] systemd-fstab-generator[3766]: Ignoring "noauto" option for root device
	[ +13.921687] systemd-fstab-generator[3961]: Ignoring "noauto" option for root device
	[  +0.084326] kauditd_printk_skb: 14 callbacks suppressed
	[Apr 1 19:37] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [025ab47445c0ce9c1bbf3521e04360d8e449f5a9cc3b9cfc32faadb0b088b625] <==
	{"level":"info","ts":"2024-04-01T19:35:56.983426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a switched to configuration voters=(15883684950483691418)"}
	{"level":"info","ts":"2024-04-01T19:35:56.984067Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"22dc5a3adec033ed","local-member-id":"dc6e2f4e9dcc679a","added-peer-id":"dc6e2f4e9dcc679a","added-peer-peer-urls":["https://192.168.39.190:2380"]}
	{"level":"info","ts":"2024-04-01T19:35:57.00829Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-01T19:35:57.00852Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"dc6e2f4e9dcc679a","initial-advertise-peer-urls":["https://192.168.39.190:2380"],"listen-peer-urls":["https://192.168.39.190:2380"],"advertise-client-urls":["https://192.168.39.190:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.190:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-01T19:35:57.00857Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-01T19:35:57.008796Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.190:2380"}
	{"level":"info","ts":"2024-04-01T19:35:57.008838Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.190:2380"}
	{"level":"info","ts":"2024-04-01T19:35:57.623806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-01T19:35:57.623938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-01T19:35:57.623994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a received MsgPreVoteResp from dc6e2f4e9dcc679a at term 1"}
	{"level":"info","ts":"2024-04-01T19:35:57.624027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a became candidate at term 2"}
	{"level":"info","ts":"2024-04-01T19:35:57.624051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a received MsgVoteResp from dc6e2f4e9dcc679a at term 2"}
	{"level":"info","ts":"2024-04-01T19:35:57.624078Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a became leader at term 2"}
	{"level":"info","ts":"2024-04-01T19:35:57.624107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dc6e2f4e9dcc679a elected leader dc6e2f4e9dcc679a at term 2"}
	{"level":"info","ts":"2024-04-01T19:35:57.629017Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"dc6e2f4e9dcc679a","local-member-attributes":"{Name:embed-certs-882095 ClientURLs:[https://192.168.39.190:2379]}","request-path":"/0/members/dc6e2f4e9dcc679a/attributes","cluster-id":"22dc5a3adec033ed","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-01T19:35:57.629805Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:35:57.630001Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T19:35:57.630579Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T19:35:57.636747Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-01T19:35:57.636794Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-01T19:35:57.636831Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"22dc5a3adec033ed","local-member-id":"dc6e2f4e9dcc679a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:35:57.636919Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:35:57.636939Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:35:57.638511Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.190:2379"}
	{"level":"info","ts":"2024-04-01T19:35:57.643881Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:45:23 up 14 min,  0 users,  load average: 0.16, 0.14, 0.14
	Linux embed-certs-882095 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cc65a14cc9d3896ed4a0aab8e1ef8215bf34c52e9af1d0b381a685d67ba785b6] <==
	I0401 19:39:18.092177       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:40:59.346128       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:40:59.346244       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0401 19:41:00.347053       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:41:00.347118       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0401 19:41:00.347127       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:41:00.347166       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:41:00.347210       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 19:41:00.348430       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:42:00.347771       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:42:00.347990       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0401 19:42:00.348024       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:42:00.348998       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:42:00.349038       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 19:42:00.349044       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:44:00.349079       1 handler_proxy.go:93] no RequestInfo found in the context
	W0401 19:44:00.349422       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:44:00.349448       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0401 19:44:00.349490       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0401 19:44:00.349552       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 19:44:00.351351       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [6af00be29964479e217c8d9c6a3de0ed6a2b2ca3f03344c9b1ef869b474f8161] <==
	I0401 19:39:46.234547       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:40:15.870307       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:40:16.242748       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:40:45.877308       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:40:46.252348       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:41:15.884262       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:41:16.261414       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:41:45.889152       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:41:46.269043       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0401 19:42:10.776987       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="424.065µs"
	E0401 19:42:15.895851       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:42:16.282173       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0401 19:42:23.772501       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="169.143µs"
	E0401 19:42:45.900543       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:42:46.290798       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:43:15.907058       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:43:16.299060       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:43:45.913407       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:43:46.307976       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:44:15.919587       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:44:16.317588       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:44:45.925185       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:44:46.328053       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:45:15.934031       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:45:16.337289       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0843b91761c5daeaab1269368aaf342feaccd94a7c047a6e1a440c82a308249f] <==
	I0401 19:36:18.422031       1 server_others.go:72] "Using iptables proxy"
	I0401 19:36:18.442888       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.190"]
	I0401 19:36:18.531240       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0401 19:36:18.531314       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 19:36:18.531342       1 server_others.go:168] "Using iptables Proxier"
	I0401 19:36:18.535404       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 19:36:18.535959       1 server.go:865] "Version info" version="v1.29.3"
	I0401 19:36:18.536012       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 19:36:18.538024       1 config.go:188] "Starting service config controller"
	I0401 19:36:18.538111       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0401 19:36:18.538183       1 config.go:97] "Starting endpoint slice config controller"
	I0401 19:36:18.538191       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0401 19:36:18.540405       1 config.go:315] "Starting node config controller"
	I0401 19:36:18.540453       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0401 19:36:18.638768       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0401 19:36:18.639029       1 shared_informer.go:318] Caches are synced for service config
	I0401 19:36:18.640660       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [68ca30006b9112dfd948bf271564137d17fdc5584ca52aa74b709acffa7651b9] <==
	W0401 19:35:59.421112       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0401 19:35:59.422823       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0401 19:35:59.422960       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 19:35:59.422997       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0401 19:35:59.423073       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0401 19:35:59.423104       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0401 19:35:59.423191       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 19:35:59.423226       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0401 19:35:59.423287       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 19:35:59.423315       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0401 19:35:59.431965       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 19:35:59.432011       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0401 19:36:00.282250       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0401 19:36:00.282311       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0401 19:36:00.320668       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 19:36:00.320753       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0401 19:36:00.369215       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 19:36:00.369269       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0401 19:36:00.370429       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 19:36:00.370474       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0401 19:36:00.443893       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 19:36:00.443949       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 19:36:00.475488       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0401 19:36:00.475545       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0401 19:36:03.185006       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 19:43:02 embed-certs-882095 kubelet[3773]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 19:43:02 embed-certs-882095 kubelet[3773]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 19:43:02 embed-certs-882095 kubelet[3773]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 19:43:02 embed-certs-882095 kubelet[3773]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 19:43:12 embed-certs-882095 kubelet[3773]: E0401 19:43:12.755095    3773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dktr6" podUID="c6adfcab-c746-4ad8-abe2-8b300389a4f5"
	Apr 01 19:43:26 embed-certs-882095 kubelet[3773]: E0401 19:43:26.752515    3773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dktr6" podUID="c6adfcab-c746-4ad8-abe2-8b300389a4f5"
	Apr 01 19:43:37 embed-certs-882095 kubelet[3773]: E0401 19:43:37.753018    3773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dktr6" podUID="c6adfcab-c746-4ad8-abe2-8b300389a4f5"
	Apr 01 19:43:50 embed-certs-882095 kubelet[3773]: E0401 19:43:50.753745    3773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dktr6" podUID="c6adfcab-c746-4ad8-abe2-8b300389a4f5"
	Apr 01 19:44:01 embed-certs-882095 kubelet[3773]: E0401 19:44:01.753979    3773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dktr6" podUID="c6adfcab-c746-4ad8-abe2-8b300389a4f5"
	Apr 01 19:44:02 embed-certs-882095 kubelet[3773]: E0401 19:44:02.830084    3773 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 19:44:02 embed-certs-882095 kubelet[3773]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 19:44:02 embed-certs-882095 kubelet[3773]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 19:44:02 embed-certs-882095 kubelet[3773]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 19:44:02 embed-certs-882095 kubelet[3773]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 19:44:13 embed-certs-882095 kubelet[3773]: E0401 19:44:13.754093    3773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dktr6" podUID="c6adfcab-c746-4ad8-abe2-8b300389a4f5"
	Apr 01 19:44:25 embed-certs-882095 kubelet[3773]: E0401 19:44:25.752926    3773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dktr6" podUID="c6adfcab-c746-4ad8-abe2-8b300389a4f5"
	Apr 01 19:44:38 embed-certs-882095 kubelet[3773]: E0401 19:44:38.752801    3773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dktr6" podUID="c6adfcab-c746-4ad8-abe2-8b300389a4f5"
	Apr 01 19:44:49 embed-certs-882095 kubelet[3773]: E0401 19:44:49.753229    3773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dktr6" podUID="c6adfcab-c746-4ad8-abe2-8b300389a4f5"
	Apr 01 19:45:02 embed-certs-882095 kubelet[3773]: E0401 19:45:02.831067    3773 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 19:45:02 embed-certs-882095 kubelet[3773]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 19:45:02 embed-certs-882095 kubelet[3773]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 19:45:02 embed-certs-882095 kubelet[3773]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 19:45:02 embed-certs-882095 kubelet[3773]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 19:45:04 embed-certs-882095 kubelet[3773]: E0401 19:45:04.755107    3773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dktr6" podUID="c6adfcab-c746-4ad8-abe2-8b300389a4f5"
	Apr 01 19:45:19 embed-certs-882095 kubelet[3773]: E0401 19:45:19.753258    3773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dktr6" podUID="c6adfcab-c746-4ad8-abe2-8b300389a4f5"
	
	
	==> storage-provisioner [3945afdef36c0a42c6c2597baed81cb27663f89912da17b7c026add868d0b02e] <==
	I0401 19:36:17.637798       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0401 19:36:17.664429       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0401 19:36:17.664589       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0401 19:36:17.701009       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0401 19:36:17.702297       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-882095_f78022a6-6657-40d3-a79c-93195cbd8c04!
	I0401 19:36:17.720534       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"34547e79-08e2-4192-8dc1-bf0d1269fb5d", APIVersion:"v1", ResourceVersion:"391", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-882095_f78022a6-6657-40d3-a79c-93195cbd8c04 became leader
	I0401 19:36:17.802638       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-882095_f78022a6-6657-40d3-a79c-93195cbd8c04!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-882095 -n embed-certs-882095
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-882095 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-dktr6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-882095 describe pod metrics-server-57f55c9bc5-dktr6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-882095 describe pod metrics-server-57f55c9bc5-dktr6: exit status 1 (63.56845ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-dktr6" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-882095 describe pod metrics-server-57f55c9bc5-dktr6: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.46s)
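Note on the kubelet errors in the log above: the repeated ImagePullBackOff for metrics-server is an expected side effect of this suite's configuration rather than a separate failure; the audit table later in this report records that the addon was enabled with --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain, so the image reference points at an unreachable registry. A minimal sketch of how to confirm this, assuming the embed-certs-882095 profile is still running and that the addon's Deployment is named metrics-server in kube-system (as the ReplicaSet name in the kube-controller-manager log suggests):
	kubectl --context embed-certs-882095 -n kube-system get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected, matching the kubelet pull errors above: fake.domain/registry.k8s.io/echoserver:1.4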

x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0401 19:36:44.321689   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt: no such file or directory
E0401 19:37:45.216536   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/client.crt: no such file or directory
E0401 19:37:59.323594   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/auto-408543/client.crt: no such file or directory
E0401 19:38:14.750723   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-734648 -n default-k8s-diff-port-734648
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-01 19:45:44.249753707 +0000 UTC m=+5973.705304895
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-734648 -n default-k8s-diff-port-734648
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-734648 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-734648 logs -n 25: (2.257823539s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p bridge-408543 sudo cat                              | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo                                  | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | containerd config dump                                 |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo                                  | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | systemctl status crio --all                            |                              |         |                |                     |                     |
	|         | --full --no-pager                                      |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo                                  | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo find                             | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo crio                             | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p bridge-408543                                       | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	| delete  | -p                                                     | disable-driver-mounts-580301 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | disable-driver-mounts-580301                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:24 UTC |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-472858             | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-472858                                   | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-882095            | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:24 UTC | 01 Apr 24 19:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-882095                                  | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:24 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-734648  | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC | 01 Apr 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC |                     |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-472858                  | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-472858                                   | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC | 01 Apr 24 19:38 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-163608        | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-882095                 | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-882095                                  | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC | 01 Apr 24 19:36 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-734648       | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:36 UTC |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-163608                              | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-163608             | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-163608                              | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 19:27:52
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 19:27:52.967684   71168 out.go:291] Setting OutFile to fd 1 ...
	I0401 19:27:52.967904   71168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:27:52.967912   71168 out.go:304] Setting ErrFile to fd 2...
	I0401 19:27:52.967916   71168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:27:52.968071   71168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 19:27:52.968601   71168 out.go:298] Setting JSON to false
	I0401 19:27:52.969458   71168 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7825,"bootTime":1711991848,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:27:52.969511   71168 start.go:139] virtualization: kvm guest
	I0401 19:27:52.972337   71168 out.go:177] * [old-k8s-version-163608] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 19:27:52.973728   71168 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 19:27:52.973774   71168 notify.go:220] Checking for updates...
	I0401 19:27:52.975050   71168 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:27:52.976498   71168 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:27:52.977880   71168 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:27:52.979140   71168 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 19:27:52.980397   71168 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 19:27:52.982116   71168 config.go:182] Loaded profile config "old-k8s-version-163608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 19:27:52.982478   71168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:27:52.982569   71168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:27:52.996903   71168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44083
	I0401 19:27:52.997230   71168 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:27:52.997702   71168 main.go:141] libmachine: Using API Version  1
	I0401 19:27:52.997724   71168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:27:52.998082   71168 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:27:52.998286   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:27:53.000287   71168 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0401 19:27:53.001714   71168 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 19:27:53.001993   71168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:27:53.002030   71168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:27:53.016155   71168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0401 19:27:53.016524   71168 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:27:53.016981   71168 main.go:141] libmachine: Using API Version  1
	I0401 19:27:53.017003   71168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:27:53.017352   71168 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:27:53.017550   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:27:53.051163   71168 out.go:177] * Using the kvm2 driver based on existing profile
	I0401 19:27:53.052475   71168 start.go:297] selected driver: kvm2
	I0401 19:27:53.052488   71168 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:27:53.052621   71168 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 19:27:53.053266   71168 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:27:53.053349   71168 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 19:27:53.067629   71168 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 19:27:53.067994   71168 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:27:53.068065   71168 cni.go:84] Creating CNI manager for ""
	I0401 19:27:53.068083   71168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:27:53.068130   71168 start.go:340] cluster config:
	{Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:27:53.068640   71168 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:27:53.070506   71168 out.go:177] * Starting "old-k8s-version-163608" primary control-plane node in "old-k8s-version-163608" cluster
	I0401 19:27:53.071686   71168 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 19:27:53.071716   71168 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0401 19:27:53.071726   71168 cache.go:56] Caching tarball of preloaded images
	I0401 19:27:53.071807   71168 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 19:27:53.071818   71168 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0401 19:27:53.071904   71168 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/config.json ...
	I0401 19:27:53.072076   71168 start.go:360] acquireMachinesLock for old-k8s-version-163608: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 19:27:57.821850   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:00.893934   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:06.973950   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:10.045903   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:16.125969   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:19.197902   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:25.277903   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:28.349963   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:34.429888   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:37.501886   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:43.581910   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:46.653871   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:52.733856   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:55.805957   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:01.885878   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:04.957919   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:11.037896   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:14.109854   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:20.189885   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:23.261848   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:29.341931   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:32.414013   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:38.493870   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:41.565912   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:47.645887   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:50.717882   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:56.797886   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:59.869824   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:05.949894   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:09.021905   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:15.101943   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:18.173911   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:24.253875   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:27.325874   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:33.405945   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:36.477889   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:39.482773   70687 start.go:364] duration metric: took 3m52.901392005s to acquireMachinesLock for "embed-certs-882095"
	I0401 19:30:39.482825   70687 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:30:39.482831   70687 fix.go:54] fixHost starting: 
	I0401 19:30:39.483206   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:30:39.483272   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:30:39.498155   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0401 19:30:39.498587   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:30:39.499013   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:30:39.499032   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:30:39.499400   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:30:39.499572   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:30:39.499760   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:30:39.501361   70687 fix.go:112] recreateIfNeeded on embed-certs-882095: state=Stopped err=<nil>
	I0401 19:30:39.501398   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	W0401 19:30:39.501552   70687 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:30:39.504183   70687 out.go:177] * Restarting existing kvm2 VM for "embed-certs-882095" ...
	I0401 19:30:39.505410   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Start
	I0401 19:30:39.505549   70687 main.go:141] libmachine: (embed-certs-882095) Ensuring networks are active...
	I0401 19:30:39.506257   70687 main.go:141] libmachine: (embed-certs-882095) Ensuring network default is active
	I0401 19:30:39.506533   70687 main.go:141] libmachine: (embed-certs-882095) Ensuring network mk-embed-certs-882095 is active
	I0401 19:30:39.506892   70687 main.go:141] libmachine: (embed-certs-882095) Getting domain xml...
	I0401 19:30:39.507632   70687 main.go:141] libmachine: (embed-certs-882095) Creating domain...
	I0401 19:30:40.693316   70687 main.go:141] libmachine: (embed-certs-882095) Waiting to get IP...
	I0401 19:30:40.694095   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:40.694551   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:40.694597   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:40.694519   71595 retry.go:31] will retry after 283.185096ms: waiting for machine to come up
	I0401 19:30:40.979028   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:40.979500   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:40.979523   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:40.979452   71595 retry.go:31] will retry after 297.637907ms: waiting for machine to come up
	I0401 19:30:41.279111   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:41.279457   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:41.279479   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:41.279411   71595 retry.go:31] will retry after 366.625363ms: waiting for machine to come up
	I0401 19:30:39.480214   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:30:39.480252   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:30:39.480557   70284 buildroot.go:166] provisioning hostname "no-preload-472858"
	I0401 19:30:39.480583   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:30:39.480787   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:30:39.482626   70284 machine.go:97] duration metric: took 4m37.415031648s to provisionDockerMachine
	I0401 19:30:39.482666   70284 fix.go:56] duration metric: took 4m37.43830515s for fixHost
	I0401 19:30:39.482676   70284 start.go:83] releasing machines lock for "no-preload-472858", held for 4m37.438344965s
	W0401 19:30:39.482704   70284 start.go:713] error starting host: provision: host is not running
	W0401 19:30:39.482794   70284 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0401 19:30:39.482805   70284 start.go:728] Will try again in 5 seconds ...
	I0401 19:30:41.647682   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:41.648045   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:41.648097   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:41.648026   71595 retry.go:31] will retry after 373.762437ms: waiting for machine to come up
	I0401 19:30:42.023500   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:42.023868   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:42.023904   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:42.023836   71595 retry.go:31] will retry after 461.430639ms: waiting for machine to come up
	I0401 19:30:42.486384   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:42.486836   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:42.486863   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:42.486784   71595 retry.go:31] will retry after 718.511667ms: waiting for machine to come up
	I0401 19:30:43.206555   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:43.206983   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:43.207006   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:43.206939   71595 retry.go:31] will retry after 907.934415ms: waiting for machine to come up
	I0401 19:30:44.115840   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:44.116223   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:44.116259   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:44.116173   71595 retry.go:31] will retry after 1.178492069s: waiting for machine to come up
	I0401 19:30:45.295704   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:45.296117   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:45.296146   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:45.296071   71595 retry.go:31] will retry after 1.188920707s: waiting for machine to come up
	I0401 19:30:44.484802   70284 start.go:360] acquireMachinesLock for no-preload-472858: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 19:30:46.486217   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:46.486777   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:46.486816   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:46.486740   71595 retry.go:31] will retry after 2.12728618s: waiting for machine to come up
	I0401 19:30:48.617124   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:48.617521   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:48.617553   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:48.617468   71595 retry.go:31] will retry after 2.867613028s: waiting for machine to come up
	I0401 19:30:51.488009   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:51.491502   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:51.491533   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:51.488532   71595 retry.go:31] will retry after 3.42206094s: waiting for machine to come up
	I0401 19:30:54.911723   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:54.912098   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:54.912127   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:54.912059   71595 retry.go:31] will retry after 4.263880792s: waiting for machine to come up
	I0401 19:31:00.450770   70962 start.go:364] duration metric: took 3m22.921307899s to acquireMachinesLock for "default-k8s-diff-port-734648"
	I0401 19:31:00.450836   70962 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:31:00.450854   70962 fix.go:54] fixHost starting: 
	I0401 19:31:00.451364   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:31:00.451401   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:31:00.467219   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45255
	I0401 19:31:00.467579   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:31:00.467998   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:31:00.468021   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:31:00.468368   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:31:00.468567   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:00.468740   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:31:00.470224   70962 fix.go:112] recreateIfNeeded on default-k8s-diff-port-734648: state=Stopped err=<nil>
	I0401 19:31:00.470251   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	W0401 19:31:00.470396   70962 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:31:00.472906   70962 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-734648" ...
	I0401 19:30:59.180302   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.180756   70687 main.go:141] libmachine: (embed-certs-882095) Found IP for machine: 192.168.39.190
	I0401 19:30:59.180778   70687 main.go:141] libmachine: (embed-certs-882095) Reserving static IP address...
	I0401 19:30:59.180794   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has current primary IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.181269   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "embed-certs-882095", mac: "52:54:00:8c:f1:a7", ip: "192.168.39.190"} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.181300   70687 main.go:141] libmachine: (embed-certs-882095) DBG | skip adding static IP to network mk-embed-certs-882095 - found existing host DHCP lease matching {name: "embed-certs-882095", mac: "52:54:00:8c:f1:a7", ip: "192.168.39.190"}
	I0401 19:30:59.181311   70687 main.go:141] libmachine: (embed-certs-882095) Reserved static IP address: 192.168.39.190
	I0401 19:30:59.181324   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Getting to WaitForSSH function...
	I0401 19:30:59.181331   70687 main.go:141] libmachine: (embed-certs-882095) Waiting for SSH to be available...
	I0401 19:30:59.183293   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.183599   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.183630   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.183756   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Using SSH client type: external
	I0401 19:30:59.183784   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa (-rw-------)
	I0401 19:30:59.183837   70687 main.go:141] libmachine: (embed-certs-882095) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:30:59.183863   70687 main.go:141] libmachine: (embed-certs-882095) DBG | About to run SSH command:
	I0401 19:30:59.183924   70687 main.go:141] libmachine: (embed-certs-882095) DBG | exit 0
	I0401 19:30:59.305707   70687 main.go:141] libmachine: (embed-certs-882095) DBG | SSH cmd err, output: <nil>: 
	I0401 19:30:59.306036   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetConfigRaw
	I0401 19:30:59.306679   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetIP
	I0401 19:30:59.309266   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.309680   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.309711   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.309938   70687 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/config.json ...
	I0401 19:30:59.310193   70687 machine.go:94] provisionDockerMachine start ...
	I0401 19:30:59.310219   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:30:59.310435   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.312549   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.312908   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.312930   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.313088   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.313247   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.313385   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.313502   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.313721   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:30:59.313894   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:30:59.313904   70687 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:30:59.418216   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:30:59.418244   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetMachineName
	I0401 19:30:59.418506   70687 buildroot.go:166] provisioning hostname "embed-certs-882095"
	I0401 19:30:59.418537   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetMachineName
	I0401 19:30:59.418703   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.421075   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.421411   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.421453   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.421534   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.421721   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.421867   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.421978   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.422122   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:30:59.422317   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:30:59.422332   70687 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-882095 && echo "embed-certs-882095" | sudo tee /etc/hostname
	I0401 19:30:59.541974   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-882095
	
	I0401 19:30:59.542006   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.544628   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.544992   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.545025   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.545193   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.545403   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.545566   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.545720   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.545906   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:30:59.546060   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:30:59.546077   70687 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-882095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-882095/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-882095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:30:59.660103   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:30:59.660134   70687 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:30:59.660161   70687 buildroot.go:174] setting up certificates
	I0401 19:30:59.660172   70687 provision.go:84] configureAuth start
	I0401 19:30:59.660193   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetMachineName
	I0401 19:30:59.660465   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetIP
	I0401 19:30:59.662943   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.663260   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.663302   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.663413   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.665390   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.665688   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.665719   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.665821   70687 provision.go:143] copyHostCerts
	I0401 19:30:59.665879   70687 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:30:59.665892   70687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:30:59.665956   70687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:30:59.666041   70687 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:30:59.666048   70687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:30:59.666071   70687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:30:59.666121   70687 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:30:59.666128   70687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:30:59.666148   70687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:30:59.666193   70687 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.embed-certs-882095 san=[127.0.0.1 192.168.39.190 embed-certs-882095 localhost minikube]
	I0401 19:30:59.761975   70687 provision.go:177] copyRemoteCerts
	I0401 19:30:59.762033   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:30:59.762058   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.764277   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.764601   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.764626   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.764832   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.765006   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.765155   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.765250   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:30:59.848158   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 19:30:59.875879   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:30:59.902573   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 19:30:59.928757   70687 provision.go:87] duration metric: took 268.570153ms to configureAuth
	I0401 19:30:59.928781   70687 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:30:59.928924   70687 config.go:182] Loaded profile config "embed-certs-882095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:30:59.928988   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.931187   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.931571   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.931600   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.931755   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.931914   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.932067   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.932176   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.932325   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:30:59.932506   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:30:59.932530   70687 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:31:00.214527   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:31:00.214552   70687 machine.go:97] duration metric: took 904.342981ms to provisionDockerMachine
	I0401 19:31:00.214563   70687 start.go:293] postStartSetup for "embed-certs-882095" (driver="kvm2")
	I0401 19:31:00.214574   70687 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:31:00.214587   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.214892   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:31:00.214920   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:31:00.217289   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.217580   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.217608   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.217828   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:31:00.218014   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.218137   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:31:00.218267   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:31:00.301379   70687 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:31:00.306211   70687 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:31:00.306231   70687 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:31:00.306284   70687 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:31:00.306377   70687 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:31:00.306459   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:31:00.316524   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:00.342848   70687 start.go:296] duration metric: took 128.272743ms for postStartSetup
	I0401 19:31:00.342887   70687 fix.go:56] duration metric: took 20.860054972s for fixHost
	I0401 19:31:00.342910   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:31:00.345429   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.345883   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.345915   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.346060   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:31:00.346288   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.346504   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.346656   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:31:00.346806   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:00.346961   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:31:00.346972   70687 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 19:31:00.450606   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999860.420567604
	
	I0401 19:31:00.450627   70687 fix.go:216] guest clock: 1711999860.420567604
	I0401 19:31:00.450635   70687 fix.go:229] Guest: 2024-04-01 19:31:00.420567604 +0000 UTC Remote: 2024-04-01 19:31:00.34289204 +0000 UTC m=+253.905703085 (delta=77.675564ms)
	I0401 19:31:00.450683   70687 fix.go:200] guest clock delta is within tolerance: 77.675564ms
	I0401 19:31:00.450693   70687 start.go:83] releasing machines lock for "embed-certs-882095", held for 20.967887876s
	I0401 19:31:00.450725   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.451011   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetIP
	I0401 19:31:00.453581   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.453959   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.453990   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.454112   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.454613   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.454788   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.454844   70687 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:31:00.454886   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:31:00.454997   70687 ssh_runner.go:195] Run: cat /version.json
	I0401 19:31:00.455019   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:31:00.457540   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.457811   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.457846   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.457878   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.458053   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:31:00.458141   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.458173   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.458217   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.458295   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:31:00.458387   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:31:00.458471   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.458556   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:31:00.458602   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:31:00.458741   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:31:00.569039   70687 ssh_runner.go:195] Run: systemctl --version
	I0401 19:31:00.575452   70687 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:31:00.728549   70687 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:31:00.735559   70687 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:31:00.735642   70687 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:31:00.756640   70687 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:31:00.756669   70687 start.go:494] detecting cgroup driver to use...
	I0401 19:31:00.756743   70687 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:31:00.776638   70687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:31:00.793006   70687 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:31:00.793063   70687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:31:00.809240   70687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:31:00.825245   70687 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:31:00.952595   70687 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:31:01.109771   70687 docker.go:233] disabling docker service ...
	I0401 19:31:01.109841   70687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:31:01.126814   70687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:31:01.141976   70687 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:31:01.301634   70687 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:31:01.440350   70687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:31:01.458083   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:31:01.479653   70687 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:31:01.479730   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.492598   70687 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:31:01.492677   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.506469   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.521981   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.534406   70687 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:31:01.546817   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.558857   70687 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.578922   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.593381   70687 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:31:01.605265   70687 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:31:01.605341   70687 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:31:01.621681   70687 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:31:01.633336   70687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:01.770373   70687 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:31:01.927892   70687 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:31:01.927952   70687 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:31:01.935046   70687 start.go:562] Will wait 60s for crictl version
	I0401 19:31:01.935101   70687 ssh_runner.go:195] Run: which crictl
	I0401 19:31:01.940563   70687 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:31:01.986956   70687 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:31:01.987030   70687 ssh_runner.go:195] Run: crio --version
	I0401 19:31:02.018567   70687 ssh_runner.go:195] Run: crio --version
	I0401 19:31:02.059077   70687 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 19:31:00.474118   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Start
	I0401 19:31:00.474275   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Ensuring networks are active...
	I0401 19:31:00.474896   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Ensuring network default is active
	I0401 19:31:00.475289   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Ensuring network mk-default-k8s-diff-port-734648 is active
	I0401 19:31:00.475650   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Getting domain xml...
	I0401 19:31:00.476263   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Creating domain...
	I0401 19:31:01.736646   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting to get IP...
	I0401 19:31:01.737490   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:01.737889   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:01.737939   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:01.737867   71724 retry.go:31] will retry after 198.445345ms: waiting for machine to come up
	I0401 19:31:01.938446   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:01.938981   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:01.939012   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:01.938936   71724 retry.go:31] will retry after 320.128802ms: waiting for machine to come up
	I0401 19:31:02.260257   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:02.260673   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:02.260703   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:02.260633   71724 retry.go:31] will retry after 357.316906ms: waiting for machine to come up
	I0401 19:31:02.060343   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetIP
	I0401 19:31:02.063382   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:02.063775   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:02.063808   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:02.064047   70687 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 19:31:02.069227   70687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:02.085344   70687 kubeadm.go:877] updating cluster {Name:embed-certs-882095 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-882095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:31:02.085451   70687 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 19:31:02.085490   70687 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:02.139383   70687 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0401 19:31:02.139454   70687 ssh_runner.go:195] Run: which lz4
	I0401 19:31:02.144331   70687 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 19:31:02.149534   70687 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:31:02.149561   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0401 19:31:03.954448   70687 crio.go:462] duration metric: took 1.810143668s to copy over tarball
	I0401 19:31:03.954523   70687 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 19:31:06.445735   70687 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.491184732s)
	I0401 19:31:06.445759   70687 crio.go:469] duration metric: took 2.491285648s to extract the tarball
	I0401 19:31:06.445765   70687 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 19:31:02.620250   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:02.620729   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:02.620760   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:02.620666   71724 retry.go:31] will retry after 520.509423ms: waiting for machine to come up
	I0401 19:31:03.142471   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:03.142902   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:03.142930   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:03.142864   71724 retry.go:31] will retry after 714.309176ms: waiting for machine to come up
	I0401 19:31:03.858594   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:03.859071   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:03.859104   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:03.859035   71724 retry.go:31] will retry after 620.601084ms: waiting for machine to come up
	I0401 19:31:04.480923   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:04.481350   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:04.481381   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:04.481313   71724 retry.go:31] will retry after 1.00716549s: waiting for machine to come up
	I0401 19:31:05.489788   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:05.490243   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:05.490273   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:05.490186   71724 retry.go:31] will retry after 1.158564029s: waiting for machine to come up
	I0401 19:31:06.650440   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:06.650969   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:06.650997   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:06.650915   71724 retry.go:31] will retry after 1.172294728s: waiting for machine to come up
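
The default-k8s-diff-port lines above are the kvm2 driver polling libvirt for the VM's DHCP lease, backing off a little longer on each attempt. A sketch of that retry-until-IP shape; lookupIP is a hypothetical stand-in for the driver's lease query:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for querying the hypervisor's DHCP leases by MAC
    // address; it is hypothetical and always fails here.
    func lookupIP(mac string) (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    // waitForIP retries lookupIP with an increasing, jittered delay, matching
    // the "will retry after ..." lines in the log.
    func waitForIP(mac string, attempts int) (string, error) {
    	delay := 500 * time.Millisecond
    	for i := 0; i < attempts; i++ {
    		if ip, err := lookupIP(mac); err == nil {
    			return ip, nil
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		delay *= 2
    	}
    	return "", fmt.Errorf("no IP after %d attempts", attempts)
    }

    func main() {
    	if _, err := waitForIP("52:54:00:49:dc:50", 5); err != nil {
    		fmt.Println(err)
    	}
    }
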
	I0401 19:31:06.485475   70687 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:06.532426   70687 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:31:06.532448   70687 cache_images.go:84] Images are preloaded, skipping loading
	I0401 19:31:06.532455   70687 kubeadm.go:928] updating node { 192.168.39.190 8443 v1.29.3 crio true true} ...
	I0401 19:31:06.532544   70687 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-882095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-882095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:31:06.532611   70687 ssh_runner.go:195] Run: crio config
	I0401 19:31:06.585119   70687 cni.go:84] Creating CNI manager for ""
	I0401 19:31:06.585144   70687 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:06.585158   70687 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:31:06.585185   70687 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.190 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-882095 NodeName:embed-certs-882095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:31:06.585374   70687 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.190
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-882095"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.190
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.190"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
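
The kubeadm config printed above is one multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration joined with ---) that gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A small sketch for inspecting such a file by listing the kind of each document (the path is the one from the log):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		log.Fatal(err)
    	}
    	// kubeadm joins its config objects with "---" document separators.
    	for i, doc := range strings.Split(string(data), "\n---\n") {
    		kind := "unknown"
    		for _, line := range strings.Split(doc, "\n") {
    			trimmed := strings.TrimSpace(line)
    			if strings.HasPrefix(trimmed, "kind:") {
    				kind = strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:"))
    				break
    			}
    		}
    		fmt.Printf("document %d: kind=%s\n", i+1, kind)
    	}
    }
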
	
	I0401 19:31:06.585473   70687 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 19:31:06.596747   70687 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:31:06.596818   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:31:06.606959   70687 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0401 19:31:06.628202   70687 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:31:06.649043   70687 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0401 19:31:06.668400   70687 ssh_runner.go:195] Run: grep 192.168.39.190	control-plane.minikube.internal$ /etc/hosts
	I0401 19:31:06.672469   70687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:06.685666   70687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:06.806186   70687 ssh_runner.go:195] Run: sudo systemctl start kubelet
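
The /etc/hosts step above uses a filter-then-append idiom: strip any existing control-plane.minikube.internal line, append the current one, and copy the result back, so the entry is replaced rather than duplicated. A sketch of the same shell pipeline driven from Go (IP taken from the log; the caller must be able to sudo on the node):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    // pinControlPlane rewrites /etc/hosts so exactly one line maps
    // control-plane.minikube.internal to the given IP.
    func pinControlPlane(ip string) error {
    	script := `{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; ` +
    		`printf '%s\tcontrol-plane.minikube.internal\n' '` + ip + `'; } > /tmp/h.$$; ` +
    		`sudo cp /tmp/h.$$ /etc/hosts`
    	if out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput(); err != nil {
    		return fmt.Errorf("%v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	if err := pinControlPlane("192.168.39.190"); err != nil {
    		log.Fatal(err)
    	}
    }
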
	I0401 19:31:06.823315   70687 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095 for IP: 192.168.39.190
	I0401 19:31:06.823355   70687 certs.go:194] generating shared ca certs ...
	I0401 19:31:06.823376   70687 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:06.823569   70687 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:31:06.823645   70687 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:31:06.823659   70687 certs.go:256] generating profile certs ...
	I0401 19:31:06.823764   70687 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/client.key
	I0401 19:31:06.823872   70687 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/apiserver.key.c07921ce
	I0401 19:31:06.823945   70687 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/proxy-client.key
	I0401 19:31:06.824092   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:31:06.824132   70687 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:31:06.824145   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:31:06.824183   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:31:06.824223   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:31:06.824254   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:31:06.824309   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:06.824942   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:31:06.867274   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:31:06.907288   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:31:06.948328   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:31:06.975058   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 19:31:07.003183   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 19:31:07.032030   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:31:07.061612   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:31:07.090149   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:31:07.116885   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:31:07.143296   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:31:07.169420   70687 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:31:07.188908   70687 ssh_runner.go:195] Run: openssl version
	I0401 19:31:07.195591   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:31:07.211583   70687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:31:07.217049   70687 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:31:07.217110   70687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:31:07.223751   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:31:07.237393   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:31:07.250523   70687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:07.255928   70687 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:07.255981   70687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:07.262373   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:31:07.275174   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:31:07.288039   70687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:31:07.293339   70687 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:31:07.293392   70687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:31:07.299983   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:31:07.313120   70687 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:31:07.318425   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:31:07.325172   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:31:07.331674   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:31:07.338299   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:31:07.344896   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:31:07.351424   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
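
The openssl calls above do two things: compute each CA certificate's subject hash and symlink it under /etc/ssl/certs as <hash>.0 (for example b5213941.0 for minikubeCA.pem) so standard TLS lookups can find it, and run -checkend 86400 against the control-plane certs to confirm none expires within the next day. A sketch of both checks (the cert path is just one of the files shown above):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    // installCACert links certPath under /etc/ssl/certs by its openssl subject
    // hash, the same naming scheme visible in the log.
    func installCACert(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
    }

    // expiresWithinADay mirrors `openssl x509 -checkend 86400`, which exits
    // non-zero when the certificate expires within 24 hours.
    func expiresWithinADay(certPath string) bool {
    	return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() != nil
    }

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"
    	if err := installCACert(cert); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("expires within 24h:", expiresWithinADay(cert))
    }
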
	I0401 19:31:07.357898   70687 kubeadm.go:391] StartCluster: {Name:embed-certs-882095 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-882095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:31:07.357995   70687 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:31:07.358047   70687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:07.401268   70687 cri.go:89] found id: ""
	I0401 19:31:07.401326   70687 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:31:07.414232   70687 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:31:07.414255   70687 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:31:07.414262   70687 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:31:07.414308   70687 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:31:07.425972   70687 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:31:07.426977   70687 kubeconfig.go:125] found "embed-certs-882095" server: "https://192.168.39.190:8443"
	I0401 19:31:07.428767   70687 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:31:07.440164   70687 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.190
	I0401 19:31:07.440191   70687 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:31:07.440201   70687 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:31:07.440244   70687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:07.484303   70687 cri.go:89] found id: ""
	I0401 19:31:07.484407   70687 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:31:07.505186   70687 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:31:07.518316   70687 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:31:07.518342   70687 kubeadm.go:156] found existing configuration files:
	
	I0401 19:31:07.518393   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:31:07.530759   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:31:07.530832   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:31:07.542799   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:31:07.553972   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:31:07.554031   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:31:07.565324   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:31:07.576244   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:31:07.576318   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:31:07.588874   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:31:07.600440   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:31:07.600526   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:31:07.611963   70687 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:31:07.623225   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:07.740800   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:09.050887   70687 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.310046744s)
	I0401 19:31:09.050920   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:09.266170   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:09.336585   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
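
Because existing configuration files were found, the restart path above does not run a full kubeadm init; it replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config so the static pod manifests are regenerated in place. A sketch that runs the same phase sequence (binary and config paths copied from the log):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	kubeadm := "/var/lib/minikube/binaries/v1.29.3/kubeadm"
    	cfg := "/var/tmp/minikube/kubeadm.yaml"
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		// Replay each phase in order, matching the commands in the log.
    		args := append(append([]string{kubeadm}, p...), "--config", cfg)
    		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
    			log.Fatalf("%v failed: %v\n%s", p, err, out)
    		}
    	}
    	log.Println("control-plane static pod manifests regenerated")
    }
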
	I0401 19:31:09.422513   70687 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:31:09.422594   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:09.923709   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:10.422822   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:10.922892   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:10.946590   70687 api_server.go:72] duration metric: took 1.524076694s to wait for apiserver process to appear ...
	I0401 19:31:10.946627   70687 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:31:10.946650   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:07.825239   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:07.825629   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:07.825676   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:07.825586   71724 retry.go:31] will retry after 1.412332675s: waiting for machine to come up
	I0401 19:31:09.240010   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:09.240385   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:09.240416   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:09.240327   71724 retry.go:31] will retry after 2.601344034s: waiting for machine to come up
	I0401 19:31:11.843464   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:11.843948   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:11.843976   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:11.843900   71724 retry.go:31] will retry after 3.297720076s: waiting for machine to come up
	I0401 19:31:13.350274   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:31:13.350309   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:31:13.350325   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:13.383494   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:13.383543   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:13.447744   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:13.452796   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:13.452852   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:13.946971   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:13.951522   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:13.951554   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:14.447104   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:14.455165   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:14.455204   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:14.947278   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:14.951487   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I0401 19:31:14.958647   70687 api_server.go:141] control plane version: v1.29.3
	I0401 19:31:14.958670   70687 api_server.go:131] duration metric: took 4.012036456s to wait for apiserver health ...
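
The wait above polls https://192.168.39.190:8443/healthz until it answers 200 ok, treating the early 403 (anonymous request rejected before RBAC bootstrap) and 500 (post-start hooks such as rbac/bootstrap-roles still running) responses as "not ready yet". A minimal sketch of that probe; it skips certificate verification the way an unauthenticated check would:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"strings"
    	"time"
    )

    // waitForHealthz polls the apiserver healthz endpoint until it returns
    // 200 "ok" or the deadline passes; 403/500 answers just mean "retry".
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if resp, err := client.Get(url); err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
    				return nil
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.190:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
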
	I0401 19:31:14.958687   70687 cni.go:84] Creating CNI manager for ""
	I0401 19:31:14.958693   70687 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:14.960494   70687 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:31:14.961899   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:31:14.973709   70687 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
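
The 457-byte file written to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration for the 10.244.0.0/16 pod CIDR chosen earlier. Its exact contents are not shown in the log; the sketch below writes an illustrative bridge + portmap conflist of the same general shape (all field values here are assumptions, not the file minikube generated):

    package main

    import (
    	"log"
    	"os"
    )

    // An illustrative bridge CNI config; the real 1-k8s.conflist may differ.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
    `

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("wrote bridge CNI config")
    }
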
	I0401 19:31:14.998105   70687 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:31:15.008481   70687 system_pods.go:59] 8 kube-system pods found
	I0401 19:31:15.008525   70687 system_pods.go:61] "coredns-76f75df574-nvcq4" [663bd69b-6da8-4a66-b20f-ea1eb507096a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:31:15.008536   70687 system_pods.go:61] "etcd-embed-certs-882095" [2b56dddc-b309-4965-811e-459c59b86dac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0401 19:31:15.008551   70687 system_pods.go:61] "kube-apiserver-embed-certs-882095" [2e376ce4-504c-441a-baf8-0184a17e5bf4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0401 19:31:15.008561   70687 system_pods.go:61] "kube-controller-manager-embed-certs-882095" [e6bf3b2f-289b-4719-86f7-43e873fe8d85] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0401 19:31:15.008571   70687 system_pods.go:61] "kube-proxy-td6jk" [275536ff-4ec0-4d2c-8658-57aadda367b2] Running
	I0401 19:31:15.008580   70687 system_pods.go:61] "kube-scheduler-embed-certs-882095" [4551eb2a-9560-4d4f-aac0-9cfe6c790649] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0401 19:31:15.008591   70687 system_pods.go:61] "metrics-server-57f55c9bc5-g6z6c" [dc8aee6a-f101-4109-a259-351fddbddd44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:31:15.008599   70687 system_pods.go:61] "storage-provisioner" [82a76833-c874-45d8-8ba7-1a483c15a997] Running
	I0401 19:31:15.008609   70687 system_pods.go:74] duration metric: took 10.480741ms to wait for pod list to return data ...
	I0401 19:31:15.008622   70687 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:31:15.012256   70687 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:31:15.012289   70687 node_conditions.go:123] node cpu capacity is 2
	I0401 19:31:15.012303   70687 node_conditions.go:105] duration metric: took 3.672159ms to run NodePressure ...
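
The NodePressure check above just reads the node's advertised capacity (ephemeral storage and CPU) from the API and verifies it is sane. The same fields can be pulled with kubectl; a small sketch, assuming kubectl is on PATH and using the context name from this profile:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	// Print each node's name and capacity map, the fields the
    	// NodePressure check inspects.
    	out, err := exec.Command("kubectl", "--context", "embed-certs-882095",
    		"get", "nodes", "-o",
    		`jsonpath={range .items[*]}{.metadata.name}{"\t"}{.status.capacity}{"\n"}{end}`).CombinedOutput()
    	if err != nil {
    		log.Fatalf("%v: %s", err, out)
    	}
    	fmt.Print(string(out))
    }
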
	I0401 19:31:15.012327   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:15.288861   70687 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0401 19:31:15.293731   70687 kubeadm.go:733] kubelet initialised
	I0401 19:31:15.293750   70687 kubeadm.go:734] duration metric: took 4.868595ms waiting for restarted kubelet to initialise ...
	I0401 19:31:15.293758   70687 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:31:15.298657   70687 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-nvcq4" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.304795   70687 pod_ready.go:97] node "embed-certs-882095" hosting pod "coredns-76f75df574-nvcq4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.304813   70687 pod_ready.go:81] duration metric: took 6.134849ms for pod "coredns-76f75df574-nvcq4" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:15.304822   70687 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-882095" hosting pod "coredns-76f75df574-nvcq4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.304827   70687 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.309184   70687 pod_ready.go:97] node "embed-certs-882095" hosting pod "etcd-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.309204   70687 pod_ready.go:81] duration metric: took 4.369325ms for pod "etcd-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:15.309213   70687 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-882095" hosting pod "etcd-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.309221   70687 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.313737   70687 pod_ready.go:97] node "embed-certs-882095" hosting pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.313755   70687 pod_ready.go:81] duration metric: took 4.525801ms for pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:15.313764   70687 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-882095" hosting pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.313771   70687 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.401827   70687 pod_ready.go:97] node "embed-certs-882095" hosting pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.401857   70687 pod_ready.go:81] duration metric: took 88.077915ms for pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:15.401871   70687 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-882095" hosting pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.401878   70687 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-td6jk" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.802462   70687 pod_ready.go:92] pod "kube-proxy-td6jk" in "kube-system" namespace has status "Ready":"True"
	I0401 19:31:15.802484   70687 pod_ready.go:81] duration metric: took 400.599194ms for pod "kube-proxy-td6jk" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.802494   70687 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
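
The pod_ready loop above waits for each system-critical pod to report the Ready condition, and skips pods whose node is itself not yet Ready (hence the "skipping!" warnings). A trimmed-down equivalent of one of those waits using kubectl (pod name, context and the 4m timeout are taken from the log):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Block until the pod reports condition Ready=True or the timeout expires.
    	cmd := exec.Command("kubectl", "--context", "embed-certs-882095",
    		"-n", "kube-system", "wait", "--for=condition=Ready",
    		"pod/kube-scheduler-embed-certs-882095", "--timeout=4m")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		log.Fatalf("%v: %s", err, out)
    	}
    	log.Printf("%s", out)
    }
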
	I0401 19:31:15.142653   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:15.143000   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:15.143062   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:15.142972   71724 retry.go:31] will retry after 3.764823961s: waiting for machine to come up
	I0401 19:31:20.350903   71168 start.go:364] duration metric: took 3m27.278785625s to acquireMachinesLock for "old-k8s-version-163608"
	I0401 19:31:20.350993   71168 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:31:20.351010   71168 fix.go:54] fixHost starting: 
	I0401 19:31:20.351490   71168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:31:20.351571   71168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:31:20.368575   71168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38247
	I0401 19:31:20.368936   71168 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:31:20.369448   71168 main.go:141] libmachine: Using API Version  1
	I0401 19:31:20.369469   71168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:31:20.369822   71168 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:31:20.370033   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:20.370195   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetState
	I0401 19:31:20.371625   71168 fix.go:112] recreateIfNeeded on old-k8s-version-163608: state=Stopped err=<nil>
	I0401 19:31:20.371681   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	W0401 19:31:20.371842   71168 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:31:20.374328   71168 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-163608" ...
	I0401 19:31:17.809256   70687 pod_ready.go:102] pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:19.809947   70687 pod_ready.go:102] pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:20.818455   70687 pod_ready.go:92] pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:31:20.818481   70687 pod_ready.go:81] duration metric: took 5.015979611s for pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:20.818493   70687 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:18.910798   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:18.911231   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Found IP for machine: 192.168.61.145
	I0401 19:31:18.911266   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has current primary IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:18.911277   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Reserving static IP address...
	I0401 19:31:18.911761   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-734648", mac: "52:54:00:49:dc:50", ip: "192.168.61.145"} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:18.911795   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | skip adding static IP to network mk-default-k8s-diff-port-734648 - found existing host DHCP lease matching {name: "default-k8s-diff-port-734648", mac: "52:54:00:49:dc:50", ip: "192.168.61.145"}
	I0401 19:31:18.911819   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Reserved static IP address: 192.168.61.145
	I0401 19:31:18.911835   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for SSH to be available...
	I0401 19:31:18.911869   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Getting to WaitForSSH function...
	I0401 19:31:18.913767   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:18.914054   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:18.914082   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:18.914207   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Using SSH client type: external
	I0401 19:31:18.914236   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa (-rw-------)
	I0401 19:31:18.914278   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.145 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:31:18.914300   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | About to run SSH command:
	I0401 19:31:18.914313   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | exit 0
	I0401 19:31:19.037713   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | SSH cmd err, output: <nil>: 
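Editor's note: the WaitForSSH lines above show the driver repeatedly running `exit 0` over an external ssh client (with the options listed in the log) until the command succeeds. A hedged Go sketch of that wait loop follows; the function name, retry cadence, and timeout handling are illustrative rather than minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH keeps running `ssh ... exit 0` until it succeeds or the
// deadline passes, roughly what the WaitForSSH debug lines describe.
func waitForSSH(host, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+host, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s not ready after %s", host, timeout)
}

func main() {
	if err := waitForSSH("192.168.61.145", "/path/to/id_rsa", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}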
	I0401 19:31:19.038080   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetConfigRaw
	I0401 19:31:19.038767   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetIP
	I0401 19:31:19.042390   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.043249   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.043311   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.043949   70962 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/config.json ...
	I0401 19:31:19.044504   70962 machine.go:94] provisionDockerMachine start ...
	I0401 19:31:19.044554   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:19.044916   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.047637   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.047908   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.047941   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.048088   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.048265   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.048408   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.048522   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.048636   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:19.048790   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:19.048800   70962 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:31:19.154415   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:31:19.154444   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetMachineName
	I0401 19:31:19.154683   70962 buildroot.go:166] provisioning hostname "default-k8s-diff-port-734648"
	I0401 19:31:19.154713   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetMachineName
	I0401 19:31:19.154887   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.157442   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.157867   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.157896   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.158041   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.158237   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.158402   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.158540   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.158713   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:19.158905   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:19.158920   70962 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-734648 && echo "default-k8s-diff-port-734648" | sudo tee /etc/hostname
	I0401 19:31:19.276129   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-734648
	
	I0401 19:31:19.276160   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.278657   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.278918   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.278940   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.279158   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.279353   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.279523   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.279671   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.279831   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:19.280057   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:19.280082   70962 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-734648' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-734648/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-734648' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:31:19.395730   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:31:19.395755   70962 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:31:19.395779   70962 buildroot.go:174] setting up certificates
	I0401 19:31:19.395788   70962 provision.go:84] configureAuth start
	I0401 19:31:19.395798   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetMachineName
	I0401 19:31:19.396046   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetIP
	I0401 19:31:19.398668   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.399036   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.399065   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.399219   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.401309   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.401611   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.401656   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.401750   70962 provision.go:143] copyHostCerts
	I0401 19:31:19.401812   70962 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:31:19.401822   70962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:31:19.401876   70962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:31:19.401978   70962 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:31:19.401988   70962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:31:19.402015   70962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:31:19.402121   70962 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:31:19.402129   70962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:31:19.402147   70962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:31:19.402205   70962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-734648 san=[127.0.0.1 192.168.61.145 default-k8s-diff-port-734648 localhost minikube]
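Editor's note: provision.go:117 above generates a server certificate whose SANs cover 127.0.0.1, the VM IP, the profile hostname, localhost, and minikube. The Go sketch below issues a certificate with the same SAN shape using crypto/x509; it is self-signed for brevity (minikube actually signs the server cert with its CA key), and the helper name is an assumption.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a self-signed certificate carrying the SANs
// logged above; illustrative only.
func newServerCert(hostname string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: hostname},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{hostname, "localhost", "minikube"},
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	return der, key, err
}

func main() {
	der, _, err := newServerCert("default-k8s-diff-port-734648",
		[]net.IP{net.ParseIP("192.168.61.145"), net.ParseIP("127.0.0.1")})
	if err != nil {
		panic(err)
	}
	fmt.Println("DER bytes:", len(der))
}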
	I0401 19:31:19.655203   70962 provision.go:177] copyRemoteCerts
	I0401 19:31:19.655256   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:31:19.655281   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.658194   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.658512   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.658540   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.658693   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.658896   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.659039   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.659187   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:31:19.743131   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:31:19.771327   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0401 19:31:19.797350   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 19:31:19.824244   70962 provision.go:87] duration metric: took 428.444366ms to configureAuth
	I0401 19:31:19.824274   70962 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:31:19.824473   70962 config.go:182] Loaded profile config "default-k8s-diff-port-734648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:31:19.824563   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.827376   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.827798   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.827838   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.827984   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.828184   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.828352   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.828496   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.828653   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:19.828827   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:19.828865   70962 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:31:20.107291   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:31:20.107320   70962 machine.go:97] duration metric: took 1.062788118s to provisionDockerMachine
	I0401 19:31:20.107333   70962 start.go:293] postStartSetup for "default-k8s-diff-port-734648" (driver="kvm2")
	I0401 19:31:20.107347   70962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:31:20.107369   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.107671   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:31:20.107693   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:20.110380   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.110739   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.110780   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.110895   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:20.111075   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.111218   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:20.111353   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:31:20.193908   70962 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:31:20.198544   70962 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:31:20.198572   70962 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:31:20.198639   70962 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:31:20.198704   70962 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:31:20.198788   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:31:20.209866   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:20.240362   70962 start.go:296] duration metric: took 133.016405ms for postStartSetup
	I0401 19:31:20.240399   70962 fix.go:56] duration metric: took 19.789546756s for fixHost
	I0401 19:31:20.240418   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:20.243069   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.243448   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.243479   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.243657   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:20.243865   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.244061   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.244209   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:20.244399   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:20.244600   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:20.244616   70962 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 19:31:20.350752   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999880.326440079
	
	I0401 19:31:20.350779   70962 fix.go:216] guest clock: 1711999880.326440079
	I0401 19:31:20.350789   70962 fix.go:229] Guest: 2024-04-01 19:31:20.326440079 +0000 UTC Remote: 2024-04-01 19:31:20.240403038 +0000 UTC m=+222.858311555 (delta=86.037041ms)
	I0401 19:31:20.350808   70962 fix.go:200] guest clock delta is within tolerance: 86.037041ms
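Editor's note: fix.go above reads the guest clock with `date +%s.%N`, compares it with the host clock, and only resyncs when the skew exceeds a tolerance (the 86.037041ms delta here is accepted). A small sketch of that comparison, using the timestamps from the log; the 2-second threshold is an assumption, since the actual tolerance value is not shown in these lines.

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports whether the guest/host clock skew is within the
// given tolerance, in the spirit of the fix.go:200 line above.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	guest := time.Unix(1711999880, 326440079) // from the guest `date` output above
	host := time.Unix(1711999880, 240403038)  // from the Remote timestamp above
	fmt.Println(clockDeltaOK(guest, host, 2*time.Second)) // true, ~86ms skew
}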
	I0401 19:31:20.350812   70962 start.go:83] releasing machines lock for "default-k8s-diff-port-734648", held for 19.899997669s
	I0401 19:31:20.350838   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.351118   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetIP
	I0401 19:31:20.354040   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.354395   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.354413   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.354595   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.355068   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.355238   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.355317   70962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:31:20.355356   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:20.355530   70962 ssh_runner.go:195] Run: cat /version.json
	I0401 19:31:20.355557   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:20.357970   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.358372   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.358405   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.358430   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.358585   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:20.358766   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.358807   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.358834   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.358957   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:20.359013   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:20.359150   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.359203   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:31:20.359292   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:20.359439   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:31:20.466422   70962 ssh_runner.go:195] Run: systemctl --version
	I0401 19:31:20.472949   70962 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:31:20.626069   70962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:31:20.633425   70962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:31:20.633497   70962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:31:20.658883   70962 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:31:20.658910   70962 start.go:494] detecting cgroup driver to use...
	I0401 19:31:20.658979   70962 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:31:20.686302   70962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:31:20.704507   70962 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:31:20.704583   70962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:31:20.725216   70962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:31:20.740635   70962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:31:20.864184   70962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:31:21.010752   70962 docker.go:233] disabling docker service ...
	I0401 19:31:21.010821   70962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:31:21.030718   70962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:31:21.047787   70962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:31:21.194455   70962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:31:21.337547   70962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:31:21.357144   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:31:21.381709   70962 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:31:21.381782   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.393160   70962 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:31:21.393229   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.405047   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.416810   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.428947   70962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:31:21.440886   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.452872   70962 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.473096   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.484427   70962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:31:21.494121   70962 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:31:21.494190   70962 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:31:21.509859   70962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:31:21.520329   70962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:21.671075   70962 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:31:21.818822   70962 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:31:21.818892   70962 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:31:21.825189   70962 start.go:562] Will wait 60s for crictl version
	I0401 19:31:21.825260   70962 ssh_runner.go:195] Run: which crictl
	I0401 19:31:21.830058   70962 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:31:21.869617   70962 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
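Editor's note: after restarting CRI-O, the log above waits up to 60s for /var/run/crio/crio.sock to appear and then confirms the runtime with crictl. A minimal sketch of the socket wait follows; the function name and 500ms poll interval are assumptions.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the CRI socket path exists or the timeout
// elapses, mirroring the "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
}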
	I0401 19:31:21.869721   70962 ssh_runner.go:195] Run: crio --version
	I0401 19:31:21.906091   70962 ssh_runner.go:195] Run: crio --version
	I0401 19:31:21.946240   70962 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 19:31:21.947653   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetIP
	I0401 19:31:21.950691   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:21.951156   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:21.951201   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:21.951445   70962 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0401 19:31:21.959376   70962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:21.974226   70962 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-734648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-734648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.145 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:31:21.974348   70962 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 19:31:21.974426   70962 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:22.011856   70962 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0401 19:31:22.011930   70962 ssh_runner.go:195] Run: which lz4
	I0401 19:31:22.016672   70962 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0401 19:31:22.021864   70962 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:31:22.021893   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0401 19:31:20.375755   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .Start
	I0401 19:31:20.375932   71168 main.go:141] libmachine: (old-k8s-version-163608) Ensuring networks are active...
	I0401 19:31:20.376713   71168 main.go:141] libmachine: (old-k8s-version-163608) Ensuring network default is active
	I0401 19:31:20.377858   71168 main.go:141] libmachine: (old-k8s-version-163608) Ensuring network mk-old-k8s-version-163608 is active
	I0401 19:31:20.378278   71168 main.go:141] libmachine: (old-k8s-version-163608) Getting domain xml...
	I0401 19:31:20.378972   71168 main.go:141] libmachine: (old-k8s-version-163608) Creating domain...
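Editor's note: the old-k8s-version VM restart above activates the libvirt networks and recreates the domain from its XML. The kvm2 driver does this through the libvirt API; the sketch below shows the equivalent two steps with the virsh CLI, purely for illustration, with hypothetical helper and argument names.

package main

import (
	"fmt"
	"os/exec"
)

// startDomain brings a libvirt network and domain back up via virsh,
// mirroring "Ensuring networks are active" / "Creating domain" above.
func startDomain(network, domain string) error {
	// Ignore the error if the network is already active.
	_ = exec.Command("virsh", "net-start", network).Run()
	if out, err := exec.Command("virsh", "start", domain).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start %s: %v: %s", domain, err, out)
	}
	return nil
}

func main() {
	fmt.Println(startDomain("mk-old-k8s-version-163608", "old-k8s-version-163608"))
}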
	I0401 19:31:21.643237   71168 main.go:141] libmachine: (old-k8s-version-163608) Waiting to get IP...
	I0401 19:31:21.644082   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:21.644468   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:21.644535   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:21.644446   71902 retry.go:31] will retry after 208.251344ms: waiting for machine to come up
	I0401 19:31:21.854070   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:21.854545   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:21.854593   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:21.854527   71902 retry.go:31] will retry after 240.466964ms: waiting for machine to come up
	I0401 19:31:22.096940   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:22.097447   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:22.097470   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:22.097405   71902 retry.go:31] will retry after 480.217755ms: waiting for machine to come up
	I0401 19:31:22.579111   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:22.579596   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:22.579628   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:22.579518   71902 retry.go:31] will retry after 581.713487ms: waiting for machine to come up
	I0401 19:31:22.826723   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:25.326165   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:23.813558   70962 crio.go:462] duration metric: took 1.796902191s to copy over tarball
	I0401 19:31:23.813619   70962 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 19:31:26.447802   70962 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.634145928s)
	I0401 19:31:26.447840   70962 crio.go:469] duration metric: took 2.634257029s to extract the tarball
	I0401 19:31:26.447849   70962 ssh_runner.go:146] rm: /preloaded.tar.lz4
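Editor's note: the preload step above copies preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 to the guest and unpacks it into /var with `tar -I lz4`, then deletes the tarball. A hedged Go wrapper around the same external command is below; the function name is illustrative, and sudo is required because /var is root-owned.

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed tarball into destDir using
// the same tar invocation shown in the log above.
func extractPreload(tarball, destDir string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}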
	I0401 19:31:26.488228   70962 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:26.535741   70962 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:31:26.535770   70962 cache_images.go:84] Images are preloaded, skipping loading
	I0401 19:31:26.535780   70962 kubeadm.go:928] updating node { 192.168.61.145 8444 v1.29.3 crio true true} ...
	I0401 19:31:26.535931   70962 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-734648 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-734648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:31:26.536019   70962 ssh_runner.go:195] Run: crio config
	I0401 19:31:26.590211   70962 cni.go:84] Creating CNI manager for ""
	I0401 19:31:26.590239   70962 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:26.590254   70962 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:31:26.590282   70962 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.145 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-734648 NodeName:default-k8s-diff-port-734648 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:31:26.590459   70962 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.145
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-734648"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:31:26.590533   70962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 19:31:26.602186   70962 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:31:26.602264   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:31:26.616193   70962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0401 19:31:26.636634   70962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:31:26.660339   70962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
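Editor's note: the kubeadm config printed above is rendered from the options at kubeadm.go:181 and then written to /var/tmp/minikube/kubeadm.yaml.new as shown in the preceding scp line. A cut-down sketch of rendering such a fragment with text/template follows; the template text and struct are illustrative, not minikube's real template.

package main

import (
	"os"
	"text/template"
)

type clusterParams struct {
	AdvertiseAddress  string
	BindPort          int
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const fragment = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(fragment))
	_ = t.Execute(os.Stdout, clusterParams{
		AdvertiseAddress:  "192.168.61.145",
		BindPort:          8444,
		KubernetesVersion: "v1.29.3",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	})
}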
	I0401 19:31:26.687935   70962 ssh_runner.go:195] Run: grep 192.168.61.145	control-plane.minikube.internal$ /etc/hosts
	I0401 19:31:26.693966   70962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.145	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:26.709876   70962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:26.854990   70962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:31:26.877303   70962 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648 for IP: 192.168.61.145
	I0401 19:31:26.877327   70962 certs.go:194] generating shared ca certs ...
	I0401 19:31:26.877350   70962 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:26.877578   70962 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:31:26.877621   70962 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:31:26.877637   70962 certs.go:256] generating profile certs ...
	I0401 19:31:26.877777   70962 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/client.key
	I0401 19:31:26.877864   70962 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/apiserver.key.e4671486
	I0401 19:31:26.877909   70962 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/proxy-client.key
	I0401 19:31:26.878007   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:31:26.878049   70962 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:31:26.878062   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:31:26.878094   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:31:26.878128   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:31:26.878153   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:31:26.878203   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:26.879101   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:31:26.917600   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:31:26.968606   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:31:27.012527   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:31:27.078525   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 19:31:27.125195   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:31:27.157190   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:31:27.185434   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:31:27.215215   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:31:27.246938   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:31:27.277210   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:31:27.307099   70962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:31:27.326664   70962 ssh_runner.go:195] Run: openssl version
	I0401 19:31:27.333292   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:31:27.344724   70962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:27.350096   70962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:27.350146   70962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:27.356421   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:31:27.368124   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:31:27.379331   70962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:31:27.384465   70962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:31:27.384518   70962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:31:27.391192   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:31:27.403898   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:31:27.418676   70962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:31:27.424254   70962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:31:27.424308   70962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
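Editor's note: the openssl/ln commands above install each CA certificate into /etc/ssl/certs under its OpenSSL subject-hash name (hash + ".0"). A hedged Go equivalent is sketched below; it shells out to openssl for the hash, and unlike the `ln -fs` in the log it does not overwrite an existing link. Helper name is illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkTrustedCert symlinks a CA certificate into certsDir under its
// OpenSSL subject-hash name, the same steps performed above.
func linkTrustedCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	return os.Symlink(certPath, filepath.Join(certsDir, hash+".0"))
}

func main() {
	fmt.Println(linkTrustedCert("/usr/share/ca-certificates/177512.pem", "/etc/ssl/certs"))
}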
	I0401 19:31:23.163331   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:23.163803   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:23.163838   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:23.163770   71902 retry.go:31] will retry after 737.12898ms: waiting for machine to come up
	I0401 19:31:23.902739   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:23.903192   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:23.903222   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:23.903139   71902 retry.go:31] will retry after 718.826495ms: waiting for machine to come up
	I0401 19:31:24.624169   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:24.624620   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:24.624648   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:24.624574   71902 retry.go:31] will retry after 1.020701715s: waiting for machine to come up
	I0401 19:31:25.647470   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:25.647957   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:25.647988   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:25.647921   71902 retry.go:31] will retry after 1.318891306s: waiting for machine to come up
	I0401 19:31:26.968134   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:26.968588   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:26.968613   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:26.968535   71902 retry.go:31] will retry after 1.465864517s: waiting for machine to come up
	I0401 19:31:27.752110   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:29.827324   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:27.431798   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:31:27.749367   70962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:31:27.757123   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:31:27.768626   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:31:27.778119   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:31:27.786893   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:31:27.797129   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:31:27.804804   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
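	The `-checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours before the restart proceeds. The same check can be done in pure Go with crypto/x509; this is only an illustrative sketch, not the code minikube runs:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d (the pure-Go analogue of `openssl x509 -checkend`).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range os.Args[1:] {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}
```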
	I0401 19:31:27.813194   70962 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-734648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-734648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.145 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:31:27.813274   70962 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:31:27.813325   70962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:27.864565   70962 cri.go:89] found id: ""
	I0401 19:31:27.864637   70962 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:31:27.876745   70962 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:31:27.876789   70962 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:31:27.876797   70962 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:31:27.876862   70962 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:31:27.887494   70962 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:31:27.888632   70962 kubeconfig.go:125] found "default-k8s-diff-port-734648" server: "https://192.168.61.145:8444"
	I0401 19:31:27.890729   70962 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:31:27.900847   70962 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.145
	I0401 19:31:27.900877   70962 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:31:27.900889   70962 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:31:27.900936   70962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:27.952874   70962 cri.go:89] found id: ""
	I0401 19:31:27.952954   70962 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:31:27.971647   70962 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:31:27.982541   70962 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:31:27.982576   70962 kubeadm.go:156] found existing configuration files:
	
	I0401 19:31:27.982612   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0401 19:31:27.992341   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:31:27.992414   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:31:28.002685   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0401 19:31:28.012599   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:31:28.012658   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:31:28.022731   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0401 19:31:28.033584   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:31:28.033661   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:31:28.044940   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0401 19:31:28.055832   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:31:28.055886   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
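	In the sequence above, kubeadm.go:162 treats any existing kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint (https://control-plane.minikube.internal:8444) as stale and removes it, so the following kubeadm phase can regenerate it. A hedged sketch of that cleanup loop, as a hypothetical local helper rather than commands run over SSH:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfigs deletes any of the standard kubeconfig files that
// does not mention the expected control-plane endpoint, mirroring the
// grep-then-rm sequence in the log. Hypothetical helper, not minikube code.
func pruneStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			_ = os.Remove(f) // equivalent of `sudo rm -f`
		}
	}
}

func main() {
	pruneStaleKubeconfigs("https://control-plane.minikube.internal:8444")
}
```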
	I0401 19:31:28.066919   70962 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:31:28.078715   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:28.212251   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:29.214190   70962 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.001904972s)
	I0401 19:31:29.214224   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:29.444484   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:29.536112   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
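	For a restart, minikube re-runs individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against /var/tmp/minikube/kubeadm.yaml instead of a full `kubeadm init`. The sketch below reproduces that sequence with the version-pinned PATH seen in the log; it is an illustration of the phase calls, not the actual bootstrapper code:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, args := range phases {
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		// Prefer the version-pinned binaries, as in the logged PATH override.
		cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.29.3:"+os.Getenv("PATH"))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", args, err)
			os.Exit(1)
		}
	}
}
```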
	I0401 19:31:29.664087   70962 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:31:29.664201   70962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:30.165117   70962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:30.664872   70962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:30.707251   70962 api_server.go:72] duration metric: took 1.04316448s to wait for apiserver process to appear ...
	I0401 19:31:30.707280   70962 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:31:30.707297   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:30.707881   70962 api_server.go:269] stopped: https://192.168.61.145:8444/healthz: Get "https://192.168.61.145:8444/healthz": dial tcp 192.168.61.145:8444: connect: connection refused
	I0401 19:31:31.207434   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:28.435890   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:28.436304   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:28.436334   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:28.436255   71902 retry.go:31] will retry after 2.062597688s: waiting for machine to come up
	I0401 19:31:30.500523   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:30.500999   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:30.501027   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:30.500954   71902 retry.go:31] will retry after 2.068480339s: waiting for machine to come up
	I0401 19:31:32.571229   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:32.571603   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:32.571635   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:32.571550   71902 retry.go:31] will retry after 3.355965883s: waiting for machine to come up
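	While the old-k8s-version-163608 VM boots, libmachine keeps probing for its DHCP lease and backs off between attempts (737ms, 1.02s, 2.06s, 3.36s, ...). The following is a generic retry-with-backoff sketch in the same spirit; it is not minikube's retry.go, just the pattern:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling fn with a jittered, growing delay until it
// succeeds or maxWait elapses, similar in spirit to the retry waits
// logged while waiting for the machine to come up.
func retryUntil(maxWait time.Duration, fn func() error) error {
	deadline := time.Now().Add(maxWait)
	delay := 500 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up: %w", err)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}

func main() {
	attempts := 0
	_ = retryUntil(30*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("machine is up after", attempts, "attempts")
}
```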
	I0401 19:31:33.707613   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:31:33.707647   70962 api_server.go:103] status: https://192.168.61.145:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:31:33.707663   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:33.728509   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:31:33.728582   70962 api_server.go:103] status: https://192.168.61.145:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:31:34.208163   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:34.212754   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:34.212784   70962 api_server.go:103] status: https://192.168.61.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:34.708282   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:34.715268   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:34.715294   70962 api_server.go:103] status: https://192.168.61.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:35.207460   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:35.212542   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 200:
	ok
	I0401 19:31:35.219264   70962 api_server.go:141] control plane version: v1.29.3
	I0401 19:31:35.219287   70962 api_server.go:131] duration metric: took 4.512000334s to wait for apiserver health ...
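	The healthz wait above went through the typical apiserver startup sequence: connection refused, then 403 for the anonymous user, then 500 while the rbac/bootstrap-roles post-start hook finishes, and finally 200/ok. A minimal polling loop in the same spirit, with TLS verification skipped purely for illustration:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url every 500ms until it returns 200 or the timeout
// elapses. Connection errors, 403s and 500s are treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.145:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```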
	I0401 19:31:35.219294   70962 cni.go:84] Creating CNI manager for ""
	I0401 19:31:35.219309   70962 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:35.221080   70962 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:31:31.828694   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:34.325740   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:35.222800   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:31:35.238787   70962 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0401 19:31:35.286002   70962 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:31:35.302379   70962 system_pods.go:59] 8 kube-system pods found
	I0401 19:31:35.302420   70962 system_pods.go:61] "coredns-76f75df574-tdwrh" [c1d3b591-fa81-46dd-847c-ffdfc22937fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:31:35.302437   70962 system_pods.go:61] "etcd-default-k8s-diff-port-734648" [e977793d-ec92-40b8-a0fe-1b2400fb1af6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0401 19:31:35.302447   70962 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-734648" [2d0eae31-35c3-40aa-9d28-a2f51849c15d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0401 19:31:35.302469   70962 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-734648" [cded1171-2e1b-4d70-9f26-d1d3a6558da1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0401 19:31:35.302483   70962 system_pods.go:61] "kube-proxy-mn546" [f9b6366f-7095-418c-ba24-529c0555f438] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:31:35.302493   70962 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-734648" [c1518ece-8cbf-49fe-9091-15b38dc1bd62] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0401 19:31:35.302504   70962 system_pods.go:61] "metrics-server-57f55c9bc5-g7mg2" [d1ede79a-a7e6-42bd-a799-197ffc7c7939] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:31:35.302519   70962 system_pods.go:61] "storage-provisioner" [bd55f9c8-580c-4eb1-adbc-020d5bbedce9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:31:35.302532   70962 system_pods.go:74] duration metric: took 16.508651ms to wait for pod list to return data ...
	I0401 19:31:35.302545   70962 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:31:35.305826   70962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:31:35.305862   70962 node_conditions.go:123] node cpu capacity is 2
	I0401 19:31:35.305876   70962 node_conditions.go:105] duration metric: took 3.322577ms to run NodePressure ...
	I0401 19:31:35.305895   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:35.603225   70962 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0401 19:31:35.608584   70962 kubeadm.go:733] kubelet initialised
	I0401 19:31:35.608611   70962 kubeadm.go:734] duration metric: took 5.361549ms waiting for restarted kubelet to initialise ...
	I0401 19:31:35.608620   70962 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:31:35.615252   70962 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-tdwrh" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.620605   70962 pod_ready.go:97] node "default-k8s-diff-port-734648" hosting pod "coredns-76f75df574-tdwrh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.620627   70962 pod_ready.go:81] duration metric: took 5.353257ms for pod "coredns-76f75df574-tdwrh" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:35.620634   70962 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-734648" hosting pod "coredns-76f75df574-tdwrh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.620641   70962 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.625280   70962 pod_ready.go:97] node "default-k8s-diff-port-734648" hosting pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.625297   70962 pod_ready.go:81] duration metric: took 4.646748ms for pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:35.625311   70962 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-734648" hosting pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.625325   70962 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.630150   70962 pod_ready.go:97] node "default-k8s-diff-port-734648" hosting pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.630170   70962 pod_ready.go:81] duration metric: took 4.83409ms for pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:35.630178   70962 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-734648" hosting pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.630184   70962 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.693865   70962 pod_ready.go:97] node "default-k8s-diff-port-734648" hosting pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.693890   70962 pod_ready.go:81] duration metric: took 63.697397ms for pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:35.693901   70962 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-734648" hosting pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.693908   70962 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mn546" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:36.090904   70962 pod_ready.go:92] pod "kube-proxy-mn546" in "kube-system" namespace has status "Ready":"True"
	I0401 19:31:36.090928   70962 pod_ready.go:81] duration metric: took 397.013717ms for pod "kube-proxy-mn546" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:36.090938   70962 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
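	pod_ready.go above waits for each system-critical pod to report the Ready condition and skips pods while the node itself is still NotReady. A client-go sketch of the underlying per-pod check; the kubeconfig path and pod name are taken from this log and are otherwise assumptions:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named kube-system pod has the Ready condition set to True.
func podReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		ok, err := podReady(ctx, cs, "kube-scheduler-default-k8s-diff-port-734648")
		if err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
```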
	I0401 19:31:35.929498   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:35.930010   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:35.930042   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:35.929963   71902 retry.go:31] will retry after 3.806123644s: waiting for machine to come up
	I0401 19:31:41.203538   70284 start.go:364] duration metric: took 56.718693538s to acquireMachinesLock for "no-preload-472858"
	I0401 19:31:41.203592   70284 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:31:41.203607   70284 fix.go:54] fixHost starting: 
	I0401 19:31:41.204096   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:31:41.204143   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:31:41.221574   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42471
	I0401 19:31:41.222045   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:31:41.222527   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:31:41.222547   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:31:41.222856   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:31:41.223051   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:31:41.223209   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:31:41.224801   70284 fix.go:112] recreateIfNeeded on no-preload-472858: state=Stopped err=<nil>
	I0401 19:31:41.224827   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	W0401 19:31:41.224979   70284 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:31:41.226937   70284 out.go:177] * Restarting existing kvm2 VM for "no-preload-472858" ...
	I0401 19:31:36.824790   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:38.824976   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:40.827269   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:41.228315   70284 main.go:141] libmachine: (no-preload-472858) Calling .Start
	I0401 19:31:41.228509   70284 main.go:141] libmachine: (no-preload-472858) Ensuring networks are active...
	I0401 19:31:41.229206   70284 main.go:141] libmachine: (no-preload-472858) Ensuring network default is active
	I0401 19:31:41.229603   70284 main.go:141] libmachine: (no-preload-472858) Ensuring network mk-no-preload-472858 is active
	I0401 19:31:41.229999   70284 main.go:141] libmachine: (no-preload-472858) Getting domain xml...
	I0401 19:31:41.230682   70284 main.go:141] libmachine: (no-preload-472858) Creating domain...
	I0401 19:31:38.097417   70962 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:40.098187   70962 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:42.099891   70962 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:39.739700   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.740313   71168 main.go:141] libmachine: (old-k8s-version-163608) Found IP for machine: 192.168.50.106
	I0401 19:31:39.740369   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has current primary IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.740386   71168 main.go:141] libmachine: (old-k8s-version-163608) Reserving static IP address...
	I0401 19:31:39.740767   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "old-k8s-version-163608", mac: "52:54:00:fe:1b:e7", ip: "192.168.50.106"} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.740798   71168 main.go:141] libmachine: (old-k8s-version-163608) Reserved static IP address: 192.168.50.106
	I0401 19:31:39.740818   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | skip adding static IP to network mk-old-k8s-version-163608 - found existing host DHCP lease matching {name: "old-k8s-version-163608", mac: "52:54:00:fe:1b:e7", ip: "192.168.50.106"}
	I0401 19:31:39.740839   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | Getting to WaitForSSH function...
	I0401 19:31:39.740857   71168 main.go:141] libmachine: (old-k8s-version-163608) Waiting for SSH to be available...
	I0401 19:31:39.743023   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.743417   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.743447   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.743589   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | Using SSH client type: external
	I0401 19:31:39.743614   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa (-rw-------)
	I0401 19:31:39.743648   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:31:39.743662   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | About to run SSH command:
	I0401 19:31:39.743676   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | exit 0
	I0401 19:31:39.877699   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | SSH cmd err, output: <nil>: 
	I0401 19:31:39.878044   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetConfigRaw
	I0401 19:31:39.878611   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:39.880733   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.881074   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.881107   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.881352   71168 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/config.json ...
	I0401 19:31:39.881510   71168 machine.go:94] provisionDockerMachine start ...
	I0401 19:31:39.881529   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:39.881766   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:39.883980   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.884318   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.884360   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.884483   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:39.884675   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.884877   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.885029   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:39.885175   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:39.885339   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:39.885349   71168 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:31:39.994935   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:31:39.994971   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:31:39.995213   71168 buildroot.go:166] provisioning hostname "old-k8s-version-163608"
	I0401 19:31:39.995241   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:31:39.995472   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:39.998179   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.998490   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.998525   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.998656   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:39.998805   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.998949   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.999054   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:39.999183   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:39.999372   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:39.999390   71168 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-163608 && echo "old-k8s-version-163608" | sudo tee /etc/hostname
	I0401 19:31:40.128852   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-163608
	
	I0401 19:31:40.128880   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.131508   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.131817   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.131874   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.131987   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.132188   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.132365   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.132503   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.132693   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:40.132890   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:40.132908   71168 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-163608' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-163608/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-163608' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:31:40.252693   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:31:40.252727   71168 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:31:40.252749   71168 buildroot.go:174] setting up certificates
	I0401 19:31:40.252759   71168 provision.go:84] configureAuth start
	I0401 19:31:40.252767   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:31:40.253030   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:40.255827   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.256183   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.256210   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.256418   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.259041   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.259388   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.259418   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.259540   71168 provision.go:143] copyHostCerts
	I0401 19:31:40.259592   71168 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:31:40.259602   71168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:31:40.259654   71168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:31:40.259745   71168 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:31:40.259754   71168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:31:40.259773   71168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:31:40.259822   71168 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:31:40.259830   71168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:31:40.259846   71168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:31:40.259891   71168 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-163608 san=[127.0.0.1 192.168.50.106 localhost minikube old-k8s-version-163608]
	I0401 19:31:40.465177   71168 provision.go:177] copyRemoteCerts
	I0401 19:31:40.465241   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:31:40.465265   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.467676   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.468040   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.468070   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.468272   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.468456   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.468622   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.468767   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:40.557764   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:31:40.585326   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0401 19:31:40.611671   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 19:31:40.639265   71168 provision.go:87] duration metric: took 386.497023ms to configureAuth
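	configureAuth above generated a server certificate signed by the minikube CA with the SANs listed at provision.go:117 and copied ca.pem, server.pem and server-key.pem into /etc/docker. Below is a small sketch that inspects which SANs ended up in such a server.pem; this is a hypothetical verification step, not part of minikube:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/etc/docker/server.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block in server.pem")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The provision step requested SANs such as 127.0.0.1, 192.168.50.106,
	// localhost, minikube and the machine name; print what the cert carries.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP  SANs:", cert.IPAddresses)
	fmt.Println("NotAfter:", cert.NotAfter)
}
```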
	I0401 19:31:40.639296   71168 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:31:40.639521   71168 config.go:182] Loaded profile config "old-k8s-version-163608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 19:31:40.639590   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.642321   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.642733   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.642762   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.642921   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.643122   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.643294   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.643442   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.643647   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:40.643802   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:40.643819   71168 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:31:40.940619   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:31:40.940647   71168 machine.go:97] duration metric: took 1.059122816s to provisionDockerMachine
	I0401 19:31:40.940661   71168 start.go:293] postStartSetup for "old-k8s-version-163608" (driver="kvm2")
	I0401 19:31:40.940672   71168 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:31:40.940687   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:40.940955   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:31:40.940981   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.943787   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.944159   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.944197   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.944347   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.944556   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.944700   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.944834   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:41.035824   71168 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:31:41.040975   71168 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:31:41.041007   71168 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:31:41.041085   71168 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:31:41.041165   71168 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:31:41.041255   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:31:41.052356   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:41.080699   71168 start.go:296] duration metric: took 140.024653ms for postStartSetup
	I0401 19:31:41.080737   71168 fix.go:56] duration metric: took 20.729726297s for fixHost
	I0401 19:31:41.080759   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:41.083664   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.084045   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.084075   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.084202   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:41.084405   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.084599   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.084796   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:41.084971   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:41.085169   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:41.085180   71168 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 19:31:41.203392   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999901.182365994
	
	I0401 19:31:41.203412   71168 fix.go:216] guest clock: 1711999901.182365994
	I0401 19:31:41.203419   71168 fix.go:229] Guest: 2024-04-01 19:31:41.182365994 +0000 UTC Remote: 2024-04-01 19:31:41.080741553 +0000 UTC m=+228.159955492 (delta=101.624441ms)
	I0401 19:31:41.203437   71168 fix.go:200] guest clock delta is within tolerance: 101.624441ms
	I0401 19:31:41.203442   71168 start.go:83] releasing machines lock for "old-k8s-version-163608", held for 20.852486097s
	I0401 19:31:41.203462   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.203744   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:41.206582   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.206952   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.206973   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.207151   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.207701   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.207891   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.207954   71168 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:31:41.207996   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:41.208096   71168 ssh_runner.go:195] Run: cat /version.json
	I0401 19:31:41.208127   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:41.210731   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.210928   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.211107   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.211132   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.211317   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:41.211446   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.211488   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.211491   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.211636   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:41.211692   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:41.211783   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:41.211891   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.212031   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:41.212187   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:41.296330   71168 ssh_runner.go:195] Run: systemctl --version
	I0401 19:31:41.326247   71168 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:31:41.479411   71168 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:31:41.486996   71168 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:31:41.487063   71168 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:31:41.507840   71168 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:31:41.507870   71168 start.go:494] detecting cgroup driver to use...
	I0401 19:31:41.507942   71168 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:31:41.533063   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:31:41.551699   71168 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:31:41.551754   71168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:31:41.568078   71168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:31:41.584278   71168 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:31:41.726884   71168 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:31:41.882514   71168 docker.go:233] disabling docker service ...
	I0401 19:31:41.882587   71168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:31:41.901235   71168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:31:41.919787   71168 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:31:42.082420   71168 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:31:42.248527   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:31:42.266610   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:31:42.295677   71168 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0401 19:31:42.295740   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.313855   71168 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:31:42.313920   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.327176   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.339527   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.351220   71168 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
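The sed invocations above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, and re-add a conmon_cgroup setting. Assuming the patterns matched, the relevant lines of /etc/crio/crio.conf.d/02-crio.conf would afterwards look like the sketch below (the enclosing TOML tables are not shown in the log and are an assumption):

    pause_image = "registry.k8s.io/pause:3.2"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"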
	I0401 19:31:42.363716   71168 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:31:42.379911   71168 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:31:42.379971   71168 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:31:42.395282   71168 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
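The sysctl probe above exits with status 255 because br_netfilter is not yet loaded, so the runner falls back to loading the module and then enables IPv4 forwarding. The same sequence, taken directly from the commands in the log:

    sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"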
	I0401 19:31:42.407713   71168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:42.579648   71168 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:31:42.764748   71168 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:31:42.764858   71168 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:31:42.771038   71168 start.go:562] Will wait 60s for crictl version
	I0401 19:31:42.771125   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:42.775871   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:31:42.823135   71168 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:31:42.823218   71168 ssh_runner.go:195] Run: crio --version
	I0401 19:31:42.863748   71168 ssh_runner.go:195] Run: crio --version
	I0401 19:31:42.900263   71168 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0401 19:31:42.901631   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:42.904464   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:42.904773   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:42.904812   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:42.905048   71168 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0401 19:31:42.910117   71168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:42.925313   71168 kubeadm.go:877] updating cluster {Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:31:42.925475   71168 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 19:31:42.925542   71168 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:42.828772   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:44.829527   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:42.553437   70284 main.go:141] libmachine: (no-preload-472858) Waiting to get IP...
	I0401 19:31:42.554422   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:42.554810   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:42.554907   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:42.554806   72041 retry.go:31] will retry after 237.823736ms: waiting for machine to come up
	I0401 19:31:42.794546   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:42.795159   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:42.795205   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:42.795117   72041 retry.go:31] will retry after 326.387674ms: waiting for machine to come up
	I0401 19:31:43.123632   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:43.124306   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:43.124342   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:43.124244   72041 retry.go:31] will retry after 455.262949ms: waiting for machine to come up
	I0401 19:31:43.580752   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:43.581420   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:43.581440   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:43.581375   72041 retry.go:31] will retry after 520.307316ms: waiting for machine to come up
	I0401 19:31:44.103924   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:44.104407   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:44.104431   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:44.104361   72041 retry.go:31] will retry after 491.638031ms: waiting for machine to come up
	I0401 19:31:44.598440   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:44.598990   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:44.599015   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:44.598901   72041 retry.go:31] will retry after 652.234963ms: waiting for machine to come up
	I0401 19:31:45.252362   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:45.252901   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:45.252933   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:45.252853   72041 retry.go:31] will retry after 1.047335678s: waiting for machine to come up
	I0401 19:31:46.301894   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:46.302324   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:46.302349   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:46.302281   72041 retry.go:31] will retry after 1.303326069s: waiting for machine to come up
	I0401 19:31:44.101042   70962 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:46.099803   70962 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:31:46.099828   70962 pod_ready.go:81] duration metric: took 10.008882274s for pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:46.099843   70962 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:42.974220   71168 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 19:31:42.974307   71168 ssh_runner.go:195] Run: which lz4
	I0401 19:31:42.979179   71168 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0401 19:31:42.984204   71168 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:31:42.984236   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0401 19:31:45.108131   71168 crio.go:462] duration metric: took 2.128988098s to copy over tarball
	I0401 19:31:45.108232   71168 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
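Because no preloaded images were found in the runtime (crio.go:510 above), the runner copies the roughly 473 MB preload tarball into the guest and unpacks it under /var so CRI-O's image store is populated before kubeadm runs. A sketch of the equivalent manual extraction, using the same flags as the Run line above:

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4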
	I0401 19:31:47.328534   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:49.827306   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:47.606907   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:47.607392   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:47.607419   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:47.607356   72041 retry.go:31] will retry after 1.729010443s: waiting for machine to come up
	I0401 19:31:49.338200   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:49.338722   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:49.338751   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:49.338667   72041 retry.go:31] will retry after 2.069036941s: waiting for machine to come up
	I0401 19:31:51.409458   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:51.409945   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:51.409976   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:51.409894   72041 retry.go:31] will retry after 2.405834741s: waiting for machine to come up
	I0401 19:31:48.108234   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:50.607720   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:48.581824   71168 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.473552916s)
	I0401 19:31:48.581871   71168 crio.go:469] duration metric: took 3.473700991s to extract the tarball
	I0401 19:31:48.581881   71168 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 19:31:48.630609   71168 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:48.673027   71168 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 19:31:48.673048   71168 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 19:31:48.673085   71168 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:31:48.673129   71168 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:48.673155   71168 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:48.673190   71168 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:48.673133   71168 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.673273   71168 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0401 19:31:48.673143   71168 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0401 19:31:48.673336   71168 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:48.675068   71168 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:31:48.675073   71168 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:48.675068   71168 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:48.675093   71168 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0401 19:31:48.675072   71168 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0401 19:31:48.675073   71168 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:48.675115   71168 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:48.675096   71168 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.827947   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.846025   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:48.848769   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:48.858366   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0401 19:31:48.858613   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0401 19:31:48.859241   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:48.862047   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:48.912299   71168 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0401 19:31:48.912346   71168 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.912399   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.030117   71168 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0401 19:31:49.030357   71168 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:49.030122   71168 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0401 19:31:49.030433   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.030460   71168 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:49.030526   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.062211   71168 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0401 19:31:49.062327   71168 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0401 19:31:49.062234   71168 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0401 19:31:49.062415   71168 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0401 19:31:49.062396   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.062461   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.078249   71168 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0401 19:31:49.078308   71168 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:49.078323   71168 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0401 19:31:49.078358   71168 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:49.078379   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:49.078398   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.078426   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:49.078440   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:49.078362   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.078466   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 19:31:49.078494   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 19:31:49.225060   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:49.225137   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0401 19:31:49.225160   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0401 19:31:49.225199   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0401 19:31:49.225250   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0401 19:31:49.225252   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:49.225326   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0401 19:31:49.280782   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0401 19:31:49.281709   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0401 19:31:49.299218   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:31:49.465497   71168 cache_images.go:92] duration metric: took 792.432136ms to LoadCachedImages
	W0401 19:31:49.465595   71168 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0401 19:31:49.465613   71168 kubeadm.go:928] updating node { 192.168.50.106 8443 v1.20.0 crio true true} ...
	I0401 19:31:49.465768   71168 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-163608 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
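The snippet above is the kubelet systemd override the runner renders; later in the log it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes). Assembled as a file, and assuming nothing beyond what the log prints, it would look roughly like:

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (sketch from the log above)
    [Unit]
    Wants=crio.service

    [Service]
    # empty ExecStart= clears the packaged command before overriding it
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-163608 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.106

    [Install]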
	I0401 19:31:49.465862   71168 ssh_runner.go:195] Run: crio config
	I0401 19:31:49.529730   71168 cni.go:84] Creating CNI manager for ""
	I0401 19:31:49.529757   71168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:49.529771   71168 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:31:49.529799   71168 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.106 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-163608 NodeName:old-k8s-version-163608 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0401 19:31:49.529969   71168 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-163608"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:31:49.530037   71168 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0401 19:31:49.542642   71168 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:31:49.542724   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:31:49.557001   71168 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0401 19:31:49.579568   71168 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:31:49.599692   71168 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0401 19:31:49.619780   71168 ssh_runner.go:195] Run: grep 192.168.50.106	control-plane.minikube.internal$ /etc/hosts
	I0401 19:31:49.625597   71168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:49.643862   71168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:49.791391   71168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:31:49.814470   71168 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608 for IP: 192.168.50.106
	I0401 19:31:49.814497   71168 certs.go:194] generating shared ca certs ...
	I0401 19:31:49.814516   71168 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:49.814680   71168 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:31:49.814736   71168 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:31:49.814745   71168 certs.go:256] generating profile certs ...
	I0401 19:31:49.814852   71168 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/client.key
	I0401 19:31:49.814916   71168 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.key.f2de0982
	I0401 19:31:49.814964   71168 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.key
	I0401 19:31:49.815119   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:31:49.815178   71168 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:31:49.815195   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:31:49.815224   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:31:49.815266   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:31:49.815299   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:31:49.815362   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:49.816196   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:31:49.866842   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:31:49.913788   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:31:49.953223   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:31:50.004313   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 19:31:50.046972   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:31:50.086990   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:31:50.134907   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 19:31:50.163395   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:31:50.191901   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:31:50.221196   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:31:50.253024   71168 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:31:50.275781   71168 ssh_runner.go:195] Run: openssl version
	I0401 19:31:50.282795   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:31:50.296952   71168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:31:50.303868   71168 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:31:50.303950   71168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:31:50.312249   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:31:50.328985   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:31:50.345917   71168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:50.352041   71168 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:50.352103   71168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:50.358752   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:31:50.371702   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:31:50.384633   71168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:31:50.391229   71168 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:31:50.391277   71168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:31:50.397980   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
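The openssl x509 -hash calls and the *.0 symlinks above implement the standard OpenSSL CA directory layout: each certificate copied into /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash so the TLS stack can locate it. A minimal sketch of the same idea for one of the certificates in the log:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941, per the log
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"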
	I0401 19:31:50.412674   71168 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:31:50.418084   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:31:50.425102   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:31:50.431949   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:31:50.438665   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:31:50.446633   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:31:50.454688   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 19:31:50.462805   71168 kubeadm.go:391] StartCluster: {Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:31:50.462922   71168 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:31:50.462956   71168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:50.505702   71168 cri.go:89] found id: ""
	I0401 19:31:50.505788   71168 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:31:50.517916   71168 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:31:50.517934   71168 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:31:50.517940   71168 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:31:50.517995   71168 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:31:50.529459   71168 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:31:50.530408   71168 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-163608" does not appear in /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:31:50.531055   71168 kubeconfig.go:62] /home/jenkins/minikube-integration/18233-10493/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-163608" cluster setting kubeconfig missing "old-k8s-version-163608" context setting]
	I0401 19:31:50.532369   71168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:50.534578   71168 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:31:50.546275   71168 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.106
	I0401 19:31:50.546309   71168 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:31:50.546328   71168 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:31:50.546371   71168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:50.588826   71168 cri.go:89] found id: ""
	I0401 19:31:50.588881   71168 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:31:50.610933   71168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:31:50.622201   71168 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:31:50.622221   71168 kubeadm.go:156] found existing configuration files:
	
	I0401 19:31:50.622266   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:31:50.634006   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:31:50.634071   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:31:50.647891   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:31:50.662548   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:31:50.662596   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:31:50.674627   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:31:50.686739   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:31:50.686825   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:31:50.700400   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:31:50.712952   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:31:50.713014   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
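	The four probe-and-remove steps above all follow the same pattern: each kubeconfig-style file under /etc/kubernetes is grepped for the expected control-plane endpoint and deleted when the probe fails; since none of the files exist on this guest, every grep exits with status 2 and the rm -f is a no-op. A minimal shell sketch of that loop (paths and endpoint exactly as logged):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # keep the file only if it already points at the expected endpoint
	      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done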
	I0401 19:31:50.725616   71168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:31:50.739130   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:50.874552   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:51.568640   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:51.850288   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:52.009607   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
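	The five ssh_runner calls above are the piecewise control-plane restart: rather than a full `kubeadm init`, each phase is replayed against the rendered config. Collected into one sketch (binary path, config path and the v1.20.0 version are taken from the log lines above):

	    # replay the control-plane phases against the rendered kubeadm config
	    for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
	      sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	    done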
	I0401 19:31:52.122887   71168 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:31:52.122962   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:52.623084   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:51.827968   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:54.325686   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:56.325892   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:53.817748   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:53.818158   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:53.818184   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:53.818122   72041 retry.go:31] will retry after 2.747390243s: waiting for machine to come up
	I0401 19:31:56.567288   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:56.567711   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:56.567742   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:56.567657   72041 retry.go:31] will retry after 3.904473051s: waiting for machine to come up
	I0401 19:31:53.107786   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:55.108974   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:53.123783   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:53.623248   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:54.124004   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:54.623873   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:55.123458   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:55.623923   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:56.123441   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:56.623192   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:57.123012   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:57.624010   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:58.325934   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:00.825343   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:00.476692   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.477192   70284 main.go:141] libmachine: (no-preload-472858) Found IP for machine: 192.168.72.119
	I0401 19:32:00.477217   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has current primary IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.477223   70284 main.go:141] libmachine: (no-preload-472858) Reserving static IP address...
	I0401 19:32:00.477672   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "no-preload-472858", mac: "52:54:00:0a:2e:03", ip: "192.168.72.119"} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.477708   70284 main.go:141] libmachine: (no-preload-472858) DBG | skip adding static IP to network mk-no-preload-472858 - found existing host DHCP lease matching {name: "no-preload-472858", mac: "52:54:00:0a:2e:03", ip: "192.168.72.119"}
	I0401 19:32:00.477726   70284 main.go:141] libmachine: (no-preload-472858) Reserved static IP address: 192.168.72.119
	I0401 19:32:00.477742   70284 main.go:141] libmachine: (no-preload-472858) Waiting for SSH to be available...
	I0401 19:32:00.477770   70284 main.go:141] libmachine: (no-preload-472858) DBG | Getting to WaitForSSH function...
	I0401 19:32:00.479949   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.480306   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.480334   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.480475   70284 main.go:141] libmachine: (no-preload-472858) DBG | Using SSH client type: external
	I0401 19:32:00.480508   70284 main.go:141] libmachine: (no-preload-472858) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa (-rw-------)
	I0401 19:32:00.480538   70284 main.go:141] libmachine: (no-preload-472858) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:32:00.480554   70284 main.go:141] libmachine: (no-preload-472858) DBG | About to run SSH command:
	I0401 19:32:00.480566   70284 main.go:141] libmachine: (no-preload-472858) DBG | exit 0
	I0401 19:32:00.610108   70284 main.go:141] libmachine: (no-preload-472858) DBG | SSH cmd err, output: <nil>: 
	I0401 19:32:00.610458   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetConfigRaw
	I0401 19:32:00.611059   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetIP
	I0401 19:32:00.613496   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.613872   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.613906   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.614179   70284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/config.json ...
	I0401 19:32:00.614363   70284 machine.go:94] provisionDockerMachine start ...
	I0401 19:32:00.614382   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:00.614593   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:00.617019   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.617404   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.617430   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.617585   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:00.617780   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.617953   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.618098   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:00.618260   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:00.618451   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:00.618462   70284 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:32:00.730438   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:32:00.730473   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:32:00.730725   70284 buildroot.go:166] provisioning hostname "no-preload-472858"
	I0401 19:32:00.730754   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:32:00.730994   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:00.733932   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.734274   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.734308   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.734419   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:00.734591   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.734752   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.734918   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:00.735092   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:00.735296   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:00.735313   70284 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-472858 && echo "no-preload-472858" | sudo tee /etc/hostname
	I0401 19:32:00.865664   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-472858
	
	I0401 19:32:00.865702   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:00.868247   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.868619   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.868649   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.868845   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:00.869037   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.869244   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.869420   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:00.869671   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:00.869840   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:00.869859   70284 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-472858' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-472858/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-472858' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:32:00.991430   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:32:00.991460   70284 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:32:00.991484   70284 buildroot.go:174] setting up certificates
	I0401 19:32:00.991493   70284 provision.go:84] configureAuth start
	I0401 19:32:00.991504   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:32:00.991748   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetIP
	I0401 19:32:00.994239   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.994566   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.994596   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.994722   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:00.996735   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.997064   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.997090   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.997212   70284 provision.go:143] copyHostCerts
	I0401 19:32:00.997265   70284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:32:00.997281   70284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:32:00.997346   70284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:32:00.997493   70284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:32:00.997507   70284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:32:00.997533   70284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:32:00.997619   70284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:32:00.997629   70284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:32:00.997667   70284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:32:00.997733   70284 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.no-preload-472858 san=[127.0.0.1 192.168.72.119 localhost minikube no-preload-472858]
	I0401 19:32:01.212397   70284 provision.go:177] copyRemoteCerts
	I0401 19:32:01.212453   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:32:01.212473   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.214810   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.215170   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.215198   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.215398   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.215603   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.215761   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.215903   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:32:01.303113   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 19:32:01.331807   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 19:32:01.358429   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:32:01.384521   70284 provision.go:87] duration metric: took 393.005717ms to configureAuth
	I0401 19:32:01.384559   70284 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:32:01.384748   70284 config.go:182] Loaded profile config "no-preload-472858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0401 19:32:01.384862   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.387446   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.387828   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.387866   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.387966   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.388168   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.388356   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.388509   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.388663   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:01.388847   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:01.388867   70284 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:32:01.692586   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:32:01.692615   70284 machine.go:97] duration metric: took 1.078237975s to provisionDockerMachine
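	The `%!s(MISSING)` in the command above is Go's fmt placeholder for an argument that was not handed to the logger, not part of what actually ran on the guest; the effective provisioning step is presumably equivalent to:

	    # write the cri-o sysconfig drop-in and restart the runtime (values as logged)
	    sudo mkdir -p /etc/sysconfig
	    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
	      | sudo tee /etc/sysconfig/crio.minikube
	    sudo systemctl restart crio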
	I0401 19:32:01.692628   70284 start.go:293] postStartSetup for "no-preload-472858" (driver="kvm2")
	I0401 19:32:01.692644   70284 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:32:01.692668   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.692988   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:32:01.693012   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.696033   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.696405   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.696450   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.696603   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.696763   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.696901   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.697089   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:32:01.786626   70284 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:32:01.791703   70284 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:32:01.791726   70284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:32:01.791802   70284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:32:01.791901   70284 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:32:01.791991   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:32:01.803733   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:32:01.831768   70284 start.go:296] duration metric: took 139.126077ms for postStartSetup
	I0401 19:32:01.831804   70284 fix.go:56] duration metric: took 20.628199635s for fixHost
	I0401 19:32:01.831823   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.834218   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.834548   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.834574   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.834725   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.834901   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.835066   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.835188   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.835327   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:01.835544   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:01.835558   70284 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 19:31:57.607923   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:59.608857   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:02.106942   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:58.123200   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:58.624028   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:59.123026   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:59.623993   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:00.123039   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:00.623632   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:01.123204   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:01.623162   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:02.123264   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:02.623788   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:01.947198   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999921.892647753
	
	I0401 19:32:01.947267   70284 fix.go:216] guest clock: 1711999921.892647753
	I0401 19:32:01.947279   70284 fix.go:229] Guest: 2024-04-01 19:32:01.892647753 +0000 UTC Remote: 2024-04-01 19:32:01.831808507 +0000 UTC m=+359.938807685 (delta=60.839246ms)
	I0401 19:32:01.947305   70284 fix.go:200] guest clock delta is within tolerance: 60.839246ms
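	fix.go compares the guest wall clock (the `date +%s.%N` run above, again logged with missing-argument placeholders) against the host clock and only resyncs when the delta exceeds its tolerance; here the roughly 61 ms skew passes. A rough, hypothetical equivalent of that check from the host side (user and IP taken from the log, key-based SSH assumed):

	    guest=$(ssh docker@192.168.72.119 'date +%s.%N')   # guest clock
	    host=$(date +%s.%N)                                # host clock
	    echo "delta: $(echo "$guest - $host" | bc) s"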
	I0401 19:32:01.947317   70284 start.go:83] releasing machines lock for "no-preload-472858", held for 20.743748352s
	I0401 19:32:01.947347   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.947621   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetIP
	I0401 19:32:01.950387   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.950719   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.950750   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.950940   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.951438   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.951631   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.951681   70284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:32:01.951737   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.951854   70284 ssh_runner.go:195] Run: cat /version.json
	I0401 19:32:01.951881   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.954468   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.954603   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.954780   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.954815   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.954932   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.954960   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.954984   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.955193   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.955230   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.955341   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.955388   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.955510   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.955501   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:32:01.955670   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:32:02.035332   70284 ssh_runner.go:195] Run: systemctl --version
	I0401 19:32:02.061178   70284 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:32:02.220309   70284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:32:02.227811   70284 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:32:02.227885   70284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:32:02.247605   70284 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:32:02.247634   70284 start.go:494] detecting cgroup driver to use...
	I0401 19:32:02.247690   70284 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:32:02.265463   70284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:32:02.280175   70284 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:32:02.280246   70284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:32:02.295003   70284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:32:02.315072   70284 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:32:02.449108   70284 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:32:02.627772   70284 docker.go:233] disabling docker service ...
	I0401 19:32:02.627850   70284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:32:02.642924   70284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:32:02.657038   70284 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:32:02.787085   70284 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:32:02.918355   70284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:32:02.934828   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:32:02.955495   70284 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:32:02.955548   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:02.966690   70284 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:32:02.966754   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:02.977812   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:02.989329   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:03.000727   70284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:32:03.012341   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:03.023305   70284 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:03.044213   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:03.055614   70284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:32:03.065880   70284 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:32:03.065927   70284 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:32:03.080514   70284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:32:03.090798   70284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:32:03.224199   70284 ssh_runner.go:195] Run: sudo systemctl restart crio
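	Pulled together, the cri-o reconfiguration the runner just performed amounts to the following edits to /etc/crio/crio.conf.d/02-crio.conf plus the kernel-side prerequisites (commands taken from the log, abridged: the default_sysctls injection for ip_unprivileged_port_start is omitted; the modprobe is the fallback after the bridge sysctl probe failed):

	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	    sudo modprobe br_netfilter                  # bridge-nf-call-iptables sysctl was absent
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sudo systemctl daemon-reload && sudo systemctl restart crio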
	I0401 19:32:03.389414   70284 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:32:03.389482   70284 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:32:03.395493   70284 start.go:562] Will wait 60s for crictl version
	I0401 19:32:03.395539   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.399739   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:32:03.441020   70284 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:32:03.441114   70284 ssh_runner.go:195] Run: crio --version
	I0401 19:32:03.474572   70284 ssh_runner.go:195] Run: crio --version
	I0401 19:32:03.511681   70284 out.go:177] * Preparing Kubernetes v1.30.0-rc.0 on CRI-O 1.29.1 ...
	I0401 19:32:02.825628   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:04.825973   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:03.513067   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetIP
	I0401 19:32:03.515901   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:03.516281   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:03.516315   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:03.516523   70284 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0401 19:32:03.521197   70284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:32:03.536333   70284 kubeadm.go:877] updating cluster {Name:no-preload-472858 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-472858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:32:03.536459   70284 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0401 19:32:03.536507   70284 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:32:03.582858   70284 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.0". assuming images are not preloaded.
	I0401 19:32:03.582887   70284 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.0 registry.k8s.io/kube-controller-manager:v1.30.0-rc.0 registry.k8s.io/kube-scheduler:v1.30.0-rc.0 registry.k8s.io/kube-proxy:v1.30.0-rc.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 19:32:03.582970   70284 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:03.583026   70284 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:03.583032   70284 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.583071   70284 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.583161   70284 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.582997   70284 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:03.583238   70284 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0401 19:32:03.583388   70284 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:03.584618   70284 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.584626   70284 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:03.584630   70284 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:03.584619   70284 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:03.584640   70284 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0401 19:32:03.584626   70284 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:03.584701   70284 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.584856   70284 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.730086   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.752217   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.765621   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.766526   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:03.770748   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0401 19:32:03.777614   70284 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0401 19:32:03.777672   70284 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.777699   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.840814   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:03.852416   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:03.869889   70284 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" does not exist at hash "e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3" in container runtime
	I0401 19:32:03.869929   70284 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.869979   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.874654   70284 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" does not exist at hash "ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a" in container runtime
	I0401 19:32:03.874693   70284 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.874737   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.899207   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:03.906139   70284 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" does not exist at hash "fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5" in container runtime
	I0401 19:32:03.906182   70284 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:03.906227   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.996916   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.996987   70284 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.0" does not exist at hash "33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652" in container runtime
	I0401 19:32:03.997022   70284 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0401 19:32:03.997045   70284 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:03.997053   70284 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:03.997054   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.997089   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.997128   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.997142   70284 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0401 19:32:03.997090   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.997164   70284 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:03.997194   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.997211   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:04.090272   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:04.090548   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0401 19:32:04.090639   70284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0401 19:32:04.102041   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:04.102130   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0
	I0401 19:32:04.102168   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0
	I0401 19:32:04.102226   70284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0401 19:32:04.102241   70284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0401 19:32:04.102278   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:04.108100   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0
	I0401 19:32:04.108192   70284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0401 19:32:04.182707   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0401 19:32:04.182747   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0401 19:32:04.182759   70284 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0401 19:32:04.182815   70284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0401 19:32:04.182820   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0401 19:32:04.182883   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0
	I0401 19:32:04.182988   70284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0401 19:32:04.186135   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0 (exists)
	I0401 19:32:04.186175   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0 (exists)
	I0401 19:32:04.186221   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0 (exists)
	I0401 19:32:04.186242   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0401 19:32:04.186324   70284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0401 19:32:06.352362   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.169442796s)
	I0401 19:32:06.352398   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0401 19:32:06.352419   70284 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0401 19:32:06.352416   70284 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0: (2.16957379s)
	I0401 19:32:06.352443   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0401 19:32:06.352465   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0401 19:32:06.352465   70284 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0: (2.16945688s)
	I0401 19:32:06.352479   70284 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.166139431s)
	I0401 19:32:06.352490   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0401 19:32:06.352491   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0 (exists)
	I0401 19:32:04.109989   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:06.294038   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
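
The pod_ready lines above (and throughout this log) come from a wait loop that repeatedly reads the pod's Ready condition until it reports True or a timeout elapses. A rough Go sketch of the same idea, shelling out to kubectl with a jsonpath query rather than using minikube's internal client; the pod name and namespace are taken from the log, but the helper itself and its intervals are illustrative only:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPodReady polls a pod's Ready condition via kubectl until it becomes
// "True" or the deadline passes, mirroring the pod_ready wait in the log.
func waitPodReady(ns, pod string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "pod", "-n", ns, pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second) // not Ready yet (or not found): poll again
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, pod, timeout)
}

func main() {
	fmt.Println(waitPodReady("kube-system", "metrics-server-57f55c9bc5-g7mg2", 5*time.Minute))
}
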
	I0401 19:32:03.123452   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:03.623784   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:04.123649   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:04.623076   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:05.123822   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:05.623487   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:06.123635   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:06.623689   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:07.123919   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:07.623237   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:06.826244   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:09.326937   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:09.261547   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0: (2.909056315s)
	I0401 19:32:09.261572   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0 from cache
	I0401 19:32:09.261600   70284 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0401 19:32:09.261668   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0401 19:32:11.739636   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0: (2.477945807s)
	I0401 19:32:11.739667   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0 from cache
	I0401 19:32:11.739702   70284 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0401 19:32:11.739761   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0401 19:32:08.609901   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:11.114752   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:08.123689   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:08.623160   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:09.124002   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:09.623090   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:10.123049   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:10.623111   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:11.123042   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:11.623980   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:12.123074   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:12.623530   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:11.826409   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:13.828437   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:16.326097   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:13.195232   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0: (1.455440816s)
	I0401 19:32:13.195267   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0 from cache
	I0401 19:32:13.195299   70284 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0401 19:32:13.195350   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0401 19:32:13.607042   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:16.107993   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:13.123428   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:13.623899   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:14.123324   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:14.623889   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:15.123496   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:15.623779   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:16.124012   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:16.623620   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:17.123867   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:17.623014   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:18.326127   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:20.326575   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:17.202247   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.006869591s)
	I0401 19:32:17.202284   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0401 19:32:17.202315   70284 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0401 19:32:17.202364   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0401 19:32:17.962735   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0401 19:32:17.962785   70284 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0401 19:32:17.962850   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0401 19:32:20.235136   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0: (2.272262595s)
	I0401 19:32:20.235161   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0 from cache
	I0401 19:32:20.235193   70284 cache_images.go:123] Successfully loaded all cached images
	I0401 19:32:20.235197   70284 cache_images.go:92] duration metric: took 16.652290938s to LoadCachedImages
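
The image-cache sequence that finishes here follows a simple pattern for each tarball: stat it on the node, skip the transfer when it is already present (the "copy: skipping ... (exists)" lines), then stream it into CRI-O's image store with "sudo podman load -i". A minimal Go sketch of that per-image step; it runs the commands locally for illustration, whereas minikube runs them over SSH on the guest, and error handling is simplified:

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage checks whether the image tarball is already on disk and then
// loads it into the container runtime with podman, the same two commands the
// log shows for coredns, kube-apiserver, etcd, and the rest.
func loadCachedImage(tarPath string) error {
	if err := exec.Command("stat", tarPath).Run(); err != nil {
		// In minikube this is where the tarball would be copied to the node first.
		return fmt.Errorf("tarball %s not present: %w", tarPath, err)
	}
	if out, err := exec.Command("sudo", "podman", "load", "-i", tarPath).CombinedOutput(); err != nil {
		return fmt.Errorf("podman load failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := loadCachedImage("/var/lib/minikube/images/coredns_v1.11.1"); err != nil {
		fmt.Println(err)
	}
}
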
	I0401 19:32:20.235205   70284 kubeadm.go:928] updating node { 192.168.72.119 8443 v1.30.0-rc.0 crio true true} ...
	I0401 19:32:20.235332   70284 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-472858 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-472858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:32:20.235402   70284 ssh_runner.go:195] Run: crio config
	I0401 19:32:20.296015   70284 cni.go:84] Creating CNI manager for ""
	I0401 19:32:20.296039   70284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:32:20.296050   70284 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:32:20.296074   70284 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.119 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-472858 NodeName:no-preload-472858 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:32:20.296217   70284 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-472858"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:32:20.296275   70284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.0
	I0401 19:32:20.307937   70284 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:32:20.308009   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:32:20.318571   70284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0401 19:32:20.339284   70284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0401 19:32:20.358601   70284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0401 19:32:20.379394   70284 ssh_runner.go:195] Run: grep 192.168.72.119	control-plane.minikube.internal$ /etc/hosts
	I0401 19:32:20.383948   70284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:32:20.397559   70284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:32:20.549147   70284 ssh_runner.go:195] Run: sudo systemctl start kubelet
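
Before the kubelet is started, the hosts entry for control-plane.minikube.internal is rewritten with the grep/echo/cp one-liner shown a few lines above, which drops any stale mapping and appends the current IP. A hedged Go equivalent of that idempotent rewrite (paths and file handling are illustrative; minikube performs this on the guest over SSH with sudo):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHostsEntry removes any existing line ending in "\t<host>" and appends a
// fresh "<ip>\t<host>" mapping, which is what the shell one-liner in the log does.
func pinHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Work on a scratch copy rather than the real /etc/hosts.
	_ = os.WriteFile("/tmp/hosts.example", []byte("127.0.0.1\tlocalhost\n"), 0644)
	fmt.Println(pinHostsEntry("/tmp/hosts.example", "192.168.72.119", "control-plane.minikube.internal"))
}
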
	I0401 19:32:20.568027   70284 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858 for IP: 192.168.72.119
	I0401 19:32:20.568051   70284 certs.go:194] generating shared ca certs ...
	I0401 19:32:20.568070   70284 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:32:20.568273   70284 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:32:20.568337   70284 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:32:20.568352   70284 certs.go:256] generating profile certs ...
	I0401 19:32:20.568453   70284 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/client.key
	I0401 19:32:20.568534   70284 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/apiserver.key.bfc8ff8f
	I0401 19:32:20.568586   70284 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/proxy-client.key
	I0401 19:32:20.568691   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:32:20.568718   70284 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:32:20.568728   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:32:20.568747   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:32:20.568773   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:32:20.568795   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:32:20.568830   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:32:20.569519   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:32:20.605218   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:32:20.650321   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:32:20.676884   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:32:20.705378   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 19:32:20.733068   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:32:20.767387   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:32:20.793543   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:32:20.820843   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:32:20.848364   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:32:20.877551   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:32:20.904650   70284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:32:20.922876   70284 ssh_runner.go:195] Run: openssl version
	I0401 19:32:20.929441   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:32:20.942496   70284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:32:20.948011   70284 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:32:20.948080   70284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:32:20.954320   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:32:20.968060   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:32:20.981591   70284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:32:20.986660   70284 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:32:20.986706   70284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:32:20.993394   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:32:21.006530   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:32:21.020014   70284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:32:21.025507   70284 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:32:21.025560   70284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:32:21.032433   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:32:21.047002   70284 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:32:21.052551   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:32:21.059875   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:32:21.067243   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:32:21.074304   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:32:21.080978   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:32:21.088051   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
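
Each of the openssl "-checkend 86400" runs above asks whether a certificate expires within the next 24 hours (86400 seconds); a non-zero exit is what would trigger regeneration. The same check can be done natively, as in this small Go sketch (the certificate path is taken from the log; the helper is illustrative, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question "openssl x509 -checkend 86400" answers in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
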
	I0401 19:32:21.095219   70284 kubeadm.go:391] StartCluster: {Name:no-preload-472858 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-472858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:32:21.095325   70284 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:32:21.095403   70284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:32:21.144103   70284 cri.go:89] found id: ""
	I0401 19:32:21.144187   70284 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:32:21.157222   70284 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:32:21.157241   70284 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:32:21.157246   70284 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:32:21.157290   70284 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:32:21.169027   70284 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:32:21.170123   70284 kubeconfig.go:125] found "no-preload-472858" server: "https://192.168.72.119:8443"
	I0401 19:32:21.172523   70284 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:32:21.183801   70284 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.119
	I0401 19:32:21.183838   70284 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:32:21.183847   70284 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:32:21.183892   70284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:32:21.229279   70284 cri.go:89] found id: ""
	I0401 19:32:21.229357   70284 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:32:21.249719   70284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:32:21.261894   70284 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:32:21.261929   70284 kubeadm.go:156] found existing configuration files:
	
	I0401 19:32:21.261984   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:32:21.273961   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:32:21.274026   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:32:21.286746   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:32:21.297920   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:32:21.297986   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:32:21.308793   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:32:21.319612   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:32:21.319658   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:32:21.332730   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:32:21.344752   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:32:21.344810   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:32:21.355821   70284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:32:21.366649   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:21.482208   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:18.607685   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:20.607824   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:18.123795   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:18.623529   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:19.123446   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:19.623223   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:20.123133   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:20.623058   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:21.123302   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:21.623115   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:22.123810   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:22.623878   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:22.826056   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:24.826357   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:22.312148   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:22.533156   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:22.620390   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
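
Rather than a full "kubeadm init", the restart path replays the individual init phases seen above: certs, kubeconfig, kubelet-start, control-plane, and etcd, all against the freshly written /var/tmp/minikube/kubeadm.yaml. A simplified Go sketch of that sequence; it drops the "env PATH=..." wrapper from the log and keeps only basic error handling:

package main

import (
	"fmt"
	"os/exec"
)

// runInitPhases replays the individual "kubeadm init phase" steps from the log
// instead of a full "kubeadm init", reusing the cluster's existing state.
func runInitPhases() error {
	kubeadm := "/var/lib/minikube/binaries/v1.30.0-rc.0/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(append([]string{kubeadm}, p...), "--config", cfg)
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm %v failed: %v\n%s", p, err, out)
		}
	}
	return nil
}

func main() {
	if err := runInitPhases(); err != nil {
		fmt.Println(err)
	}
}
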
	I0401 19:32:22.704948   70284 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:32:22.705039   70284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.205114   70284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.706000   70284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.725209   70284 api_server.go:72] duration metric: took 1.020261742s to wait for apiserver process to appear ...
	I0401 19:32:23.725243   70284 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:32:23.725264   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:23.725749   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": dial tcp 192.168.72.119:8443: connect: connection refused
	I0401 19:32:24.226383   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:23.107450   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:25.109899   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:23.123507   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.623244   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:24.123444   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:24.623346   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:25.123834   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:25.623814   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:26.124028   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:26.623428   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:27.123592   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:27.623451   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:27.327961   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:29.826272   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:29.226831   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:29.226876   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:27.607575   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:29.608427   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:32.106668   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:28.123454   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:28.623502   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:29.123265   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:29.623449   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:30.123525   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:30.623634   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:31.123972   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:31.623023   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:32.123346   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:32.623839   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:32.325638   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:34.325777   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:36.326510   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:34.227668   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:34.227723   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:34.606929   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:36.607515   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:33.123673   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:33.623088   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:34.123230   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:34.623967   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:35.123420   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:35.623499   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:36.123152   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:36.623963   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:37.123682   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:37.623536   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:38.829585   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:41.325607   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:39.228117   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:39.228164   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:39.107473   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:41.607043   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:38.123238   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:38.623831   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:39.123180   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:39.623801   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:40.123478   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:40.623651   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:41.123687   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:41.624016   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:42.123891   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:42.623493   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:43.326457   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:45.827310   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:44.228934   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:44.228982   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:44.259601   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": read tcp 192.168.72.1:37026->192.168.72.119:8443: read: connection reset by peer
	I0401 19:32:44.726186   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:44.726759   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": dial tcp 192.168.72.119:8443: connect: connection refused
	I0401 19:32:45.226347   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
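
The healthz probing above treats "connection refused", connection resets, and client timeouts as retryable, and simply keeps asking https://192.168.72.119:8443/healthz until the apiserver answers or the overall wait expires. A hedged Go sketch of that poll loop; the interval and deadline are illustrative, and certificate verification is skipped here because the cluster CA is not in the host trust store (minikube itself pins its own CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 OK
// or the deadline passes, echoing the retry pattern in the log.
func waitForHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(interval) // refused, reset, or timed out: try again
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.72.119:8443/healthz", 500*time.Millisecond, 4*time.Minute))
}
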
	I0401 19:32:43.607936   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:46.106775   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:43.123504   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:43.623527   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:44.124016   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:44.623931   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:45.123188   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:45.623649   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:46.123570   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:46.623179   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:47.123273   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:47.623842   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:48.325252   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:50.327365   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:50.226859   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:50.226907   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:48.109152   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:50.607327   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:48.123759   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:48.623092   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:49.123174   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:49.623986   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:50.123301   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:50.623694   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:51.123466   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:51.623618   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:52.123073   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:32:52.123172   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:32:52.164635   71168 cri.go:89] found id: ""
	I0401 19:32:52.164656   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.164663   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:32:52.164669   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:32:52.164738   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:32:52.202531   71168 cri.go:89] found id: ""
	I0401 19:32:52.202560   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.202572   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:32:52.202580   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:32:52.202653   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:32:52.247667   71168 cri.go:89] found id: ""
	I0401 19:32:52.247693   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.247703   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:32:52.247714   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:32:52.247774   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:32:52.289029   71168 cri.go:89] found id: ""
	I0401 19:32:52.289054   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.289062   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:32:52.289068   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:32:52.289114   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:32:52.326820   71168 cri.go:89] found id: ""
	I0401 19:32:52.326864   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.326875   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:32:52.326882   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:32:52.326944   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:32:52.362793   71168 cri.go:89] found id: ""
	I0401 19:32:52.362827   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.362838   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:32:52.362845   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:32:52.362950   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:32:52.400174   71168 cri.go:89] found id: ""
	I0401 19:32:52.400204   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.400215   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:32:52.400222   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:32:52.400282   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:32:52.436027   71168 cri.go:89] found id: ""
	I0401 19:32:52.436056   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.436066   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:32:52.436085   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:32:52.436099   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:32:52.477246   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:32:52.477272   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:32:52.529215   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:32:52.529247   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:32:52.544695   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:32:52.544724   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:32:52.677816   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:32:52.677849   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:32:52.677877   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
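
When no kube-apiserver process turns up, the tooling falls back to enumerating each control-plane component with "crictl ps -a --quiet --name=<component>" (empty output means no container yet) and then dumps the kubelet, dmesg, and CRI-O journals. A short Go sketch of that per-component lookup, with the component names and command taken from the log and everything else illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs runs "sudo crictl ps -a --quiet --name=<name>", which
// prints one container ID per line, or nothing when no container exists.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
	}
}
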
	I0401 19:32:52.825288   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:54.826043   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:55.228105   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:55.228139   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:53.106774   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:55.107668   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:55.241224   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:55.256975   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:32:55.257045   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:32:55.298280   71168 cri.go:89] found id: ""
	I0401 19:32:55.298307   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.298319   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:32:55.298326   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:32:55.298397   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:32:55.337707   71168 cri.go:89] found id: ""
	I0401 19:32:55.337732   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.337739   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:32:55.337745   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:32:55.337791   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:32:55.381455   71168 cri.go:89] found id: ""
	I0401 19:32:55.381479   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.381490   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:32:55.381496   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:32:55.381557   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:32:55.420715   71168 cri.go:89] found id: ""
	I0401 19:32:55.420739   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.420749   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:32:55.420756   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:32:55.420820   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:32:55.459546   71168 cri.go:89] found id: ""
	I0401 19:32:55.459575   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.459583   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:32:55.459588   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:32:55.459634   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:32:55.504240   71168 cri.go:89] found id: ""
	I0401 19:32:55.504267   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.504277   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:32:55.504285   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:32:55.504368   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:32:55.539399   71168 cri.go:89] found id: ""
	I0401 19:32:55.539426   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.539437   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:32:55.539443   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:32:55.539509   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:32:55.583823   71168 cri.go:89] found id: ""
	I0401 19:32:55.583861   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.583872   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:32:55.583881   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:32:55.583895   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:32:55.645489   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:32:55.645523   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:32:55.712883   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:32:55.712920   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:32:55.734890   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:32:55.734923   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:32:55.853068   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:32:55.853089   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:32:55.853102   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
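In the log-gathering loop above every `crictl ps -a --quiet --name=<component>` call returns zero containers and the "describe nodes" step fails with "The connection to the server localhost:8443 was refused", i.e. the kube-apiserver for this v1.20.0 profile never came up, so only the kubelet, dmesg, CRI-O and container-status logs can be collected. A minimal manual re-check of the same two conditions, reusing the commands and paths shown in the log (the profile name below is a placeholder, not taken from this run):

    # list kube-apiserver containers known to the CRI runtime, running or exited
    minikube ssh -p <profile> -- sudo crictl ps -a --name kube-apiserver
    # repeat the describe-nodes call that fails above
    minikube ssh -p <profile> -- sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig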
	I0401 19:32:57.325965   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:59.827753   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:00.228533   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:33:00.228582   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:57.607203   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:59.610732   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:02.108676   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
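The interleaved pod_ready lines (processes 70687 and 70962) show metrics-server pods stuck at Ready=False while their clusters restart. A rough way to inspect the same condition by hand, assuming the kubectl context carries the profile name and that the addon uses the usual k8s-app=metrics-server label (neither is confirmed by this log):

    kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context <profile> -n kube-system describe pod -l k8s-app=metrics-server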
	I0401 19:32:58.435925   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:58.450910   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:32:58.450980   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:32:58.487470   71168 cri.go:89] found id: ""
	I0401 19:32:58.487495   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.487506   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:32:58.487514   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:32:58.487562   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:32:58.529513   71168 cri.go:89] found id: ""
	I0401 19:32:58.529534   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.529543   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:32:58.529547   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:32:58.529592   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:32:58.574170   71168 cri.go:89] found id: ""
	I0401 19:32:58.574197   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.574205   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:32:58.574211   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:32:58.574258   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:32:58.615379   71168 cri.go:89] found id: ""
	I0401 19:32:58.615405   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.615414   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:32:58.615419   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:32:58.615468   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:32:58.655496   71168 cri.go:89] found id: ""
	I0401 19:32:58.655523   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.655534   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:32:58.655542   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:32:58.655593   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:32:58.697199   71168 cri.go:89] found id: ""
	I0401 19:32:58.697229   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.697238   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:32:58.697246   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:32:58.697312   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:32:58.735618   71168 cri.go:89] found id: ""
	I0401 19:32:58.735643   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.735651   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:32:58.735656   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:32:58.735701   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:32:58.780583   71168 cri.go:89] found id: ""
	I0401 19:32:58.780613   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.780624   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:32:58.780635   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:32:58.780649   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:32:58.829717   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:32:58.829743   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:32:58.844836   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:32:58.844866   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:32:58.923138   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:32:58.923157   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:32:58.923172   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:32:58.993680   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:32:58.993713   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:01.538920   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:01.556943   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:01.557017   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:01.608397   71168 cri.go:89] found id: ""
	I0401 19:33:01.608417   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.608425   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:01.608430   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:01.608490   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:01.666573   71168 cri.go:89] found id: ""
	I0401 19:33:01.666599   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.666609   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:01.666615   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:01.666674   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:01.726308   71168 cri.go:89] found id: ""
	I0401 19:33:01.726331   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.726341   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:01.726347   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:01.726412   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:01.773095   71168 cri.go:89] found id: ""
	I0401 19:33:01.773118   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.773125   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:01.773131   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:01.773189   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:01.813011   71168 cri.go:89] found id: ""
	I0401 19:33:01.813034   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.813042   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:01.813048   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:01.813096   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:01.859124   71168 cri.go:89] found id: ""
	I0401 19:33:01.859151   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.859161   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:01.859169   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:01.859228   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:01.904491   71168 cri.go:89] found id: ""
	I0401 19:33:01.904519   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.904530   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:01.904537   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:01.904596   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:01.946768   71168 cri.go:89] found id: ""
	I0401 19:33:01.946794   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.946804   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:01.946815   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:01.946829   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:02.026315   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:02.026362   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:02.072861   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:02.072893   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:02.132064   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:02.132105   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:02.151545   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:02.151575   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:02.234059   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:02.325806   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:04.327258   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:03.215901   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:33:03.215933   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:33:03.215947   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:03.264913   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:33:03.264946   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:33:03.264961   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:03.272548   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:33:03.272580   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:33:03.726254   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:03.731022   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:03.731050   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:04.225595   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:04.237757   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:04.237783   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:04.725330   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:04.734019   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:04.734047   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:05.225303   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:05.242774   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:05.242811   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:05.726350   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:05.730775   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:05.730838   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:06.225345   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:06.229749   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:06.229793   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:06.725687   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:06.730607   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:06.730640   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
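Each 500 body above is the apiserver's per-check healthz breakdown: the aggregate /healthz endpoint keeps failing while any [-] entry (here the rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes and apiservice-discovery-controller post-start hooks) is still pending, and flips to 200 only once every check reports ok. The same verbose breakdown can be requested directly once a working kubeconfig is available, e.g. (context name assumed to match the profile, not read from this run):

    kubectl --context <profile> get --raw '/healthz?verbose'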
	I0401 19:33:04.112109   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:06.606160   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:04.734559   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:04.755071   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:04.755130   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:04.798316   71168 cri.go:89] found id: ""
	I0401 19:33:04.798345   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.798358   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:04.798366   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:04.798426   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:04.840011   71168 cri.go:89] found id: ""
	I0401 19:33:04.840032   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.840043   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:04.840050   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:04.840106   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:04.883686   71168 cri.go:89] found id: ""
	I0401 19:33:04.883713   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.883725   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:04.883733   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:04.883795   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:04.933810   71168 cri.go:89] found id: ""
	I0401 19:33:04.933844   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.933855   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:04.933863   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:04.933925   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:04.983118   71168 cri.go:89] found id: ""
	I0401 19:33:04.983139   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.983146   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:04.983151   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:04.983207   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:05.036146   71168 cri.go:89] found id: ""
	I0401 19:33:05.036169   71168 logs.go:276] 0 containers: []
	W0401 19:33:05.036179   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:05.036186   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:05.036242   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:05.086269   71168 cri.go:89] found id: ""
	I0401 19:33:05.086296   71168 logs.go:276] 0 containers: []
	W0401 19:33:05.086308   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:05.086315   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:05.086378   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:05.140893   71168 cri.go:89] found id: ""
	I0401 19:33:05.140914   71168 logs.go:276] 0 containers: []
	W0401 19:33:05.140922   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:05.140931   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:05.140946   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:05.161222   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:05.161249   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:05.262254   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:05.262276   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:05.262289   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:05.352880   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:05.352908   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:05.400720   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:05.400748   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:07.954227   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:07.225774   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:07.230656   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:07.230684   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:07.726299   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:07.731793   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:07.731830   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:08.225362   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:08.229716   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:08.229755   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:08.725315   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:08.733428   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 200:
	ok
	I0401 19:33:08.739761   70284 api_server.go:141] control plane version: v1.30.0-rc.0
	I0401 19:33:08.739788   70284 api_server.go:131] duration metric: took 45.014537527s to wait for apiserver health ...
	I0401 19:33:08.739796   70284 cni.go:84] Creating CNI manager for ""
	I0401 19:33:08.739802   70284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:33:08.741701   70284 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:33:06.825165   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:08.829987   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:11.327172   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:08.743011   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:33:08.758184   70284 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
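Once the apiserver reports healthy, the bridge CNI configuration is written to /etc/cni/net.d/1-k8s.conflist (457 bytes, per the scp line above). The file's contents are not included in this log; to see what was actually deployed on the node one could run (profile name is a placeholder):

    minikube ssh -p <profile> -- sudo cat /etc/cni/net.d/1-k8s.conflist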
	I0401 19:33:08.778975   70284 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:33:08.789725   70284 system_pods.go:59] 8 kube-system pods found
	I0401 19:33:08.789763   70284 system_pods.go:61] "coredns-7db6d8ff4d-gdml5" [039c8887-dff0-40e5-b8b5-00ef2f4a21cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:33:08.789771   70284 system_pods.go:61] "etcd-no-preload-472858" [09086659-e20f-40da-b01f-3690e110ffeb] Running
	I0401 19:33:08.789781   70284 system_pods.go:61] "kube-apiserver-no-preload-472858" [5139434c-3d23-4736-86ad-28253c89f7da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0401 19:33:08.789794   70284 system_pods.go:61] "kube-controller-manager-no-preload-472858" [965d600a-612e-4625-b883-7105f9166503] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0401 19:33:08.789806   70284 system_pods.go:61] "kube-proxy-7c22p" [903412f5-252c-41f3-81ac-1ae47522b403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:33:08.789820   70284 system_pods.go:61] "kube-scheduler-no-preload-472858" [936981be-fc5e-4865-811c-936fab59f37b] Running
	I0401 19:33:08.789832   70284 system_pods.go:61] "metrics-server-569cc877fc-wlr7k" [14010e9a-9662-46c9-bc46-cc6d19c0cddf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:33:08.789839   70284 system_pods.go:61] "storage-provisioner" [2e5d9f78-e74c-4b3b-8878-e4bd8ce34108] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:33:08.789861   70284 system_pods.go:74] duration metric: took 10.868458ms to wait for pod list to return data ...
	I0401 19:33:08.789874   70284 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:33:08.793853   70284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:33:08.793883   70284 node_conditions.go:123] node cpu capacity is 2
	I0401 19:33:08.793897   70284 node_conditions.go:105] duration metric: took 4.016996ms to run NodePressure ...
	I0401 19:33:08.793916   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:33:09.081698   70284 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0401 19:33:09.085681   70284 kubeadm.go:733] kubelet initialised
	I0401 19:33:09.085699   70284 kubeadm.go:734] duration metric: took 3.976973ms waiting for restarted kubelet to initialise ...
	I0401 19:33:09.085705   70284 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:33:09.090647   70284 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:11.102738   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
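The pod_ready.go lines above report pods whose Ready condition is still False while the runner waits. A minimal client-go sketch of that check (an assumption about the mechanism, not minikube's pod_ready.go; the kubeconfig path, namespace, and pod name are copied from the log purely for illustration):

// Hedged sketch: fetch a pod and inspect its Ready condition, the status that
// the `has status "Ready":"False"` lines are polling for.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(cs, "kube-system", "coredns-7db6d8ff4d-gdml5")
	fmt.Println(ready, err)
}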
	I0401 19:33:08.608194   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:11.109659   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:07.970794   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:07.970850   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:08.013694   71168 cri.go:89] found id: ""
	I0401 19:33:08.013719   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.013729   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:08.013737   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:08.013810   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:08.050810   71168 cri.go:89] found id: ""
	I0401 19:33:08.050849   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.050861   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:08.050868   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:08.050932   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:08.092056   71168 cri.go:89] found id: ""
	I0401 19:33:08.092086   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.092096   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:08.092102   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:08.092157   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:08.133171   71168 cri.go:89] found id: ""
	I0401 19:33:08.133195   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.133205   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:08.133212   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:08.133271   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:08.173997   71168 cri.go:89] found id: ""
	I0401 19:33:08.174023   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.174034   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:08.174041   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:08.174102   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:08.212740   71168 cri.go:89] found id: ""
	I0401 19:33:08.212768   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.212778   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:08.212785   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:08.212831   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:08.254815   71168 cri.go:89] found id: ""
	I0401 19:33:08.254837   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.254847   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:08.254854   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:08.254909   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:08.295347   71168 cri.go:89] found id: ""
	I0401 19:33:08.295375   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.295382   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
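Each "listing CRI containers" / `found id: ""` pair above corresponds to running crictl over SSH and counting the returned container IDs; with the control plane down, every query comes back empty. A rough sketch of that pattern (an assumption for illustration, not minikube's cri.go):

// Hedged sketch: run `crictl ps -a --quiet --name=X` and count non-empty IDs,
// the kind of check behind the "0 containers" / "No container was found" lines.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainers("kube-apiserver")
	fmt.Printf("%d containers: %v (err=%v)\n", len(ids), ids, err)
}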
	I0401 19:33:08.295390   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:08.295402   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:08.311574   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:08.311600   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:08.405437   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:08.405455   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:08.405470   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:08.483687   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:08.483722   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:08.526132   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:08.526158   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:11.076590   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:11.093846   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:11.093983   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:11.146046   71168 cri.go:89] found id: ""
	I0401 19:33:11.146073   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.146083   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:11.146088   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:11.146146   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:11.193751   71168 cri.go:89] found id: ""
	I0401 19:33:11.193782   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.193793   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:11.193801   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:11.193873   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:11.242150   71168 cri.go:89] found id: ""
	I0401 19:33:11.242178   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.242189   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:11.242197   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:11.242271   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:11.294063   71168 cri.go:89] found id: ""
	I0401 19:33:11.294092   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.294103   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:11.294110   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:11.294175   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:11.334764   71168 cri.go:89] found id: ""
	I0401 19:33:11.334784   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.334791   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:11.334797   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:11.334846   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:11.372770   71168 cri.go:89] found id: ""
	I0401 19:33:11.372789   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.372795   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:11.372806   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:11.372871   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:11.413233   71168 cri.go:89] found id: ""
	I0401 19:33:11.413261   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.413271   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:11.413278   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:11.413337   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:11.456044   71168 cri.go:89] found id: ""
	I0401 19:33:11.456073   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.456084   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:11.456093   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:11.456103   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:11.471157   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:11.471183   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:11.550489   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:11.550508   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:11.550523   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:11.635360   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:11.635389   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:11.680683   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:11.680713   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:13.827425   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:16.325563   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:13.104812   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:15.602114   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:13.607926   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:16.107219   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:14.235295   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:14.251513   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:14.251590   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:14.291688   71168 cri.go:89] found id: ""
	I0401 19:33:14.291715   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.291725   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:14.291732   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:14.291792   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:14.332030   71168 cri.go:89] found id: ""
	I0401 19:33:14.332051   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.332060   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:14.332068   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:14.332132   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:14.372098   71168 cri.go:89] found id: ""
	I0401 19:33:14.372122   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.372130   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:14.372137   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:14.372183   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:14.410529   71168 cri.go:89] found id: ""
	I0401 19:33:14.410554   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.410563   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:14.410570   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:14.410624   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:14.451198   71168 cri.go:89] found id: ""
	I0401 19:33:14.451226   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.451238   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:14.451246   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:14.451306   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:14.494588   71168 cri.go:89] found id: ""
	I0401 19:33:14.494616   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.494627   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:14.494635   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:14.494689   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:14.537561   71168 cri.go:89] found id: ""
	I0401 19:33:14.537583   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.537590   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:14.537597   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:14.537674   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:14.580624   71168 cri.go:89] found id: ""
	I0401 19:33:14.580651   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.580662   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:14.580672   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:14.580688   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:14.635769   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:14.635798   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:14.650275   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:14.650304   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:14.742355   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:14.742378   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:14.742394   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:14.827839   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:14.827869   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:17.373408   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:17.390110   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:17.390185   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:17.432355   71168 cri.go:89] found id: ""
	I0401 19:33:17.432384   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.432396   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:17.432409   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:17.432471   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:17.476458   71168 cri.go:89] found id: ""
	I0401 19:33:17.476484   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.476495   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:17.476502   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:17.476587   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:17.519657   71168 cri.go:89] found id: ""
	I0401 19:33:17.519686   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.519694   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:17.519699   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:17.519751   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:17.559962   71168 cri.go:89] found id: ""
	I0401 19:33:17.559985   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.559992   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:17.559997   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:17.560054   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:17.608924   71168 cri.go:89] found id: ""
	I0401 19:33:17.608995   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.609009   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:17.609016   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:17.609075   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:17.648371   71168 cri.go:89] found id: ""
	I0401 19:33:17.648394   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.648401   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:17.648406   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:17.648462   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:17.689217   71168 cri.go:89] found id: ""
	I0401 19:33:17.689239   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.689246   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:17.689252   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:17.689312   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:17.741738   71168 cri.go:89] found id: ""
	I0401 19:33:17.741768   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.741779   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:17.741790   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:17.741805   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:17.839857   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:17.839887   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:17.888684   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:17.888716   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:17.944268   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:17.944298   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:17.959305   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:17.959334   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 19:33:18.327388   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:20.826627   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:18.100065   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:20.100714   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:18.107770   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:20.108880   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	W0401 19:33:18.040820   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:20.541980   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:20.558198   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:20.558270   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:20.596329   71168 cri.go:89] found id: ""
	I0401 19:33:20.596357   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.596366   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:20.596373   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:20.596431   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:20.638611   71168 cri.go:89] found id: ""
	I0401 19:33:20.638639   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.638664   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:20.638672   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:20.638729   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:20.677984   71168 cri.go:89] found id: ""
	I0401 19:33:20.678014   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.678024   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:20.678032   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:20.678080   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:20.718491   71168 cri.go:89] found id: ""
	I0401 19:33:20.718520   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.718530   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:20.718537   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:20.718597   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:20.772147   71168 cri.go:89] found id: ""
	I0401 19:33:20.772174   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.772185   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:20.772199   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:20.772258   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:20.823339   71168 cri.go:89] found id: ""
	I0401 19:33:20.823361   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.823372   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:20.823380   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:20.823463   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:20.884081   71168 cri.go:89] found id: ""
	I0401 19:33:20.884106   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.884117   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:20.884124   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:20.884185   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:20.931679   71168 cri.go:89] found id: ""
	I0401 19:33:20.931703   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.931713   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:20.931722   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:20.931736   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:21.016766   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:21.016797   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:21.067600   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:21.067632   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:21.136989   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:21.137045   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:21.152673   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:21.152706   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:21.250186   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:23.325222   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:25.326919   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:22.597922   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:24.602701   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:22.606659   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:24.606811   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:26.608185   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:23.750565   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:23.768458   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:23.768534   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:23.814489   71168 cri.go:89] found id: ""
	I0401 19:33:23.814534   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.814555   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:23.814565   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:23.814632   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:23.854954   71168 cri.go:89] found id: ""
	I0401 19:33:23.854981   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.854989   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:23.854995   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:23.855060   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:23.896115   71168 cri.go:89] found id: ""
	I0401 19:33:23.896148   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.896159   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:23.896169   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:23.896231   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:23.941300   71168 cri.go:89] found id: ""
	I0401 19:33:23.941324   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.941337   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:23.941344   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:23.941390   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:23.983955   71168 cri.go:89] found id: ""
	I0401 19:33:23.983982   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.983991   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:23.983997   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:23.984056   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:24.020756   71168 cri.go:89] found id: ""
	I0401 19:33:24.020777   71168 logs.go:276] 0 containers: []
	W0401 19:33:24.020784   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:24.020789   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:24.020835   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:24.063426   71168 cri.go:89] found id: ""
	I0401 19:33:24.063454   71168 logs.go:276] 0 containers: []
	W0401 19:33:24.063462   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:24.063467   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:24.063529   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:24.110924   71168 cri.go:89] found id: ""
	I0401 19:33:24.110945   71168 logs.go:276] 0 containers: []
	W0401 19:33:24.110952   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:24.110960   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:24.110969   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:24.179200   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:24.179240   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:24.194880   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:24.194909   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:24.280555   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:24.280588   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:24.280603   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:24.359502   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:24.359534   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:26.909147   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:26.925961   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:26.926028   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:26.969502   71168 cri.go:89] found id: ""
	I0401 19:33:26.969525   71168 logs.go:276] 0 containers: []
	W0401 19:33:26.969536   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:26.969543   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:26.969604   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:27.015205   71168 cri.go:89] found id: ""
	I0401 19:33:27.015232   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.015241   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:27.015246   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:27.015296   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:27.055943   71168 cri.go:89] found id: ""
	I0401 19:33:27.055968   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.055977   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:27.055983   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:27.056039   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:27.095447   71168 cri.go:89] found id: ""
	I0401 19:33:27.095474   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.095485   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:27.095497   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:27.095558   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:27.137912   71168 cri.go:89] found id: ""
	I0401 19:33:27.137941   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.137948   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:27.137954   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:27.138008   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:27.183303   71168 cri.go:89] found id: ""
	I0401 19:33:27.183325   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.183335   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:27.183344   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:27.183403   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:27.225780   71168 cri.go:89] found id: ""
	I0401 19:33:27.225804   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.225814   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:27.225822   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:27.225880   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:27.268136   71168 cri.go:89] found id: ""
	I0401 19:33:27.268159   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.268168   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:27.268191   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:27.268215   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:27.325527   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:27.325557   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:27.341727   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:27.341763   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:27.432369   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:27.432389   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:27.432403   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:27.523104   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:27.523135   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:27.826804   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:30.326279   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:27.099509   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:29.597830   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:31.598325   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:29.107400   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:31.107514   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:30.066147   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:30.079999   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:30.080062   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:30.121887   71168 cri.go:89] found id: ""
	I0401 19:33:30.121911   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.121920   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:30.121929   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:30.121986   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:30.163939   71168 cri.go:89] found id: ""
	I0401 19:33:30.163967   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.163978   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:30.163986   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:30.164051   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:30.203924   71168 cri.go:89] found id: ""
	I0401 19:33:30.203965   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.203977   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:30.203985   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:30.204048   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:30.243771   71168 cri.go:89] found id: ""
	I0401 19:33:30.243798   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.243809   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:30.243816   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:30.243888   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:30.284039   71168 cri.go:89] found id: ""
	I0401 19:33:30.284066   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.284074   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:30.284079   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:30.284127   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:30.327549   71168 cri.go:89] found id: ""
	I0401 19:33:30.327570   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.327577   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:30.327583   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:30.327630   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:30.365258   71168 cri.go:89] found id: ""
	I0401 19:33:30.365281   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.365291   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:30.365297   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:30.365352   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:30.405959   71168 cri.go:89] found id: ""
	I0401 19:33:30.405984   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.405992   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:30.405999   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:30.406011   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:30.480668   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:30.480692   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:30.480706   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:30.566042   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:30.566077   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:30.629250   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:30.629285   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:30.682185   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:30.682213   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:32.824844   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:34.826598   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:33.600555   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:36.100194   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:33.608315   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:36.106573   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:33.199466   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:33.213557   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:33.213630   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:33.255038   71168 cri.go:89] found id: ""
	I0401 19:33:33.255062   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.255072   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:33.255079   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:33.255143   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:33.297724   71168 cri.go:89] found id: ""
	I0401 19:33:33.297751   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.297761   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:33.297767   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:33.297836   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:33.340694   71168 cri.go:89] found id: ""
	I0401 19:33:33.340718   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.340727   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:33.340735   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:33.340794   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:33.388857   71168 cri.go:89] found id: ""
	I0401 19:33:33.388883   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.388891   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:33.388896   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:33.388940   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:33.430875   71168 cri.go:89] found id: ""
	I0401 19:33:33.430899   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.430906   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:33.430911   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:33.430966   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:33.479877   71168 cri.go:89] found id: ""
	I0401 19:33:33.479905   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.479917   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:33.479923   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:33.479968   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:33.522635   71168 cri.go:89] found id: ""
	I0401 19:33:33.522662   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.522672   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:33.522680   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:33.522737   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:33.560497   71168 cri.go:89] found id: ""
	I0401 19:33:33.560519   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.560527   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:33.560534   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:33.560549   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:33.612141   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:33.612170   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:33.665142   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:33.665170   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:33.681076   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:33.681100   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:33.755938   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:33.755966   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:33.755983   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:36.341957   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:36.359519   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:36.359586   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:36.416339   71168 cri.go:89] found id: ""
	I0401 19:33:36.416362   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.416373   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:36.416381   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:36.416442   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:36.473883   71168 cri.go:89] found id: ""
	I0401 19:33:36.473906   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.473918   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:36.473925   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:36.473988   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:36.521532   71168 cri.go:89] found id: ""
	I0401 19:33:36.521558   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.521568   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:36.521575   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:36.521639   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:36.563420   71168 cri.go:89] found id: ""
	I0401 19:33:36.563446   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.563454   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:36.563459   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:36.563520   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:36.605658   71168 cri.go:89] found id: ""
	I0401 19:33:36.605678   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.605689   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:36.605697   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:36.605759   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:36.645611   71168 cri.go:89] found id: ""
	I0401 19:33:36.645631   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.645638   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:36.645656   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:36.645715   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:36.685994   71168 cri.go:89] found id: ""
	I0401 19:33:36.686022   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.686033   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:36.686041   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:36.686099   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:36.725573   71168 cri.go:89] found id: ""
	I0401 19:33:36.725598   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.725608   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:36.725618   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:36.725630   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:36.778854   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:36.778885   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:36.795003   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:36.795036   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:36.872648   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:36.872666   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:36.872678   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:36.956648   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:36.956683   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
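
The block above is one complete iteration of minikube's control-plane probe for this profile: a pgrep for a running kube-apiserver, a crictl listing per control-plane component, and then a fresh round of kubelet/dmesg/describe-nodes/CRI-O/container-status log gathering. Below is a minimal bash sketch of that same sequence, runnable inside the node (for example after `minikube ssh`); the individual commands are copied from the log lines above, and only the loop wrapper and comments are additions.

    # Sketch only: reproduces by hand the probe sequence logged above.
    # An empty listing per component is what produces the
    # 'No container was found matching "<name>"' warnings.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="${name}"
    done

    # Log gathering that follows each empty probe round:
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig    # refused while the apiserver is down
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
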
	I0401 19:33:36.827745   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:38.830544   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:41.326012   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:38.597991   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:41.097044   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:38.107961   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:40.606475   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
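
The interleaved pod_ready.go lines come from the other running profiles, which poll a pod's Ready condition every few seconds until it reports True or the wait deadline expires. Outside the test harness, roughly the same check can be made with kubectl's jsonpath filter; this is only an equivalent illustration, not the test's own code path (the pod name is taken from the log, and the kubeconfig context is left implicit):

    kubectl -n kube-system get pod metrics-server-57f55c9bc5-g7mg2 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints "False" for as long as the log keeps reporting has status "Ready":"False"
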
	I0401 19:33:39.502868   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:39.519090   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:39.519161   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:39.562347   71168 cri.go:89] found id: ""
	I0401 19:33:39.562371   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.562379   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:39.562384   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:39.562442   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:39.607250   71168 cri.go:89] found id: ""
	I0401 19:33:39.607276   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.607286   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:39.607293   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:39.607343   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:39.650683   71168 cri.go:89] found id: ""
	I0401 19:33:39.650704   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.650712   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:39.650717   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:39.650764   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:39.694676   71168 cri.go:89] found id: ""
	I0401 19:33:39.694706   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.694718   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:39.694724   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:39.694783   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:39.733873   71168 cri.go:89] found id: ""
	I0401 19:33:39.733901   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.733911   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:39.733919   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:39.733980   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:39.773625   71168 cri.go:89] found id: ""
	I0401 19:33:39.773668   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.773679   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:39.773686   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:39.773735   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:39.815020   71168 cri.go:89] found id: ""
	I0401 19:33:39.815053   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.815064   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:39.815071   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:39.815134   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:39.855575   71168 cri.go:89] found id: ""
	I0401 19:33:39.855606   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.855615   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:39.855626   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:39.855641   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:39.873827   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:39.873857   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:39.948487   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:39.948507   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:39.948521   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:40.034026   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:40.034062   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:40.077798   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:40.077828   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:42.637999   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:42.654991   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:42.655063   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:42.695920   71168 cri.go:89] found id: ""
	I0401 19:33:42.695953   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.695964   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:42.695971   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:42.696030   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:42.737303   71168 cri.go:89] found id: ""
	I0401 19:33:42.737325   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.737333   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:42.737341   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:42.737393   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:42.777922   71168 cri.go:89] found id: ""
	I0401 19:33:42.777953   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.777965   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:42.777972   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:42.778036   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:42.818339   71168 cri.go:89] found id: ""
	I0401 19:33:42.818364   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.818372   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:42.818379   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:42.818435   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:42.859470   71168 cri.go:89] found id: ""
	I0401 19:33:42.859494   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.859502   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:42.859507   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:42.859556   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:42.901950   71168 cri.go:89] found id: ""
	I0401 19:33:42.901980   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.901989   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:42.901996   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:42.902063   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:42.947230   71168 cri.go:89] found id: ""
	I0401 19:33:42.947258   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.947268   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:42.947275   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:42.947351   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:43.827204   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:46.325749   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:43.098252   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:45.098316   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:42.607590   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:44.607666   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:47.107837   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:42.988997   71168 cri.go:89] found id: ""
	I0401 19:33:42.989022   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.989032   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:42.989049   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:42.989066   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:43.075323   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:43.075352   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:43.075363   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:43.164445   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:43.164479   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:43.215852   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:43.215885   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:43.271301   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:43.271334   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:45.786705   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:45.804389   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:45.804445   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:45.849838   71168 cri.go:89] found id: ""
	I0401 19:33:45.849872   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.849883   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:45.849891   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:45.849950   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:45.890603   71168 cri.go:89] found id: ""
	I0401 19:33:45.890625   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.890635   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:45.890642   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:45.890703   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:45.929189   71168 cri.go:89] found id: ""
	I0401 19:33:45.929210   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.929218   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:45.929223   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:45.929268   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:45.968266   71168 cri.go:89] found id: ""
	I0401 19:33:45.968292   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.968303   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:45.968310   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:45.968365   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:46.007114   71168 cri.go:89] found id: ""
	I0401 19:33:46.007135   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.007143   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:46.007148   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:46.007195   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:46.046067   71168 cri.go:89] found id: ""
	I0401 19:33:46.046088   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.046095   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:46.046101   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:46.046186   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:46.083604   71168 cri.go:89] found id: ""
	I0401 19:33:46.083630   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.083644   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:46.083651   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:46.083709   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:46.125435   71168 cri.go:89] found id: ""
	I0401 19:33:46.125457   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.125464   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:46.125472   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:46.125483   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:46.179060   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:46.179092   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:46.195139   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:46.195179   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:46.275876   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:46.275903   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:46.275914   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:46.365430   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:46.365465   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:48.825540   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:50.827204   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:47.099197   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:49.105260   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:51.597808   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:49.108344   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:51.607079   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:48.908390   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:48.924357   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:48.924416   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:48.969325   71168 cri.go:89] found id: ""
	I0401 19:33:48.969351   71168 logs.go:276] 0 containers: []
	W0401 19:33:48.969359   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:48.969364   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:48.969421   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:49.006702   71168 cri.go:89] found id: ""
	I0401 19:33:49.006724   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.006731   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:49.006736   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:49.006785   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:49.051196   71168 cri.go:89] found id: ""
	I0401 19:33:49.051229   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.051241   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:49.051260   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:49.051336   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:49.098123   71168 cri.go:89] found id: ""
	I0401 19:33:49.098150   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.098159   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:49.098166   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:49.098225   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:49.138203   71168 cri.go:89] found id: ""
	I0401 19:33:49.138232   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.138239   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:49.138244   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:49.138290   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:49.185441   71168 cri.go:89] found id: ""
	I0401 19:33:49.185465   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.185473   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:49.185478   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:49.185537   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:49.235649   71168 cri.go:89] found id: ""
	I0401 19:33:49.235670   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.235678   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:49.235683   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:49.235762   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:49.279638   71168 cri.go:89] found id: ""
	I0401 19:33:49.279662   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.279673   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:49.279683   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:49.279699   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:49.340761   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:49.340798   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:49.356552   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:49.356581   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:49.441110   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:49.441129   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:49.441140   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:49.523159   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:49.523189   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:52.067710   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:52.082986   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:52.083046   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:52.128510   71168 cri.go:89] found id: ""
	I0401 19:33:52.128531   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.128538   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:52.128543   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:52.128590   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:52.167767   71168 cri.go:89] found id: ""
	I0401 19:33:52.167792   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.167803   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:52.167810   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:52.167871   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:52.206384   71168 cri.go:89] found id: ""
	I0401 19:33:52.206416   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.206426   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:52.206433   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:52.206493   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:52.245277   71168 cri.go:89] found id: ""
	I0401 19:33:52.245301   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.245309   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:52.245318   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:52.245388   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:52.283925   71168 cri.go:89] found id: ""
	I0401 19:33:52.283954   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.283964   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:52.283971   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:52.284032   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:52.323944   71168 cri.go:89] found id: ""
	I0401 19:33:52.323970   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.323981   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:52.323988   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:52.324045   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:52.364853   71168 cri.go:89] found id: ""
	I0401 19:33:52.364882   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.364893   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:52.364901   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:52.364958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:52.404136   71168 cri.go:89] found id: ""
	I0401 19:33:52.404158   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.404165   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:52.404173   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:52.404184   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:52.459097   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:52.459129   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:52.474392   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:52.474417   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:52.551817   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:52.551843   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:52.551860   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:52.650710   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:52.650750   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:53.326050   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:55.327326   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:52.607062   70284 pod_ready.go:92] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.607082   70284 pod_ready.go:81] duration metric: took 43.516413537s for pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.607091   70284 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.628695   70284 pod_ready.go:92] pod "etcd-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.628725   70284 pod_ready.go:81] duration metric: took 21.625468ms for pod "etcd-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.628739   70284 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.643017   70284 pod_ready.go:92] pod "kube-apiserver-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.643044   70284 pod_ready.go:81] duration metric: took 14.296056ms for pod "kube-apiserver-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.643058   70284 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.649063   70284 pod_ready.go:92] pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.649091   70284 pod_ready.go:81] duration metric: took 6.024238ms for pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.649105   70284 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7c22p" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.654806   70284 pod_ready.go:92] pod "kube-proxy-7c22p" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.654829   70284 pod_ready.go:81] duration metric: took 5.709865ms for pod "kube-proxy-7c22p" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.654840   70284 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.997116   70284 pod_ready.go:92] pod "kube-scheduler-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.997139   70284 pod_ready.go:81] duration metric: took 342.291727ms for pod "kube-scheduler-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.997148   70284 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:55.004130   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:53.608064   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:56.106148   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:55.205689   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:55.222840   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:55.222901   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:55.263783   71168 cri.go:89] found id: ""
	I0401 19:33:55.263813   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.263820   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:55.263828   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:55.263883   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:55.300788   71168 cri.go:89] found id: ""
	I0401 19:33:55.300818   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.300826   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:55.300834   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:55.300888   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:55.343189   71168 cri.go:89] found id: ""
	I0401 19:33:55.343215   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.343223   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:55.343229   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:55.343286   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:55.387560   71168 cri.go:89] found id: ""
	I0401 19:33:55.387587   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.387597   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:55.387604   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:55.387663   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:55.428078   71168 cri.go:89] found id: ""
	I0401 19:33:55.428103   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.428112   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:55.428119   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:55.428181   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:55.472696   71168 cri.go:89] found id: ""
	I0401 19:33:55.472722   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.472734   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:55.472741   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:55.472797   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:55.518071   71168 cri.go:89] found id: ""
	I0401 19:33:55.518115   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.518126   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:55.518136   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:55.518201   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:55.555697   71168 cri.go:89] found id: ""
	I0401 19:33:55.555717   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.555724   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:55.555732   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:55.555747   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:55.637462   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:55.637492   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:55.682353   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:55.682380   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:55.735451   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:55.735484   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:55.750928   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:55.750954   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:55.824610   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
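
Every "describe nodes" attempt in this stretch fails the same way: the bundled v1.20.0 kubectl cannot reach localhost:8443 because no kube-apiserver container exists yet, as the empty crictl probes show. Two hedged one-liners for confirming that by hand on the node; neither is part of the test itself:

    sudo ss -ltnp | grep -w 8443 || echo "nothing listening on 8443"
    sudo crictl ps -a --name kube-apiserver    # empty output here, matching the probes above
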
	I0401 19:33:57.328228   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:59.826213   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:57.005395   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:59.505575   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:01.506107   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:58.106643   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:00.606864   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:58.325742   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:58.341022   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:58.341092   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:58.380910   71168 cri.go:89] found id: ""
	I0401 19:33:58.380932   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.380940   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:58.380946   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:58.380990   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:58.420387   71168 cri.go:89] found id: ""
	I0401 19:33:58.420413   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.420425   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:58.420431   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:58.420479   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:58.460470   71168 cri.go:89] found id: ""
	I0401 19:33:58.460501   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.460511   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:58.460520   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:58.460580   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:58.496844   71168 cri.go:89] found id: ""
	I0401 19:33:58.496867   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.496875   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:58.496881   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:58.496930   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:58.535883   71168 cri.go:89] found id: ""
	I0401 19:33:58.535905   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.535915   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:58.535922   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:58.535979   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:58.576833   71168 cri.go:89] found id: ""
	I0401 19:33:58.576855   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.576863   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:58.576869   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:58.576913   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:58.615057   71168 cri.go:89] found id: ""
	I0401 19:33:58.615081   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.615091   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:58.615098   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:58.615156   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:58.657982   71168 cri.go:89] found id: ""
	I0401 19:33:58.658008   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.658018   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:58.658028   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:58.658045   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:58.734579   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:58.734601   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:58.734616   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:58.821779   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:58.821819   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:58.894470   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:58.894506   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:58.949854   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:58.949884   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:01.465820   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:01.481929   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:01.481984   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:01.525371   71168 cri.go:89] found id: ""
	I0401 19:34:01.525397   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.525407   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:01.525415   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:01.525473   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:01.571106   71168 cri.go:89] found id: ""
	I0401 19:34:01.571136   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.571146   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:01.571153   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:01.571214   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:01.617666   71168 cri.go:89] found id: ""
	I0401 19:34:01.617705   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.617717   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:01.617725   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:01.617787   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:01.655286   71168 cri.go:89] found id: ""
	I0401 19:34:01.655311   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.655321   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:01.655328   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:01.655396   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:01.694911   71168 cri.go:89] found id: ""
	I0401 19:34:01.694940   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.694950   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:01.694957   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:01.695040   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:01.734970   71168 cri.go:89] found id: ""
	I0401 19:34:01.734996   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.735007   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:01.735014   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:01.735071   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:01.778846   71168 cri.go:89] found id: ""
	I0401 19:34:01.778871   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.778879   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:01.778885   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:01.778958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:01.821934   71168 cri.go:89] found id: ""
	I0401 19:34:01.821964   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.821975   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:01.821986   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:01.822002   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:01.880123   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:01.880155   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:01.895178   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:01.895200   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:01.972248   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:01.972275   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:01.972290   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:02.056663   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:02.056694   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:02.325323   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:04.326474   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:06.327583   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:04.004061   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:06.004176   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:02.608516   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:05.108477   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:04.603745   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:04.619269   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:04.619344   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:04.658089   71168 cri.go:89] found id: ""
	I0401 19:34:04.658111   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.658118   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:04.658123   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:04.658168   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:04.700596   71168 cri.go:89] found id: ""
	I0401 19:34:04.700622   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.700634   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:04.700641   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:04.700708   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:04.744960   71168 cri.go:89] found id: ""
	I0401 19:34:04.744990   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.744999   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:04.745004   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:04.745052   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:04.788239   71168 cri.go:89] found id: ""
	I0401 19:34:04.788264   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.788272   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:04.788278   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:04.788343   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:04.830788   71168 cri.go:89] found id: ""
	I0401 19:34:04.830812   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.830850   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:04.830859   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:04.830917   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:04.889784   71168 cri.go:89] found id: ""
	I0401 19:34:04.889815   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.889826   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:04.889834   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:04.889902   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:04.931969   71168 cri.go:89] found id: ""
	I0401 19:34:04.931996   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.932004   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:04.932010   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:04.932058   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:04.975668   71168 cri.go:89] found id: ""
	I0401 19:34:04.975689   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.975696   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:04.975704   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:04.975715   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:05.032212   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:05.032246   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:05.047900   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:05.047924   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:05.132371   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:05.132394   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:05.132408   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:05.222591   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:05.222623   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:07.767686   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:07.784473   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:07.784542   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:07.828460   71168 cri.go:89] found id: ""
	I0401 19:34:07.828487   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.828498   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:07.828505   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:07.828564   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:07.872760   71168 cri.go:89] found id: ""
	I0401 19:34:07.872786   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.872797   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:07.872804   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:07.872862   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:07.914241   71168 cri.go:89] found id: ""
	I0401 19:34:07.914263   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.914271   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:07.914276   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:07.914340   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:07.953757   71168 cri.go:89] found id: ""
	I0401 19:34:07.953784   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.953795   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:07.953803   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:07.953869   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:08.825113   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:10.827081   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:08.504038   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:10.508973   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:07.608037   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:10.110321   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
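Interleaved with those probes, the other test processes (pids 70687, 70284, 70962) keep polling their metrics-server pods, which never report Ready. A minimal sketch of the same check with kubectl, assuming a kubeconfig that points at the affected cluster; the pod name is copied from the log lines above:

    # read the pod's Ready condition directly
    kubectl --namespace kube-system get pod metrics-server-57f55c9bc5-g6z6c \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # or block until it becomes Ready, with a timeout
    kubectl --namespace kube-system wait pod/metrics-server-57f55c9bc5-g6z6c \
      --for=condition=Ready --timeout=120s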
	I0401 19:34:07.994382   71168 cri.go:89] found id: ""
	I0401 19:34:07.994401   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.994409   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:07.994414   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:07.994459   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:08.038178   71168 cri.go:89] found id: ""
	I0401 19:34:08.038202   71168 logs.go:276] 0 containers: []
	W0401 19:34:08.038213   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:08.038220   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:08.038282   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:08.077532   71168 cri.go:89] found id: ""
	I0401 19:34:08.077562   71168 logs.go:276] 0 containers: []
	W0401 19:34:08.077573   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:08.077580   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:08.077657   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:08.119825   71168 cri.go:89] found id: ""
	I0401 19:34:08.119845   71168 logs.go:276] 0 containers: []
	W0401 19:34:08.119855   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:08.119865   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:08.119878   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:08.207688   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:08.207724   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:08.253050   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:08.253085   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:08.309119   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:08.309152   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:08.325675   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:08.325704   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:08.410877   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:10.911211   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:10.925590   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:10.925657   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:10.964180   71168 cri.go:89] found id: ""
	I0401 19:34:10.964205   71168 logs.go:276] 0 containers: []
	W0401 19:34:10.964216   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:10.964224   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:10.964273   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:11.004492   71168 cri.go:89] found id: ""
	I0401 19:34:11.004515   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.004526   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:11.004533   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:11.004588   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:11.048771   71168 cri.go:89] found id: ""
	I0401 19:34:11.048792   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.048804   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:11.048810   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:11.048861   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:11.084956   71168 cri.go:89] found id: ""
	I0401 19:34:11.084982   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.084992   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:11.084999   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:11.085043   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:11.128194   71168 cri.go:89] found id: ""
	I0401 19:34:11.128218   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.128225   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:11.128230   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:11.128274   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:11.169884   71168 cri.go:89] found id: ""
	I0401 19:34:11.169908   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.169918   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:11.169925   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:11.169988   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:11.213032   71168 cri.go:89] found id: ""
	I0401 19:34:11.213066   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.213077   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:11.213084   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:11.213149   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:11.258391   71168 cri.go:89] found id: ""
	I0401 19:34:11.258414   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.258422   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:11.258429   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:11.258445   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:11.341297   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:11.341328   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:11.388628   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:11.388659   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:11.442300   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:11.442326   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:11.457531   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:11.457561   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:11.561556   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:13.324598   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:15.325464   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:13.005005   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:15.505216   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:12.607201   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:14.607580   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:17.107659   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:14.062670   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:14.077384   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:14.077449   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:14.119421   71168 cri.go:89] found id: ""
	I0401 19:34:14.119444   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.119455   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:14.119462   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:14.119518   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:14.158762   71168 cri.go:89] found id: ""
	I0401 19:34:14.158783   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.158798   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:14.158805   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:14.158867   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:14.197024   71168 cri.go:89] found id: ""
	I0401 19:34:14.197052   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.197060   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:14.197065   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:14.197115   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:14.235976   71168 cri.go:89] found id: ""
	I0401 19:34:14.236004   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.236015   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:14.236021   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:14.236085   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:14.280596   71168 cri.go:89] found id: ""
	I0401 19:34:14.280623   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.280635   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:14.280642   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:14.280703   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:14.322196   71168 cri.go:89] found id: ""
	I0401 19:34:14.322219   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.322230   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:14.322239   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:14.322298   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:14.364572   71168 cri.go:89] found id: ""
	I0401 19:34:14.364596   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.364607   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:14.364615   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:14.364662   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:14.406043   71168 cri.go:89] found id: ""
	I0401 19:34:14.406066   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.406072   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:14.406082   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:14.406097   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:14.461841   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:14.461870   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:14.479960   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:14.479990   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:14.557039   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:14.557058   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:14.557070   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:14.641945   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:14.641975   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:17.192681   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:17.207913   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:17.207964   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:17.245596   71168 cri.go:89] found id: ""
	I0401 19:34:17.245618   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.245625   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:17.245630   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:17.245701   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:17.310845   71168 cri.go:89] found id: ""
	I0401 19:34:17.310875   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.310887   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:17.310894   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:17.310958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:17.367726   71168 cri.go:89] found id: ""
	I0401 19:34:17.367753   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.367764   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:17.367770   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:17.367833   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:17.410807   71168 cri.go:89] found id: ""
	I0401 19:34:17.410834   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.410842   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:17.410847   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:17.410892   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:17.448242   71168 cri.go:89] found id: ""
	I0401 19:34:17.448268   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.448278   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:17.448285   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:17.448337   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:17.486552   71168 cri.go:89] found id: ""
	I0401 19:34:17.486580   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.486590   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:17.486595   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:17.486644   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:17.529947   71168 cri.go:89] found id: ""
	I0401 19:34:17.529975   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.529986   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:17.529993   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:17.530052   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:17.571617   71168 cri.go:89] found id: ""
	I0401 19:34:17.571640   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.571648   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:17.571656   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:17.571673   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:17.627326   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:17.627354   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:17.643409   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:17.643431   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:17.723772   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:17.723798   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:17.723811   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:17.803383   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:17.803414   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:17.325836   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:19.328447   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:17.509486   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:20.004341   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:19.606840   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:21.607646   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:20.348949   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:20.363311   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:20.363385   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:20.401558   71168 cri.go:89] found id: ""
	I0401 19:34:20.401585   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.401595   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:20.401603   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:20.401686   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:20.445979   71168 cri.go:89] found id: ""
	I0401 19:34:20.446004   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.446011   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:20.446016   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:20.446060   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:20.487819   71168 cri.go:89] found id: ""
	I0401 19:34:20.487844   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.487854   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:20.487862   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:20.487921   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:20.532107   71168 cri.go:89] found id: ""
	I0401 19:34:20.532131   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.532154   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:20.532186   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:20.532247   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:20.577727   71168 cri.go:89] found id: ""
	I0401 19:34:20.577749   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.577756   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:20.577762   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:20.577841   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:20.616774   71168 cri.go:89] found id: ""
	I0401 19:34:20.616805   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.616816   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:20.616824   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:20.616887   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:20.656122   71168 cri.go:89] found id: ""
	I0401 19:34:20.656150   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.656160   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:20.656167   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:20.656226   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:20.701249   71168 cri.go:89] found id: ""
	I0401 19:34:20.701274   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.701285   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:20.701295   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:20.701310   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:20.746979   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:20.747003   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:20.799197   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:20.799226   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:20.815771   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:20.815808   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:20.895179   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:20.895202   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:20.895218   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:21.826671   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:24.325896   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:26.326569   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:22.503727   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:24.503877   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:26.506643   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:24.107702   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:26.607285   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:23.481911   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:23.496820   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:23.496889   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:23.538292   71168 cri.go:89] found id: ""
	I0401 19:34:23.538314   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.538322   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:23.538327   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:23.538372   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:23.579171   71168 cri.go:89] found id: ""
	I0401 19:34:23.579200   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.579209   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:23.579214   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:23.579269   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:23.620377   71168 cri.go:89] found id: ""
	I0401 19:34:23.620399   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.620410   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:23.620417   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:23.620477   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:23.663309   71168 cri.go:89] found id: ""
	I0401 19:34:23.663329   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.663337   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:23.663342   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:23.663392   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:23.702724   71168 cri.go:89] found id: ""
	I0401 19:34:23.702755   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.702772   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:23.702778   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:23.702836   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:23.742797   71168 cri.go:89] found id: ""
	I0401 19:34:23.742827   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.742837   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:23.742845   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:23.742913   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:23.781299   71168 cri.go:89] found id: ""
	I0401 19:34:23.781350   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.781367   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:23.781375   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:23.781440   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:23.828244   71168 cri.go:89] found id: ""
	I0401 19:34:23.828270   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.828277   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:23.828284   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:23.828298   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:23.914758   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:23.914782   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:23.914797   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:23.993300   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:23.993332   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:24.037388   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:24.037424   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:24.090157   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:24.090198   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:26.609062   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:26.624241   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:26.624309   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:26.665813   71168 cri.go:89] found id: ""
	I0401 19:34:26.665840   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.665848   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:26.665857   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:26.665917   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:26.709571   71168 cri.go:89] found id: ""
	I0401 19:34:26.709593   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.709600   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:26.709606   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:26.709680   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:26.757286   71168 cri.go:89] found id: ""
	I0401 19:34:26.757309   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.757319   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:26.757325   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:26.757386   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:26.795715   71168 cri.go:89] found id: ""
	I0401 19:34:26.795768   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.795781   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:26.795788   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:26.795839   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:26.835985   71168 cri.go:89] found id: ""
	I0401 19:34:26.836011   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.836022   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:26.836029   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:26.836094   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:26.878890   71168 cri.go:89] found id: ""
	I0401 19:34:26.878918   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.878929   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:26.878936   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:26.878991   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:26.920161   71168 cri.go:89] found id: ""
	I0401 19:34:26.920189   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.920199   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:26.920206   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:26.920262   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:26.961597   71168 cri.go:89] found id: ""
	I0401 19:34:26.961626   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.961637   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:26.961663   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:26.961679   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:27.019814   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:27.019847   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:27.035535   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:27.035564   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:27.111755   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:27.111776   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:27.111790   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:27.194932   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:27.194964   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:28.827702   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:31.325488   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:29.005830   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:31.007294   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:29.107097   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:31.109807   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:29.738592   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:29.752851   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:29.752913   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:29.791808   71168 cri.go:89] found id: ""
	I0401 19:34:29.791863   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.791875   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:29.791883   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:29.791944   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:29.836113   71168 cri.go:89] found id: ""
	I0401 19:34:29.836132   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.836139   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:29.836144   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:29.836200   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:29.879005   71168 cri.go:89] found id: ""
	I0401 19:34:29.879039   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.879050   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:29.879059   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:29.879122   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:29.919349   71168 cri.go:89] found id: ""
	I0401 19:34:29.919383   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.919394   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:29.919400   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:29.919454   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:29.957252   71168 cri.go:89] found id: ""
	I0401 19:34:29.957275   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.957287   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:29.957294   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:29.957354   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:30.003220   71168 cri.go:89] found id: ""
	I0401 19:34:30.003245   71168 logs.go:276] 0 containers: []
	W0401 19:34:30.003256   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:30.003263   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:30.003311   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:30.043873   71168 cri.go:89] found id: ""
	I0401 19:34:30.043900   71168 logs.go:276] 0 containers: []
	W0401 19:34:30.043921   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:30.043928   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:30.043989   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:30.082215   71168 cri.go:89] found id: ""
	I0401 19:34:30.082242   71168 logs.go:276] 0 containers: []
	W0401 19:34:30.082253   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:30.082263   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:30.082277   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:30.098676   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:30.098701   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:30.180857   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:30.180879   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:30.180897   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:30.269982   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:30.270016   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:30.317933   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:30.317967   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:32.874312   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:32.888687   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:32.888742   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:32.926222   71168 cri.go:89] found id: ""
	I0401 19:34:32.926244   71168 logs.go:276] 0 containers: []
	W0401 19:34:32.926252   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:32.926257   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:32.926307   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:32.964838   71168 cri.go:89] found id: ""
	I0401 19:34:32.964858   71168 logs.go:276] 0 containers: []
	W0401 19:34:32.964865   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:32.964870   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:32.964914   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:33.327670   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:35.826387   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:33.504338   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:36.005240   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:33.606596   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:35.607014   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:33.006903   71168 cri.go:89] found id: ""
	I0401 19:34:33.006920   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.006927   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:33.006933   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:33.006983   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:33.045663   71168 cri.go:89] found id: ""
	I0401 19:34:33.045691   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.045701   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:33.045709   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:33.045770   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:33.086262   71168 cri.go:89] found id: ""
	I0401 19:34:33.086290   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.086298   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:33.086303   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:33.086368   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:33.128302   71168 cri.go:89] found id: ""
	I0401 19:34:33.128327   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.128335   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:33.128341   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:33.128402   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:33.171155   71168 cri.go:89] found id: ""
	I0401 19:34:33.171189   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.171200   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:33.171207   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:33.171270   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:33.210793   71168 cri.go:89] found id: ""
	I0401 19:34:33.210820   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.210838   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:33.210848   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:33.210870   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:33.295035   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:33.295072   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:33.345381   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:33.345417   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:33.401082   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:33.401120   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:33.417029   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:33.417055   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:33.497027   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:35.997632   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:36.013106   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:36.013161   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:36.053013   71168 cri.go:89] found id: ""
	I0401 19:34:36.053040   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.053050   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:36.053059   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:36.053116   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:36.092268   71168 cri.go:89] found id: ""
	I0401 19:34:36.092297   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.092308   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:36.092315   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:36.092389   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:36.131347   71168 cri.go:89] found id: ""
	I0401 19:34:36.131391   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.131402   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:36.131409   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:36.131468   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:36.171402   71168 cri.go:89] found id: ""
	I0401 19:34:36.171432   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.171443   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:36.171449   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:36.171511   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:36.211239   71168 cri.go:89] found id: ""
	I0401 19:34:36.211272   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.211283   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:36.211290   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:36.211354   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:36.251246   71168 cri.go:89] found id: ""
	I0401 19:34:36.251275   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.251287   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:36.251294   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:36.251354   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:36.293140   71168 cri.go:89] found id: ""
	I0401 19:34:36.293162   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.293169   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:36.293174   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:36.293231   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:36.330281   71168 cri.go:89] found id: ""
	I0401 19:34:36.330308   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.330318   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:36.330328   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:36.330342   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:36.421753   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:36.421790   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:36.467555   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:36.467581   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:36.524747   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:36.524778   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:36.540946   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:36.540976   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:36.622452   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:38.326341   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:40.327267   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:38.503641   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:40.504555   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:38.107732   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:40.608535   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
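	Interleaved with the probes from process 71168, three other test clusters (processes 70284, 70687 and 70962) keep polling metrics-server pods whose Ready condition never turns True. A minimal sketch of the equivalent manual check, assuming the default k8s-app=metrics-server label and a kubeconfig context for the cluster in question (the context name below is a placeholder):
	
	  kubectl --context <cluster-context> -n kube-system get pod -l k8s-app=metrics-server \
	    -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}'
	  # the pod_ready.go lines above correspond to this value staying "False"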
	I0401 19:34:39.122969   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:39.139092   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:39.139157   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:39.177337   71168 cri.go:89] found id: ""
	I0401 19:34:39.177368   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.177379   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:39.177387   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:39.177449   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:39.216471   71168 cri.go:89] found id: ""
	I0401 19:34:39.216498   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.216507   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:39.216512   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:39.216558   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:39.255526   71168 cri.go:89] found id: ""
	I0401 19:34:39.255550   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.255557   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:39.255563   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:39.255623   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:39.294682   71168 cri.go:89] found id: ""
	I0401 19:34:39.294711   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.294723   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:39.294735   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:39.294798   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:39.337416   71168 cri.go:89] found id: ""
	I0401 19:34:39.337437   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.337444   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:39.337449   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:39.337510   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:39.384560   71168 cri.go:89] found id: ""
	I0401 19:34:39.384586   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.384598   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:39.384608   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:39.384671   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:39.421459   71168 cri.go:89] found id: ""
	I0401 19:34:39.421480   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.421488   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:39.421493   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:39.421540   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:39.460221   71168 cri.go:89] found id: ""
	I0401 19:34:39.460246   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.460256   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:39.460264   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:39.460275   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:39.543800   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:39.543835   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:39.591012   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:39.591038   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:39.645994   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:39.646025   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:39.662223   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:39.662250   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:39.741574   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:42.242541   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:42.256933   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:42.257006   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:42.294268   71168 cri.go:89] found id: ""
	I0401 19:34:42.294297   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.294308   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:42.294315   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:42.294370   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:42.331978   71168 cri.go:89] found id: ""
	I0401 19:34:42.331999   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.332005   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:42.332013   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:42.332078   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:42.369858   71168 cri.go:89] found id: ""
	I0401 19:34:42.369885   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.369895   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:42.369903   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:42.369989   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:42.412688   71168 cri.go:89] found id: ""
	I0401 19:34:42.412708   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.412715   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:42.412720   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:42.412776   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:42.449180   71168 cri.go:89] found id: ""
	I0401 19:34:42.449209   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.449217   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:42.449225   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:42.449283   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:42.488582   71168 cri.go:89] found id: ""
	I0401 19:34:42.488606   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.488613   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:42.488618   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:42.488665   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:42.527883   71168 cri.go:89] found id: ""
	I0401 19:34:42.527915   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.527924   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:42.527931   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:42.527993   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:42.564372   71168 cri.go:89] found id: ""
	I0401 19:34:42.564394   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.564401   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:42.564408   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:42.564419   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:42.646940   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:42.646974   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:42.689323   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:42.689354   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:42.744996   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:42.745024   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:42.761404   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:42.761429   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:42.836643   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:42.825895   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:45.325856   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:42.504642   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:45.004315   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:43.110114   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:45.607093   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:45.337809   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:45.352936   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:45.353029   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:45.395073   71168 cri.go:89] found id: ""
	I0401 19:34:45.395098   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.395106   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:45.395112   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:45.395160   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:45.433537   71168 cri.go:89] found id: ""
	I0401 19:34:45.433567   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.433578   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:45.433586   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:45.433658   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:45.477108   71168 cri.go:89] found id: ""
	I0401 19:34:45.477138   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.477150   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:45.477157   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:45.477217   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:45.520350   71168 cri.go:89] found id: ""
	I0401 19:34:45.520389   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.520401   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:45.520408   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:45.520466   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:45.562871   71168 cri.go:89] found id: ""
	I0401 19:34:45.562901   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.562911   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:45.562918   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:45.562988   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:45.619214   71168 cri.go:89] found id: ""
	I0401 19:34:45.619237   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.619248   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:45.619255   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:45.619317   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:45.664361   71168 cri.go:89] found id: ""
	I0401 19:34:45.664387   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.664398   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:45.664405   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:45.664463   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:45.701087   71168 cri.go:89] found id: ""
	I0401 19:34:45.701110   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.701120   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:45.701128   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:45.701139   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:45.716839   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:45.716863   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:45.794609   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:45.794630   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:45.794642   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:45.883428   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:45.883464   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:45.934342   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:45.934374   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:47.825597   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:50.326528   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:47.505036   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:49.505287   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:51.505884   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:47.609038   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:50.106705   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:52.107802   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:48.492128   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:48.508674   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:48.508746   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:48.549522   71168 cri.go:89] found id: ""
	I0401 19:34:48.549545   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.549555   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:48.549561   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:48.549619   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:48.587014   71168 cri.go:89] found id: ""
	I0401 19:34:48.587037   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.587045   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:48.587051   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:48.587108   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:48.629591   71168 cri.go:89] found id: ""
	I0401 19:34:48.629620   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.629630   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:48.629636   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:48.629707   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:48.669335   71168 cri.go:89] found id: ""
	I0401 19:34:48.669363   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.669383   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:48.669400   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:48.669455   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:48.708322   71168 cri.go:89] found id: ""
	I0401 19:34:48.708350   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.708356   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:48.708362   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:48.708407   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:48.750680   71168 cri.go:89] found id: ""
	I0401 19:34:48.750708   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.750718   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:48.750726   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:48.750791   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:48.790946   71168 cri.go:89] found id: ""
	I0401 19:34:48.790974   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.790984   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:48.790998   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:48.791055   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:48.828849   71168 cri.go:89] found id: ""
	I0401 19:34:48.828871   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.828880   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:48.828889   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:48.828904   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:48.909182   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:48.909212   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:48.954285   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:48.954315   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:49.010340   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:49.010372   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:49.026493   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:49.026516   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:49.099662   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:51.599905   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:51.618094   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:51.618168   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:51.657003   71168 cri.go:89] found id: ""
	I0401 19:34:51.657028   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.657038   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:51.657046   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:51.657104   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:51.696415   71168 cri.go:89] found id: ""
	I0401 19:34:51.696441   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.696451   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:51.696456   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:51.696515   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:51.734416   71168 cri.go:89] found id: ""
	I0401 19:34:51.734445   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.734457   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:51.734465   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:51.734523   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:51.774895   71168 cri.go:89] found id: ""
	I0401 19:34:51.774918   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.774925   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:51.774931   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:51.774980   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:51.814602   71168 cri.go:89] found id: ""
	I0401 19:34:51.814623   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.814631   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:51.814637   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:51.814687   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:51.856035   71168 cri.go:89] found id: ""
	I0401 19:34:51.856061   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.856071   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:51.856078   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:51.856132   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:51.897415   71168 cri.go:89] found id: ""
	I0401 19:34:51.897440   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.897451   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:51.897457   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:51.897516   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:51.937406   71168 cri.go:89] found id: ""
	I0401 19:34:51.937428   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.937436   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:51.937443   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:51.937456   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:51.981508   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:51.981535   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:52.039956   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:52.039995   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:52.066403   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:52.066429   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:52.172509   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:52.172530   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:52.172541   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:52.827950   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:55.331369   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:54.004625   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:56.503197   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:54.607359   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:57.108257   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:54.761459   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:54.776972   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:54.777030   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:54.822945   71168 cri.go:89] found id: ""
	I0401 19:34:54.822983   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.822996   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:54.823004   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:54.823066   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:54.861602   71168 cri.go:89] found id: ""
	I0401 19:34:54.861629   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.861639   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:54.861662   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:54.861727   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:54.901283   71168 cri.go:89] found id: ""
	I0401 19:34:54.901309   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.901319   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:54.901327   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:54.901385   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:54.940071   71168 cri.go:89] found id: ""
	I0401 19:34:54.940103   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.940114   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:54.940121   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:54.940179   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:54.978447   71168 cri.go:89] found id: ""
	I0401 19:34:54.978474   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.978485   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:54.978493   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:54.978563   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:55.021786   71168 cri.go:89] found id: ""
	I0401 19:34:55.021810   71168 logs.go:276] 0 containers: []
	W0401 19:34:55.021819   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:55.021827   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:55.021886   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:55.059861   71168 cri.go:89] found id: ""
	I0401 19:34:55.059889   71168 logs.go:276] 0 containers: []
	W0401 19:34:55.059899   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:55.059907   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:55.059963   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:55.104484   71168 cri.go:89] found id: ""
	I0401 19:34:55.104516   71168 logs.go:276] 0 containers: []
	W0401 19:34:55.104527   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:55.104537   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:55.104551   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:55.152197   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:55.152221   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:55.203900   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:55.203942   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:55.221553   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:55.221580   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:55.299651   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:55.299668   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:55.299680   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:57.877382   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:57.899186   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:57.899260   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:57.948146   71168 cri.go:89] found id: ""
	I0401 19:34:57.948182   71168 logs.go:276] 0 containers: []
	W0401 19:34:57.948192   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:57.948203   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:57.948270   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:57.826282   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:59.826598   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:58.504492   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:01.003480   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:59.607646   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:02.107162   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:58.017121   71168 cri.go:89] found id: ""
	I0401 19:34:58.017150   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.017161   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:58.017168   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:58.017230   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:58.073881   71168 cri.go:89] found id: ""
	I0401 19:34:58.073905   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.073916   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:58.073923   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:58.073979   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:58.115410   71168 cri.go:89] found id: ""
	I0401 19:34:58.115435   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.115445   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:58.115452   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:58.115512   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:58.155452   71168 cri.go:89] found id: ""
	I0401 19:34:58.155481   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.155492   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:58.155500   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:58.155562   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:58.197335   71168 cri.go:89] found id: ""
	I0401 19:34:58.197376   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.197397   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:58.197407   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:58.197469   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:58.239782   71168 cri.go:89] found id: ""
	I0401 19:34:58.239808   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.239815   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:58.239820   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:58.239870   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:58.280936   71168 cri.go:89] found id: ""
	I0401 19:34:58.280961   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.280971   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:58.280982   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:58.280998   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:58.368357   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:58.368401   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:58.415104   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:58.415132   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:58.474719   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:58.474749   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:58.491004   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:58.491031   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:58.573999   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:01.074865   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:01.091751   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:01.091822   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:01.140053   71168 cri.go:89] found id: ""
	I0401 19:35:01.140079   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.140089   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:01.140096   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:01.140154   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:01.184046   71168 cri.go:89] found id: ""
	I0401 19:35:01.184078   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.184089   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:01.184096   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:01.184161   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:01.225962   71168 cri.go:89] found id: ""
	I0401 19:35:01.225989   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.225999   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:01.226006   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:01.226072   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:01.267212   71168 cri.go:89] found id: ""
	I0401 19:35:01.267234   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.267242   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:01.267247   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:01.267308   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:01.307039   71168 cri.go:89] found id: ""
	I0401 19:35:01.307066   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.307074   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:01.307080   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:01.307132   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:01.347856   71168 cri.go:89] found id: ""
	I0401 19:35:01.347886   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.347898   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:01.347905   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:01.347962   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:01.385893   71168 cri.go:89] found id: ""
	I0401 19:35:01.385923   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.385933   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:01.385940   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:01.385999   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:01.422983   71168 cri.go:89] found id: ""
	I0401 19:35:01.423012   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.423022   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:01.423033   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:01.423048   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:01.469842   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:01.469875   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:01.527536   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:01.527566   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:01.542332   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:01.542357   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:01.617252   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:01.617270   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:01.617284   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:02.325502   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:04.326603   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:06.328115   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:03.005979   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:05.504470   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:04.107681   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:06.607619   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:04.195171   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:04.211963   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:04.212015   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:04.252298   71168 cri.go:89] found id: ""
	I0401 19:35:04.252324   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.252334   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:04.252342   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:04.252396   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:04.299619   71168 cri.go:89] found id: ""
	I0401 19:35:04.299649   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.299659   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:04.299667   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:04.299725   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:04.347386   71168 cri.go:89] found id: ""
	I0401 19:35:04.347409   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.347416   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:04.347426   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:04.347473   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:04.385902   71168 cri.go:89] found id: ""
	I0401 19:35:04.385929   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.385937   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:04.385943   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:04.385993   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:04.425235   71168 cri.go:89] found id: ""
	I0401 19:35:04.425258   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.425266   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:04.425271   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:04.425325   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:04.463849   71168 cri.go:89] found id: ""
	I0401 19:35:04.463881   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.463891   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:04.463899   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:04.463974   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:04.501983   71168 cri.go:89] found id: ""
	I0401 19:35:04.502003   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.502010   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:04.502016   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:04.502072   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:04.544082   71168 cri.go:89] found id: ""
	I0401 19:35:04.544103   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.544113   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:04.544124   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:04.544141   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:04.600545   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:04.600578   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:04.617049   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:04.617075   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:04.696927   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:04.696945   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:04.696957   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:04.780024   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:04.780056   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:07.323161   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:07.339368   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:07.339432   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:07.379407   71168 cri.go:89] found id: ""
	I0401 19:35:07.379429   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.379440   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:07.379452   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:07.379497   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:07.418700   71168 cri.go:89] found id: ""
	I0401 19:35:07.418728   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.418737   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:07.418743   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:07.418788   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:07.457580   71168 cri.go:89] found id: ""
	I0401 19:35:07.457606   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.457617   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:07.457624   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:07.457696   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:07.498211   71168 cri.go:89] found id: ""
	I0401 19:35:07.498240   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.498249   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:07.498256   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:07.498318   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:07.539659   71168 cri.go:89] found id: ""
	I0401 19:35:07.539681   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.539692   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:07.539699   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:07.539759   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:07.577414   71168 cri.go:89] found id: ""
	I0401 19:35:07.577440   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.577450   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:07.577456   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:07.577520   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:07.623318   71168 cri.go:89] found id: ""
	I0401 19:35:07.623340   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.623352   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:07.623358   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:07.623416   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:07.664791   71168 cri.go:89] found id: ""
	I0401 19:35:07.664823   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.664834   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:07.664842   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:07.664854   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:07.722158   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:07.722186   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:07.737838   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:07.737876   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:07.813694   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:07.813717   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:07.813728   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:07.899698   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:07.899740   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:08.825778   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:10.825935   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:07.505933   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:10.003529   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:09.107076   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:11.108917   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:10.446184   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:10.460860   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:10.460927   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:10.505656   71168 cri.go:89] found id: ""
	I0401 19:35:10.505685   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.505692   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:10.505698   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:10.505742   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:10.547771   71168 cri.go:89] found id: ""
	I0401 19:35:10.547796   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.547814   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:10.547820   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:10.547876   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:10.584625   71168 cri.go:89] found id: ""
	I0401 19:35:10.584652   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.584664   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:10.584671   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:10.584737   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:10.625512   71168 cri.go:89] found id: ""
	I0401 19:35:10.625541   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.625552   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:10.625559   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:10.625618   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:10.664905   71168 cri.go:89] found id: ""
	I0401 19:35:10.664936   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.664949   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:10.664955   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:10.665015   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:10.703043   71168 cri.go:89] found id: ""
	I0401 19:35:10.703071   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.703082   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:10.703090   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:10.703149   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:10.747750   71168 cri.go:89] found id: ""
	I0401 19:35:10.747777   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.747790   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:10.747796   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:10.747841   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:10.792944   71168 cri.go:89] found id: ""
	I0401 19:35:10.792970   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.792980   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:10.792989   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:10.793004   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:10.854029   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:10.854058   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:10.868968   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:10.868991   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:10.940537   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:10.940564   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:10.940579   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:11.018201   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:11.018231   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:12.826117   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:14.826387   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:12.003995   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:14.503258   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:16.504686   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:13.608777   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:16.108992   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:13.562139   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:13.579370   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:13.579435   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:13.620811   71168 cri.go:89] found id: ""
	I0401 19:35:13.620838   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.620847   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:13.620859   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:13.620919   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:13.661377   71168 cri.go:89] found id: ""
	I0401 19:35:13.661408   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.661419   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:13.661427   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:13.661489   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:13.702413   71168 cri.go:89] found id: ""
	I0401 19:35:13.702436   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.702445   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:13.702453   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:13.702519   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:13.748760   71168 cri.go:89] found id: ""
	I0401 19:35:13.748788   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.748796   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:13.748803   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:13.748874   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:13.795438   71168 cri.go:89] found id: ""
	I0401 19:35:13.795460   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.795472   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:13.795479   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:13.795537   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:13.835572   71168 cri.go:89] found id: ""
	I0401 19:35:13.835601   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.835612   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:13.835619   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:13.835677   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:13.874301   71168 cri.go:89] found id: ""
	I0401 19:35:13.874327   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.874336   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:13.874342   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:13.874387   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:13.914847   71168 cri.go:89] found id: ""
	I0401 19:35:13.914876   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.914883   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:13.914891   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:13.914904   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:13.929329   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:13.929355   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:14.004332   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:14.004358   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:14.004373   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:14.084901   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:14.084935   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:14.134471   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:14.134500   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:16.693432   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:16.710258   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:16.710332   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:16.757213   71168 cri.go:89] found id: ""
	I0401 19:35:16.757243   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.757254   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:16.757261   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:16.757320   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:16.797134   71168 cri.go:89] found id: ""
	I0401 19:35:16.797174   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.797182   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:16.797188   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:16.797233   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:16.839502   71168 cri.go:89] found id: ""
	I0401 19:35:16.839530   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.839541   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:16.839549   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:16.839609   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:16.881380   71168 cri.go:89] found id: ""
	I0401 19:35:16.881406   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.881413   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:16.881419   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:16.881472   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:16.922968   71168 cri.go:89] found id: ""
	I0401 19:35:16.922991   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.923002   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:16.923009   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:16.923069   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:16.961262   71168 cri.go:89] found id: ""
	I0401 19:35:16.961290   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.961301   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:16.961310   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:16.961369   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:16.996901   71168 cri.go:89] found id: ""
	I0401 19:35:16.996929   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.996940   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:16.996947   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:16.997004   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:17.038447   71168 cri.go:89] found id: ""
	I0401 19:35:17.038473   71168 logs.go:276] 0 containers: []
	W0401 19:35:17.038481   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:17.038489   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:17.038500   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:17.079979   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:17.080013   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:17.136973   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:17.137010   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:17.153083   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:17.153108   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:17.232055   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:17.232078   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:17.232096   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:17.326246   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:19.326903   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:20.818889   70687 pod_ready.go:81] duration metric: took 4m0.000381983s for pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace to be "Ready" ...
	E0401 19:35:20.818918   70687 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace to be "Ready" (will not retry!)
	I0401 19:35:20.818938   70687 pod_ready.go:38] duration metric: took 4m5.525170808s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:35:20.818967   70687 kubeadm.go:591] duration metric: took 4m13.404699267s to restartPrimaryControlPlane
	W0401 19:35:20.819026   70687 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:35:20.819059   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:35:19.004932   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:21.504514   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:18.607067   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:20.609619   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:19.813327   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:19.830168   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:19.830229   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:19.875502   71168 cri.go:89] found id: ""
	I0401 19:35:19.875524   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.875532   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:19.875537   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:19.875591   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:19.916084   71168 cri.go:89] found id: ""
	I0401 19:35:19.916107   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.916117   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:19.916125   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:19.916188   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:19.960673   71168 cri.go:89] found id: ""
	I0401 19:35:19.960699   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.960710   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:19.960717   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:19.960796   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:19.998736   71168 cri.go:89] found id: ""
	I0401 19:35:19.998760   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.998768   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:19.998776   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:19.998840   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:20.043382   71168 cri.go:89] found id: ""
	I0401 19:35:20.043408   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.043418   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:20.043425   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:20.043492   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:20.086132   71168 cri.go:89] found id: ""
	I0401 19:35:20.086158   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.086171   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:20.086178   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:20.086239   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:20.131052   71168 cri.go:89] found id: ""
	I0401 19:35:20.131074   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.131081   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:20.131091   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:20.131151   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:20.174668   71168 cri.go:89] found id: ""
	I0401 19:35:20.174693   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.174699   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:20.174707   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:20.174718   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:20.266503   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:20.266521   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:20.266534   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:20.351555   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:20.351586   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:20.400261   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:20.400289   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:20.455149   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:20.455183   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:23.510048   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:26.005267   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:23.109720   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:25.608633   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:22.972675   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:22.987481   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:22.987555   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:23.032429   71168 cri.go:89] found id: ""
	I0401 19:35:23.032453   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.032461   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:23.032467   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:23.032522   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:23.073286   71168 cri.go:89] found id: ""
	I0401 19:35:23.073313   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.073322   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:23.073330   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:23.073397   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:23.115424   71168 cri.go:89] found id: ""
	I0401 19:35:23.115447   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.115454   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:23.115459   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:23.115506   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:23.164883   71168 cri.go:89] found id: ""
	I0401 19:35:23.164908   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.164918   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:23.164925   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:23.164985   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:23.213617   71168 cri.go:89] found id: ""
	I0401 19:35:23.213656   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.213668   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:23.213675   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:23.213787   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:23.264846   71168 cri.go:89] found id: ""
	I0401 19:35:23.264874   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.264886   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:23.264893   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:23.264958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:23.306467   71168 cri.go:89] found id: ""
	I0401 19:35:23.306495   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.306506   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:23.306514   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:23.306566   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:23.358574   71168 cri.go:89] found id: ""
	I0401 19:35:23.358597   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.358608   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:23.358619   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:23.358634   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:23.437486   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:23.437510   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:23.437525   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:23.555307   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:23.555350   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:23.601776   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:23.601808   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:23.666654   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:23.666688   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:26.184503   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:26.199924   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:26.199997   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:26.252151   71168 cri.go:89] found id: ""
	I0401 19:35:26.252181   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.252192   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:26.252199   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:26.252266   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:26.299094   71168 cri.go:89] found id: ""
	I0401 19:35:26.299126   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.299134   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:26.299139   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:26.299194   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:26.340483   71168 cri.go:89] found id: ""
	I0401 19:35:26.340516   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.340533   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:26.340540   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:26.340599   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:26.387153   71168 cri.go:89] found id: ""
	I0401 19:35:26.387180   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.387188   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:26.387194   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:26.387261   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:26.430746   71168 cri.go:89] found id: ""
	I0401 19:35:26.430773   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.430781   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:26.430787   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:26.430854   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:26.478412   71168 cri.go:89] found id: ""
	I0401 19:35:26.478440   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.478451   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:26.478458   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:26.478523   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:26.521120   71168 cri.go:89] found id: ""
	I0401 19:35:26.521150   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.521161   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:26.521168   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:26.521229   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:26.564678   71168 cri.go:89] found id: ""
	I0401 19:35:26.564721   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.564731   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:26.564742   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:26.564757   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:26.625271   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:26.625308   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:26.640505   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:26.640529   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:26.722753   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:26.722777   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:26.722795   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:26.830507   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:26.830551   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:28.505100   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:31.004387   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:28.107396   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:30.108080   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:29.386655   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:29.401232   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:29.401308   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:29.440479   71168 cri.go:89] found id: ""
	I0401 19:35:29.440511   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.440522   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:29.440530   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:29.440590   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:29.479022   71168 cri.go:89] found id: ""
	I0401 19:35:29.479049   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.479057   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:29.479062   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:29.479119   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:29.518179   71168 cri.go:89] found id: ""
	I0401 19:35:29.518208   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.518216   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:29.518222   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:29.518281   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:29.556654   71168 cri.go:89] found id: ""
	I0401 19:35:29.556682   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.556692   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:29.556712   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:29.556772   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:29.593258   71168 cri.go:89] found id: ""
	I0401 19:35:29.593287   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.593295   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:29.593301   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:29.593349   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:29.637215   71168 cri.go:89] found id: ""
	I0401 19:35:29.637243   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.637253   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:29.637261   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:29.637321   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:29.683052   71168 cri.go:89] found id: ""
	I0401 19:35:29.683090   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.683100   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:29.683108   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:29.683164   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:29.730948   71168 cri.go:89] found id: ""
	I0401 19:35:29.730979   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.730991   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:29.731001   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:29.731014   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:29.781969   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:29.782001   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:29.800700   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:29.800729   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:29.877200   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:29.877225   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:29.877244   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:29.958110   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:29.958144   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:32.501060   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:32.519551   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:32.519619   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:32.579776   71168 cri.go:89] found id: ""
	I0401 19:35:32.579802   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.579813   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:32.579824   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:32.579886   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:32.643271   71168 cri.go:89] found id: ""
	I0401 19:35:32.643300   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.643312   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:32.643322   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:32.643387   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:32.688576   71168 cri.go:89] found id: ""
	I0401 19:35:32.688605   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.688614   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:32.688619   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:32.688678   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:32.729867   71168 cri.go:89] found id: ""
	I0401 19:35:32.729890   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.729898   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:32.729906   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:32.729962   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:32.771485   71168 cri.go:89] found id: ""
	I0401 19:35:32.771508   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.771515   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:32.771521   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:32.771574   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:32.809362   71168 cri.go:89] found id: ""
	I0401 19:35:32.809385   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.809393   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:32.809398   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:32.809458   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:32.844916   71168 cri.go:89] found id: ""
	I0401 19:35:32.844941   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.844950   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:32.844955   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:32.845000   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:32.884638   71168 cri.go:89] found id: ""
	I0401 19:35:32.884660   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.884670   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:32.884680   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:32.884695   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:32.937462   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:32.937489   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:32.952842   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:32.952871   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 19:35:33.005516   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:35.504755   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:32.608051   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:35.106708   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:37.108135   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	W0401 19:35:33.035254   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:33.035278   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:33.035294   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:33.114963   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:33.114994   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:35.662190   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:35.675960   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:35.676016   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:35.717300   71168 cri.go:89] found id: ""
	I0401 19:35:35.717329   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.717340   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:35.717347   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:35.717409   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:35.756687   71168 cri.go:89] found id: ""
	I0401 19:35:35.756713   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.756723   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:35.756730   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:35.756788   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:35.796995   71168 cri.go:89] found id: ""
	I0401 19:35:35.797017   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.797025   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:35.797030   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:35.797083   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:35.840419   71168 cri.go:89] found id: ""
	I0401 19:35:35.840444   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.840455   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:35.840462   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:35.840523   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:35.880059   71168 cri.go:89] found id: ""
	I0401 19:35:35.880093   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.880107   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:35.880113   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:35.880171   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:35.929491   71168 cri.go:89] found id: ""
	I0401 19:35:35.929515   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.929523   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:35.929530   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:35.929584   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:35.968745   71168 cri.go:89] found id: ""
	I0401 19:35:35.968771   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.968778   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:35.968784   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:35.968833   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:36.014294   71168 cri.go:89] found id: ""
	I0401 19:35:36.014318   71168 logs.go:276] 0 containers: []
	W0401 19:35:36.014328   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:36.014338   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:36.014359   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:36.068418   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:36.068450   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:36.086343   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:36.086367   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:36.172027   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:36.172053   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:36.172067   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:36.250046   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:36.250080   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:38.004007   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:40.004138   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:39.607714   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:42.107775   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:38.794261   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:38.809535   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:38.809597   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:38.849139   71168 cri.go:89] found id: ""
	I0401 19:35:38.849167   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.849176   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:38.849181   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:38.849238   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:38.886787   71168 cri.go:89] found id: ""
	I0401 19:35:38.886811   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.886821   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:38.886828   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:38.886891   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:38.923388   71168 cri.go:89] found id: ""
	I0401 19:35:38.923419   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.923431   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:38.923438   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:38.923497   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:38.959583   71168 cri.go:89] found id: ""
	I0401 19:35:38.959608   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.959619   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:38.959626   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:38.959682   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:38.998201   71168 cri.go:89] found id: ""
	I0401 19:35:38.998226   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.998233   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:38.998238   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:38.998294   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:39.039669   71168 cri.go:89] found id: ""
	I0401 19:35:39.039692   71168 logs.go:276] 0 containers: []
	W0401 19:35:39.039703   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:39.039710   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:39.039767   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:39.077331   71168 cri.go:89] found id: ""
	I0401 19:35:39.077358   71168 logs.go:276] 0 containers: []
	W0401 19:35:39.077366   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:39.077371   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:39.077423   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:39.125999   71168 cri.go:89] found id: ""
	I0401 19:35:39.126021   71168 logs.go:276] 0 containers: []
	W0401 19:35:39.126031   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:39.126041   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:39.126054   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:39.183579   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:39.183612   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:39.201200   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:39.201227   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:39.282262   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:39.282280   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:39.282291   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:39.365340   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:39.365370   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
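Editor's note (not part of the captured log): the cycle above repeats one probe per control-plane component, `sudo crictl ps -a --quiet --name=<component>`, and reports an empty ID list as "No container was found matching". A minimal standalone sketch of that probe pattern, assuming crictl is installed on the node and may be run with sudo (this is not minikube source code):

    // crictl_probe.go - illustrative only; mirrors the probe pattern in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all (running or exited) CRI containers whose name matches name.
    func containerIDs(name string) ([]string, error) {
        // --quiet prints one container ID per line; -a includes exited containers.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
            ids, err := containerIDs(c)
            switch {
            case err != nil:
                fmt.Printf("probe %q failed: %v\n", c, err)
            case len(ids) == 0:
                fmt.Printf("no container was found matching %q\n", c)
            default:
                fmt.Printf("%q: %v\n", c, ids)
            }
        }
    }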
	I0401 19:35:41.914909   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:41.929243   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:41.929317   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:41.975594   71168 cri.go:89] found id: ""
	I0401 19:35:41.975622   71168 logs.go:276] 0 containers: []
	W0401 19:35:41.975632   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:41.975639   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:41.975701   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:42.023558   71168 cri.go:89] found id: ""
	I0401 19:35:42.023585   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.023596   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:42.023602   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:42.023662   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:42.074242   71168 cri.go:89] found id: ""
	I0401 19:35:42.074266   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.074276   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:42.074283   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:42.074340   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:42.123327   71168 cri.go:89] found id: ""
	I0401 19:35:42.123358   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.123370   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:42.123378   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:42.123452   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:42.168931   71168 cri.go:89] found id: ""
	I0401 19:35:42.168961   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.168972   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:42.168980   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:42.169037   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:42.211747   71168 cri.go:89] found id: ""
	I0401 19:35:42.211774   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.211784   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:42.211793   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:42.211849   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:42.251809   71168 cri.go:89] found id: ""
	I0401 19:35:42.251830   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.251841   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:42.251849   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:42.251908   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:42.293266   71168 cri.go:89] found id: ""
	I0401 19:35:42.293361   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.293377   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:42.293388   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:42.293405   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:42.364502   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:42.364553   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:42.381147   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:42.381180   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:42.464219   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:42.464238   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:42.464249   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:42.544564   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:42.544594   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:42.006061   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:44.504700   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:46.505615   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:44.606915   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:46.100004   70962 pod_ready.go:81] duration metric: took 4m0.000146584s for pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace to be "Ready" ...
	E0401 19:35:46.100029   70962 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0401 19:35:46.100044   70962 pod_ready.go:38] duration metric: took 4m10.491414096s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:35:46.100088   70962 kubeadm.go:591] duration metric: took 4m18.223285856s to restartPrimaryControlPlane
	W0401 19:35:46.100141   70962 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:35:46.100164   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
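Editor's note (not part of the captured log): the 4m0s wait that expires above is a poll-until-Ready loop that gives up without retrying once its deadline passes, after which minikube falls back to `kubeadm reset` and a fresh init. A dependency-free sketch of that polling shape, with hypothetical helper names (minikube's own version lives in pod_ready.go and uses client-go):

    // waitpoll.go - illustrative polling-with-deadline sketch, not minikube source.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitFor polls check every interval until it reports true or the timeout elapses.
    func waitFor(check func() (bool, error), interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            ok, err := check()
            if err != nil {
                return err
            }
            if ok {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for condition (will not retry)")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        start := time.Now()
        // Stand-in for "is pod metrics-server Ready?"; always false here, so this times out.
        err := waitFor(func() (bool, error) { return false, nil }, 500*time.Millisecond, 3*time.Second)
        fmt.Printf("took %s: %v\n", time.Since(start).Round(time.Millisecond), err)
    }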
	I0401 19:35:45.105777   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:45.119911   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:45.119976   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:45.161871   71168 cri.go:89] found id: ""
	I0401 19:35:45.161890   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.161897   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:45.161902   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:45.161949   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:45.198677   71168 cri.go:89] found id: ""
	I0401 19:35:45.198702   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.198710   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:45.198715   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:45.198776   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:45.236938   71168 cri.go:89] found id: ""
	I0401 19:35:45.236972   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.236983   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:45.236990   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:45.237052   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:45.280621   71168 cri.go:89] found id: ""
	I0401 19:35:45.280650   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.280661   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:45.280668   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:45.280727   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:45.326794   71168 cri.go:89] found id: ""
	I0401 19:35:45.326818   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.326827   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:45.326834   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:45.326892   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:45.369405   71168 cri.go:89] found id: ""
	I0401 19:35:45.369431   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.369441   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:45.369446   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:45.369501   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:45.407609   71168 cri.go:89] found id: ""
	I0401 19:35:45.407635   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.407643   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:45.407648   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:45.407720   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:45.444848   71168 cri.go:89] found id: ""
	I0401 19:35:45.444871   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.444881   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:45.444891   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:45.444911   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:45.531938   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:45.531957   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:45.531972   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:45.617109   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:45.617141   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:45.663559   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:45.663591   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:45.717622   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:45.717670   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:49.004037   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:51.004650   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:48.234834   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:48.250543   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:48.250606   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:48.294396   71168 cri.go:89] found id: ""
	I0401 19:35:48.294423   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.294432   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:48.294439   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:48.294504   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:48.336866   71168 cri.go:89] found id: ""
	I0401 19:35:48.336892   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.336902   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:48.336908   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:48.336965   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:48.376031   71168 cri.go:89] found id: ""
	I0401 19:35:48.376065   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.376076   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:48.376084   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:48.376142   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:48.414975   71168 cri.go:89] found id: ""
	I0401 19:35:48.414995   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.415003   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:48.415008   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:48.415058   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:48.453484   71168 cri.go:89] found id: ""
	I0401 19:35:48.453513   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.453524   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:48.453532   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:48.453593   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:48.487712   71168 cri.go:89] found id: ""
	I0401 19:35:48.487739   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.487749   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:48.487757   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:48.487815   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:48.533331   71168 cri.go:89] found id: ""
	I0401 19:35:48.533364   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.533375   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:48.533383   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:48.533442   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:48.574103   71168 cri.go:89] found id: ""
	I0401 19:35:48.574131   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.574139   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:48.574147   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:48.574160   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:48.632068   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:48.632098   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:48.649342   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:48.649369   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:48.721799   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:48.721822   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:48.721836   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:48.821549   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:48.821584   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:51.364852   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:51.380281   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:51.380362   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:51.423383   71168 cri.go:89] found id: ""
	I0401 19:35:51.423412   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.423422   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:51.423430   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:51.423490   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:51.470331   71168 cri.go:89] found id: ""
	I0401 19:35:51.470359   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.470370   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:51.470378   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:51.470441   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:51.520310   71168 cri.go:89] found id: ""
	I0401 19:35:51.520339   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.520350   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:51.520358   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:51.520414   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:51.568681   71168 cri.go:89] found id: ""
	I0401 19:35:51.568706   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.568716   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:51.568724   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:51.568843   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:51.615146   71168 cri.go:89] found id: ""
	I0401 19:35:51.615174   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.615185   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:51.615193   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:51.615256   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:51.658678   71168 cri.go:89] found id: ""
	I0401 19:35:51.658703   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.658712   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:51.658720   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:51.658791   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:51.700071   71168 cri.go:89] found id: ""
	I0401 19:35:51.700097   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.700108   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:51.700114   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:51.700177   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:51.746772   71168 cri.go:89] found id: ""
	I0401 19:35:51.746798   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.746809   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:51.746826   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:51.746849   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:51.762321   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:51.762350   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:51.843300   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:51.843322   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:51.843337   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:51.919059   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:51.919090   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:51.965899   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:51.965925   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
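Editor's note (not part of the captured log): every "describe nodes" attempt above fails the same way because nothing is listening on the apiserver port yet ("connection to the server localhost:8443 was refused"). A quick reachability probe of that port, as an illustration only (the localhost:8443 address is taken from the error text above):

    // apiserver_probe.go - illustrative; checks whether the apiserver port accepts connections.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // Matches the situation in the log: connection refused, so kubectl cannot work yet.
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }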
	I0401 19:35:53.564613   70687 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.745530657s)
	I0401 19:35:53.564696   70687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:35:53.582161   70687 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:35:53.593313   70687 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:35:53.604441   70687 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:35:53.604460   70687 kubeadm.go:156] found existing configuration files:
	
	I0401 19:35:53.604502   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:35:53.615367   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:35:53.615426   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:35:53.626375   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:35:53.636924   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:35:53.636975   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:35:53.647493   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:35:53.657319   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:35:53.657373   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:35:53.667422   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:35:53.677235   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:35:53.677308   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
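Editor's note (not part of the captured log): the grep/rm sequence above removes any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint; here the files simply do not exist, so each grep exits with status 2 and the rm is a no-op. A small sketch of the same check done directly in Go (illustrative only; the endpoint string and file list are copied from the log, and removal needs root just like the `sudo rm -f` above):

    // stale_kubeconfig.go - illustrative sketch of the stale-config cleanup step.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(f)
            if err != nil {
                fmt.Printf("%s: not present, nothing to clean (%v)\n", f, err)
                continue
            }
            if !strings.Contains(string(data), endpoint) {
                fmt.Printf("%s: does not reference %s, removing\n", f, endpoint)
                _ = os.Remove(f) // requires root on a real node
            }
        }
    }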
	I0401 19:35:53.688043   70687 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:35:53.894204   70687 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:35:53.504486   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:55.505966   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:54.523484   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:54.542004   71168 kubeadm.go:591] duration metric: took 4m4.024054342s to restartPrimaryControlPlane
	W0401 19:35:54.542067   71168 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:35:54.542088   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:35:55.179619   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:35:55.196424   71168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:35:55.209517   71168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:35:55.222643   71168 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:35:55.222664   71168 kubeadm.go:156] found existing configuration files:
	
	I0401 19:35:55.222714   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:35:55.234756   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:35:55.234813   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:35:55.246725   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:35:55.258440   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:35:55.258499   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:35:55.270106   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:35:55.280724   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:35:55.280776   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:35:55.293630   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:35:55.305588   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:35:55.305660   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:35:55.318308   71168 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:35:55.574896   71168 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:35:58.004494   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:00.505168   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:02.622337   70687 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 19:36:02.622433   70687 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:36:02.622548   70687 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:36:02.622659   70687 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:36:02.622794   70687 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:36:02.622883   70687 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:36:02.624550   70687 out.go:204]   - Generating certificates and keys ...
	I0401 19:36:02.624640   70687 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:36:02.624734   70687 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:36:02.624861   70687 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:36:02.624952   70687 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:36:02.625042   70687 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:36:02.625114   70687 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:36:02.625206   70687 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:36:02.625271   70687 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:36:02.625337   70687 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:36:02.625398   70687 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:36:02.625430   70687 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:36:02.625475   70687 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:36:02.625519   70687 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:36:02.625567   70687 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 19:36:02.625630   70687 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:36:02.625744   70687 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:36:02.625825   70687 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:36:02.625938   70687 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:36:02.626041   70687 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:36:02.627616   70687 out.go:204]   - Booting up control plane ...
	I0401 19:36:02.627744   70687 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:36:02.627812   70687 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:36:02.627878   70687 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:36:02.627976   70687 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:36:02.628046   70687 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:36:02.628098   70687 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:36:02.628273   70687 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:36:02.628354   70687 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.502318 seconds
	I0401 19:36:02.628467   70687 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 19:36:02.628587   70687 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 19:36:02.628642   70687 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 19:36:02.628800   70687 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-882095 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 19:36:02.628849   70687 kubeadm.go:309] [bootstrap-token] Using token: 821cxx.fac41nwqi8u5mwgu
	I0401 19:36:02.630202   70687 out.go:204]   - Configuring RBAC rules ...
	I0401 19:36:02.630328   70687 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 19:36:02.630413   70687 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 19:36:02.630593   70687 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 19:36:02.630794   70687 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 19:36:02.630941   70687 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 19:36:02.631049   70687 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 19:36:02.631205   70687 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 19:36:02.631255   70687 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 19:36:02.631318   70687 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 19:36:02.631326   70687 kubeadm.go:309] 
	I0401 19:36:02.631412   70687 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 19:36:02.631421   70687 kubeadm.go:309] 
	I0401 19:36:02.631527   70687 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 19:36:02.631534   70687 kubeadm.go:309] 
	I0401 19:36:02.631560   70687 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 19:36:02.631649   70687 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 19:36:02.631721   70687 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 19:36:02.631731   70687 kubeadm.go:309] 
	I0401 19:36:02.631810   70687 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 19:36:02.631822   70687 kubeadm.go:309] 
	I0401 19:36:02.631896   70687 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 19:36:02.631910   70687 kubeadm.go:309] 
	I0401 19:36:02.631986   70687 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 19:36:02.632088   70687 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 19:36:02.632181   70687 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 19:36:02.632190   70687 kubeadm.go:309] 
	I0401 19:36:02.632319   70687 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 19:36:02.632427   70687 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 19:36:02.632437   70687 kubeadm.go:309] 
	I0401 19:36:02.632532   70687 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 821cxx.fac41nwqi8u5mwgu \
	I0401 19:36:02.632695   70687 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 \
	I0401 19:36:02.632726   70687 kubeadm.go:309] 	--control-plane 
	I0401 19:36:02.632736   70687 kubeadm.go:309] 
	I0401 19:36:02.632860   70687 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 19:36:02.632875   70687 kubeadm.go:309] 
	I0401 19:36:02.632983   70687 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 821cxx.fac41nwqi8u5mwgu \
	I0401 19:36:02.633118   70687 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 
	I0401 19:36:02.633132   70687 cni.go:84] Creating CNI manager for ""
	I0401 19:36:02.633138   70687 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:36:02.634595   70687 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:36:02.635812   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:36:02.671750   70687 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
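Editor's note (not part of the captured log): the 457-byte payload written to /etc/cni/net.d/1-k8s.conflist is not shown in the log, so the sketch below uses a typical bridge-plus-portmap conflist as a stand-in; treat the JSON content, subnet, and field values as assumptions rather than the file minikube actually wrote:

    // write_cni_conflist.go - illustrative; the conflist body is a guessed stand-in.
    package main

    import (
        "log"
        "os"
    )

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        // Both steps need root on a real node, matching the sudo mkdir/scp in the log.
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            log.Fatal(err)
        }
        log.Println("wrote /etc/cni/net.d/1-k8s.conflist")
    }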
	I0401 19:36:02.705562   70687 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 19:36:02.705657   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:02.705671   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-882095 minikube.k8s.io/updated_at=2024_04_01T19_36_02_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=embed-certs-882095 minikube.k8s.io/primary=true
	I0401 19:36:02.762626   70687 ops.go:34] apiserver oom_adj: -16
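Editor's note (not part of the captured log): the check above reads /proc/<apiserver pid>/oom_adj; the more negative the value, the less likely the kernel's OOM killer is to pick the apiserver. A minimal Go equivalent of the `cat /proc/$(pgrep kube-apiserver)/oom_adj` one-liner, as an illustration:

    // oom_adj.go - illustrative equivalent of the shell one-liner in the log above.
    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            log.Fatalf("kube-apiserver not running: %v", err)
        }
        // pgrep may print several PIDs; take the first, as $(pgrep ...) effectively does here.
        pid := strings.Fields(string(out))[0]
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
    }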
	I0401 19:36:03.065957   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:03.566513   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:04.066178   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:04.566321   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:05.066798   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:05.566877   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:06.066520   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:03.004878   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:05.505057   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:06.566982   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:07.066931   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:07.566107   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:08.066843   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:08.566186   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:09.066550   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:09.566205   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:10.066287   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:10.566902   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:11.066656   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:08.005380   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:10.504026   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:11.566894   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:12.066235   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:12.566599   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:13.066132   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:13.566865   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:14.066759   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:14.566435   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:15.066907   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:15.566851   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:16.066880   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:16.158125   70687 kubeadm.go:1107] duration metric: took 13.452541301s to wait for elevateKubeSystemPrivileges
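Editor's note (not part of the captured log): the half-second `kubectl get sa default` loop above simply retries until the default service account exists, which is the signal that the cluster-admin RBAC binding created earlier can take effect. A rough sketch of that retry, for illustration (the kubectl and kubeconfig paths are copied from the log; the timeout value is an assumption):

    // wait_default_sa.go - illustrative retry loop, not minikube source.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.29.3/kubectl"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account exists; safe to rely on RBAC defaults")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("gave up waiting for the default service account")
    }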
	W0401 19:36:16.158168   70687 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 19:36:16.158176   70687 kubeadm.go:393] duration metric: took 5m8.800288084s to StartCluster
	I0401 19:36:16.158195   70687 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:36:16.158268   70687 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:36:16.159976   70687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:36:16.160254   70687 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:36:16.162239   70687 out.go:177] * Verifying Kubernetes components...
	I0401 19:36:16.160346   70687 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 19:36:16.162276   70687 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-882095"
	I0401 19:36:16.162311   70687 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-882095"
	W0401 19:36:16.162320   70687 addons.go:243] addon storage-provisioner should already be in state true
	I0401 19:36:16.162339   70687 addons.go:69] Setting default-storageclass=true in profile "embed-certs-882095"
	I0401 19:36:16.162348   70687 addons.go:69] Setting metrics-server=true in profile "embed-certs-882095"
	I0401 19:36:16.162363   70687 addons.go:234] Setting addon metrics-server=true in "embed-certs-882095"
	W0401 19:36:16.162371   70687 addons.go:243] addon metrics-server should already be in state true
	I0401 19:36:16.162377   70687 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-882095"
	I0401 19:36:16.162384   70687 host.go:66] Checking if "embed-certs-882095" exists ...
	I0401 19:36:16.162345   70687 host.go:66] Checking if "embed-certs-882095" exists ...
	I0401 19:36:16.163767   70687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:36:16.160484   70687 config.go:182] Loaded profile config "embed-certs-882095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:36:16.162673   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.162687   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.163886   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.163900   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.162704   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.163963   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.180743   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41647
	I0401 19:36:16.180759   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46707
	I0401 19:36:16.180746   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44419
	I0401 19:36:16.181334   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.181342   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.181369   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.181830   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.181848   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.181973   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.181991   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.182001   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.182007   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.182187   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.182360   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.182393   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.182592   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:36:16.182726   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.182753   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.182829   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.182871   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.186198   70687 addons.go:234] Setting addon default-storageclass=true in "embed-certs-882095"
	W0401 19:36:16.186226   70687 addons.go:243] addon default-storageclass should already be in state true
	I0401 19:36:16.186258   70687 host.go:66] Checking if "embed-certs-882095" exists ...
	I0401 19:36:16.186603   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.186636   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.198494   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I0401 19:36:16.198862   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.199298   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.199315   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.199777   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.200056   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:36:16.201955   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39769
	I0401 19:36:16.202167   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:36:16.202416   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.204728   70687 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:36:16.202891   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.205309   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35751
	I0401 19:36:16.207964   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.208022   70687 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:36:16.208038   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 19:36:16.208057   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:36:16.208345   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.208482   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.208550   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:36:16.209106   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.209121   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.209764   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.210220   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.210258   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.211015   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:36:16.213549   70687 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 19:36:16.212105   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.215606   70687 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 19:36:16.213577   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:36:16.215625   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 19:36:16.215632   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.212867   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:36:16.215647   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:36:16.215791   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:36:16.215913   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:36:16.216028   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:36:16.218302   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.218924   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:36:16.218948   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.219174   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:36:16.219340   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:36:16.219496   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:36:16.219818   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:36:16.227813   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0401 19:36:16.228198   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.228612   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.228635   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.228989   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.229159   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:36:16.230712   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:36:16.230969   70687 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 19:36:16.230987   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 19:36:16.231003   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:36:16.233712   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.234102   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:36:16.234126   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.234273   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:36:16.234435   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:36:16.234593   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:36:16.234753   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:36:16.332504   70687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:36:16.354423   70687 node_ready.go:35] waiting up to 6m0s for node "embed-certs-882095" to be "Ready" ...
	I0401 19:36:16.363527   70687 node_ready.go:49] node "embed-certs-882095" has status "Ready":"True"
	I0401 19:36:16.363555   70687 node_ready.go:38] duration metric: took 9.10669ms for node "embed-certs-882095" to be "Ready" ...
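The node_ready wait above polls the Node object until its Ready condition reports True. For anyone reproducing the check by hand outside the harness, an approximate equivalent (profile and node name taken from the log; the kubectl invocation itself is an illustrative sketch, not minikube's code) is:

    # Wait up to 6 minutes for the node to report Ready, then show it
    kubectl --context embed-certs-882095 wait --for=condition=Ready \
        node/embed-certs-882095 --timeout=6m
    kubectl --context embed-certs-882095 get node embed-certs-882095 -o wide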
	I0401 19:36:16.363567   70687 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:36:16.369606   70687 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-fx6hf" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:16.435769   70687 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 19:36:16.435793   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 19:36:16.450934   70687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:36:16.468137   70687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 19:36:16.474209   70687 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 19:36:16.474233   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 19:36:13.003028   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:15.004924   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:16.530201   70687 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:36:16.530222   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 19:36:16.607557   70687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:36:17.044156   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.044183   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.044165   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.044244   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.044569   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.044606   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.044617   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.044624   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.044630   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.044639   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.044656   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.044657   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Closing plugin on server side
	I0401 19:36:17.044670   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.044616   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Closing plugin on server side
	I0401 19:36:17.044947   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.044963   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.044964   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.044973   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.045019   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Closing plugin on server side
	I0401 19:36:17.058441   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.058469   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.058718   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.058735   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.276263   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.276283   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.276548   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.276562   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.276571   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.276584   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.276823   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.276837   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.276852   70687 addons.go:470] Verifying addon metrics-server=true in "embed-certs-882095"
	I0401 19:36:17.278536   70687 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0401 19:36:17.279740   70687 addons.go:505] duration metric: took 1.119396s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
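With storage-provisioner, default-storageclass and metrics-server reported as enabled, the result can be spot-checked from outside the harness. The commands below are an illustrative sketch; the resource names (the metrics-server Deployment and the v1beta1.metrics.k8s.io APIService) are the stock addon objects and are assumed rather than read from this log:

    kubectl --context embed-certs-882095 -n kube-system get deploy metrics-server
    kubectl --context embed-certs-882095 get apiservice v1beta1.metrics.k8s.io
    kubectl --context embed-certs-882095 get storageclass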
	I0401 19:36:18.412746   70687 pod_ready.go:102] pod "coredns-76f75df574-fx6hf" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:19.378799   70687 pod_ready.go:92] pod "coredns-76f75df574-fx6hf" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.378819   70687 pod_ready.go:81] duration metric: took 3.009189982s for pod "coredns-76f75df574-fx6hf" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.378828   70687 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hwbw6" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.384482   70687 pod_ready.go:92] pod "coredns-76f75df574-hwbw6" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.384498   70687 pod_ready.go:81] duration metric: took 5.664781ms for pod "coredns-76f75df574-hwbw6" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.384507   70687 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.390258   70687 pod_ready.go:92] pod "etcd-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.390274   70687 pod_ready.go:81] duration metric: took 5.761319ms for pod "etcd-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.390281   70687 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.395592   70687 pod_ready.go:92] pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.395611   70687 pod_ready.go:81] duration metric: took 5.323181ms for pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.395622   70687 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.400979   70687 pod_ready.go:92] pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.400994   70687 pod_ready.go:81] duration metric: took 5.365282ms for pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.401002   70687 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mbs4m" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.775009   70687 pod_ready.go:92] pod "kube-proxy-mbs4m" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.775036   70687 pod_ready.go:81] duration metric: took 374.027521ms for pod "kube-proxy-mbs4m" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.775047   70687 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:20.174962   70687 pod_ready.go:92] pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:20.174986   70687 pod_ready.go:81] duration metric: took 399.930828ms for pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:20.174994   70687 pod_ready.go:38] duration metric: took 3.811414774s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
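The extra wait above is minikube's own polling loop in pod_ready.go. A rough stand-alone equivalent using kubectl, with the label selectors copied from the log line (the kubectl wait form itself is a sketch, not the harness code), would be:

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl --context embed-certs-882095 -n kube-system wait \
          --for=condition=Ready pod -l "$sel" --timeout=6m
    done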
	I0401 19:36:20.175006   70687 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:36:20.175064   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:36:20.191452   70687 api_server.go:72] duration metric: took 4.031156406s to wait for apiserver process to appear ...
	I0401 19:36:20.191477   70687 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:36:20.191498   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:36:20.196706   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I0401 19:36:20.197772   70687 api_server.go:141] control plane version: v1.29.3
	I0401 19:36:20.197791   70687 api_server.go:131] duration metric: took 6.308074ms to wait for apiserver health ...
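The healthz probe logged here is a plain HTTPS GET against the apiserver endpoint shown in the log; done by hand it reduces to (the -k flag skips certificate verification and is for illustration only):

    curl -k https://192.168.39.190:8443/healthz
    # a healthy apiserver answers 200 with the body: ok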
	I0401 19:36:20.197799   70687 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:36:20.380616   70687 system_pods.go:59] 9 kube-system pods found
	I0401 19:36:20.380645   70687 system_pods.go:61] "coredns-76f75df574-fx6hf" [1c07b740-3374-4a54-a786-784b23ec6b83] Running
	I0401 19:36:20.380651   70687 system_pods.go:61] "coredns-76f75df574-hwbw6" [7b12145a-2689-47e9-9724-d80790ed079c] Running
	I0401 19:36:20.380657   70687 system_pods.go:61] "etcd-embed-certs-882095" [3848d128-2fde-42f5-9543-b8d0343ba15b] Running
	I0401 19:36:20.380663   70687 system_pods.go:61] "kube-apiserver-embed-certs-882095" [116c5cd1-2d04-4a85-96e9-bd1e6af4cba4] Running
	I0401 19:36:20.380668   70687 system_pods.go:61] "kube-controller-manager-embed-certs-882095" [8a2282cf-2a87-4cee-a482-355e92048642] Running
	I0401 19:36:20.380672   70687 system_pods.go:61] "kube-proxy-mbs4m" [ffccbae0-7538-4a75-a6ce-afce49865f07] Running
	I0401 19:36:20.380676   70687 system_pods.go:61] "kube-scheduler-embed-certs-882095" [d2554007-1c9c-4238-809a-72aae1fb7de3] Running
	I0401 19:36:20.380684   70687 system_pods.go:61] "metrics-server-57f55c9bc5-dktr6" [c6adfcab-c746-4ad8-abe2-8b300389a4f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:20.380689   70687 system_pods.go:61] "storage-provisioner" [bcff0d1d-a555-4b25-9aa5-7ab1188c21fd] Running
	I0401 19:36:20.380700   70687 system_pods.go:74] duration metric: took 182.895079ms to wait for pod list to return data ...
	I0401 19:36:20.380711   70687 default_sa.go:34] waiting for default service account to be created ...
	I0401 19:36:20.574739   70687 default_sa.go:45] found service account: "default"
	I0401 19:36:20.574771   70687 default_sa.go:55] duration metric: took 194.049249ms for default service account to be created ...
	I0401 19:36:20.574785   70687 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 19:36:20.781600   70687 system_pods.go:86] 9 kube-system pods found
	I0401 19:36:20.781630   70687 system_pods.go:89] "coredns-76f75df574-fx6hf" [1c07b740-3374-4a54-a786-784b23ec6b83] Running
	I0401 19:36:20.781638   70687 system_pods.go:89] "coredns-76f75df574-hwbw6" [7b12145a-2689-47e9-9724-d80790ed079c] Running
	I0401 19:36:20.781658   70687 system_pods.go:89] "etcd-embed-certs-882095" [3848d128-2fde-42f5-9543-b8d0343ba15b] Running
	I0401 19:36:20.781664   70687 system_pods.go:89] "kube-apiserver-embed-certs-882095" [116c5cd1-2d04-4a85-96e9-bd1e6af4cba4] Running
	I0401 19:36:20.781672   70687 system_pods.go:89] "kube-controller-manager-embed-certs-882095" [8a2282cf-2a87-4cee-a482-355e92048642] Running
	I0401 19:36:20.781678   70687 system_pods.go:89] "kube-proxy-mbs4m" [ffccbae0-7538-4a75-a6ce-afce49865f07] Running
	I0401 19:36:20.781686   70687 system_pods.go:89] "kube-scheduler-embed-certs-882095" [d2554007-1c9c-4238-809a-72aae1fb7de3] Running
	I0401 19:36:20.781695   70687 system_pods.go:89] "metrics-server-57f55c9bc5-dktr6" [c6adfcab-c746-4ad8-abe2-8b300389a4f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:20.781705   70687 system_pods.go:89] "storage-provisioner" [bcff0d1d-a555-4b25-9aa5-7ab1188c21fd] Running
	I0401 19:36:20.781722   70687 system_pods.go:126] duration metric: took 206.928658ms to wait for k8s-apps to be running ...
	I0401 19:36:20.781738   70687 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 19:36:20.781789   70687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:36:20.798910   70687 system_svc.go:56] duration metric: took 17.163227ms WaitForService to wait for kubelet
	I0401 19:36:20.798940   70687 kubeadm.go:576] duration metric: took 4.638649198s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:36:20.798962   70687 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:36:20.975011   70687 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:36:20.975034   70687 node_conditions.go:123] node cpu capacity is 2
	I0401 19:36:20.975045   70687 node_conditions.go:105] duration metric: took 176.077669ms to run NodePressure ...
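The NodePressure verification reads the node's reported capacity; the same figures (17734596Ki ephemeral storage, 2 CPUs) can be pulled directly with a jsonpath query (illustrative, not part of the test output):

    kubectl --context embed-certs-882095 get node embed-certs-882095 \
        -o jsonpath='{.status.capacity.ephemeral-storage}{"\n"}{.status.capacity.cpu}{"\n"}'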
	I0401 19:36:20.975055   70687 start.go:240] waiting for startup goroutines ...
	I0401 19:36:20.975061   70687 start.go:245] waiting for cluster config update ...
	I0401 19:36:20.975070   70687 start.go:254] writing updated cluster config ...
	I0401 19:36:20.975313   70687 ssh_runner.go:195] Run: rm -f paused
	I0401 19:36:21.024261   70687 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0401 19:36:21.026583   70687 out.go:177] * Done! kubectl is now configured to use "embed-certs-882095" cluster and "default" namespace by default
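Once the profile reports Done!, the kubeconfig context for the cluster is in place, so it can be driven with plain kubectl or queried through minikube itself (illustrative commands; the minikube binary name here is the generic one, not the harness build path):

    kubectl config current-context
    kubectl --context embed-certs-882095 get pods -A
    minikube -p embed-certs-882095 status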
	I0401 19:36:17.504621   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:20.003964   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:18.623277   70962 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.523094705s)
	I0401 19:36:18.623344   70962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:36:18.640939   70962 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:36:18.653983   70962 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:36:18.666162   70962 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:36:18.666182   70962 kubeadm.go:156] found existing configuration files:
	
	I0401 19:36:18.666233   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0401 19:36:18.679043   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:36:18.679092   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:36:18.690185   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0401 19:36:18.703017   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:36:18.703078   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:36:18.714986   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0401 19:36:18.727138   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:36:18.727188   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:36:18.737886   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0401 19:36:18.748013   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:36:18.748064   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
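The sequence above is a stale-config sweep: each kubeconfig-style file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the grep fails (exit status 2 here simply means the file does not exist). Condensed into one loop, the pattern being executed is:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8444' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done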
	I0401 19:36:18.758552   70962 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:36:18.988309   70962 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:36:22.004400   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:24.004510   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:26.504264   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:28.053408   70962 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 19:36:28.053478   70962 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:36:28.053544   70962 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:36:28.053677   70962 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:36:28.053837   70962 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:36:28.053953   70962 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:36:28.055426   70962 out.go:204]   - Generating certificates and keys ...
	I0401 19:36:28.055513   70962 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:36:28.055614   70962 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:36:28.055742   70962 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:36:28.055834   70962 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:36:28.055942   70962 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:36:28.056022   70962 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:36:28.056104   70962 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:36:28.056167   70962 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:36:28.056250   70962 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:36:28.056331   70962 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:36:28.056371   70962 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:36:28.056449   70962 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:36:28.056531   70962 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:36:28.056600   70962 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 19:36:28.056677   70962 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:36:28.056772   70962 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:36:28.056870   70962 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:36:28.057006   70962 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:36:28.057100   70962 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:36:28.058575   70962 out.go:204]   - Booting up control plane ...
	I0401 19:36:28.058693   70962 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:36:28.058773   70962 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:36:28.058830   70962 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:36:28.058923   70962 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:36:28.058998   70962 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:36:28.059032   70962 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:36:28.059201   70962 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:36:28.059307   70962 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003148 seconds
	I0401 19:36:28.059432   70962 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 19:36:28.059592   70962 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 19:36:28.059665   70962 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 19:36:28.059892   70962 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-734648 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 19:36:28.059966   70962 kubeadm.go:309] [bootstrap-token] Using token: x76swh.zbuhmc8jrh5hodf9
	I0401 19:36:28.061321   70962 out.go:204]   - Configuring RBAC rules ...
	I0401 19:36:28.061450   70962 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 19:36:28.061577   70962 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 19:36:28.061803   70962 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 19:36:28.061993   70962 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 19:36:28.062153   70962 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 19:36:28.062252   70962 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 19:36:28.062363   70962 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 19:36:28.062422   70962 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 19:36:28.062481   70962 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 19:36:28.062493   70962 kubeadm.go:309] 
	I0401 19:36:28.062556   70962 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 19:36:28.062569   70962 kubeadm.go:309] 
	I0401 19:36:28.062686   70962 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 19:36:28.062697   70962 kubeadm.go:309] 
	I0401 19:36:28.062727   70962 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 19:36:28.062805   70962 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 19:36:28.062872   70962 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 19:36:28.062886   70962 kubeadm.go:309] 
	I0401 19:36:28.062959   70962 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 19:36:28.062969   70962 kubeadm.go:309] 
	I0401 19:36:28.063050   70962 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 19:36:28.063061   70962 kubeadm.go:309] 
	I0401 19:36:28.063103   70962 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 19:36:28.063172   70962 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 19:36:28.063234   70962 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 19:36:28.063240   70962 kubeadm.go:309] 
	I0401 19:36:28.063337   70962 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 19:36:28.063440   70962 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 19:36:28.063453   70962 kubeadm.go:309] 
	I0401 19:36:28.063559   70962 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token x76swh.zbuhmc8jrh5hodf9 \
	I0401 19:36:28.063676   70962 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 \
	I0401 19:36:28.063725   70962 kubeadm.go:309] 	--control-plane 
	I0401 19:36:28.063734   70962 kubeadm.go:309] 
	I0401 19:36:28.063835   70962 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 19:36:28.063844   70962 kubeadm.go:309] 
	I0401 19:36:28.063955   70962 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token x76swh.zbuhmc8jrh5hodf9 \
	I0401 19:36:28.064092   70962 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 
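The join commands printed by kubeadm embed a bootstrap token and the CA public-key hash. Should that hash ever need to be re-derived on this control plane, the usual upstream recipe applies, pointed at the certificate directory named earlier in this init run (shown as a sketch):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'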
	I0401 19:36:28.064105   70962 cni.go:84] Creating CNI manager for ""
	I0401 19:36:28.064114   70962 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:36:28.065560   70962 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:36:28.505029   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:31.005436   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:28.066823   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:36:28.089595   70962 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
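The 457-byte payload copied here is the generated bridge CNI config; its contents are not reproduced in the log, but they can be inspected on the node, and the runtime's view of the CNI setup checked, with something like the following (illustrative; the crictl output layout is assumed):

    sudo cat /etc/cni/net.d/1-k8s.conflist
    sudo crictl info | grep -i -A2 cni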
	I0401 19:36:28.150074   70962 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 19:36:28.150195   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:28.150206   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-734648 minikube.k8s.io/updated_at=2024_04_01T19_36_28_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=default-k8s-diff-port-734648 minikube.k8s.io/primary=true
	I0401 19:36:28.494391   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:28.529148   70962 ops.go:34] apiserver oom_adj: -16
	I0401 19:36:28.994780   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:29.494976   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:29.994627   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:30.495192   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:30.995334   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:31.494861   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:31.994576   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:33.505264   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:35.506298   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:32.495185   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:32.995090   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:33.494755   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:33.994758   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:34.494609   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:34.995423   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:35.495219   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:35.994557   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:36.495175   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:36.994857   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:37.494725   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:37.994846   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:38.494687   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:38.994615   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:39.494929   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:39.994514   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:40.494838   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:40.994846   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:41.105036   70962 kubeadm.go:1107] duration metric: took 12.954907711s to wait for elevateKubeSystemPrivileges
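The repeated "kubectl get sa default" calls above act as a readiness gate: minikube keeps polling until the default ServiceAccount exists, following the minikube-rbac ClusterRoleBinding created a few lines earlier. Stripped to its essentials the loop is:

    until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done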
	W0401 19:36:41.105072   70962 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 19:36:41.105080   70962 kubeadm.go:393] duration metric: took 5m13.291890816s to StartCluster
	I0401 19:36:41.105098   70962 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:36:41.105193   70962 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:36:41.107226   70962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:36:41.107451   70962 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.145 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:36:41.109245   70962 out.go:177] * Verifying Kubernetes components...
	I0401 19:36:41.107543   70962 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 19:36:41.107682   70962 config.go:182] Loaded profile config "default-k8s-diff-port-734648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:36:41.110583   70962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:36:41.110596   70962 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-734648"
	I0401 19:36:41.110621   70962 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-734648"
	I0401 19:36:41.110620   70962 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-734648"
	I0401 19:36:41.110652   70962 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-734648"
	I0401 19:36:41.110588   70962 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-734648"
	W0401 19:36:41.110665   70962 addons.go:243] addon metrics-server should already be in state true
	I0401 19:36:41.110685   70962 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-734648"
	W0401 19:36:41.110699   70962 addons.go:243] addon storage-provisioner should already be in state true
	I0401 19:36:41.110700   70962 host.go:66] Checking if "default-k8s-diff-port-734648" exists ...
	I0401 19:36:41.110727   70962 host.go:66] Checking if "default-k8s-diff-port-734648" exists ...
	I0401 19:36:41.111032   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.111039   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.111062   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.111098   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.111126   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.111158   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.129376   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46657
	I0401 19:36:41.130833   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38623
	I0401 19:36:41.131158   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.131258   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.131761   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.131786   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.132119   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.132313   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.132437   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.132477   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:36:41.133129   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36213
	I0401 19:36:41.133449   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.133456   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.133871   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.133894   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.133990   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.134021   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.134159   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.134572   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.134609   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.143808   70962 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-734648"
	W0401 19:36:41.143829   70962 addons.go:243] addon default-storageclass should already be in state true
	I0401 19:36:41.143858   70962 host.go:66] Checking if "default-k8s-diff-port-734648" exists ...
	I0401 19:36:41.144202   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.144241   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.154009   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38703
	I0401 19:36:41.156112   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45449
	I0401 19:36:41.156579   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.157085   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.157112   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.157458   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.157631   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:36:41.157891   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.158593   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.158615   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.158924   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.159123   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:36:41.160683   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:36:41.162801   70962 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 19:36:41.164275   70962 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 19:36:41.164292   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 19:36:41.164310   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:36:41.162762   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:36:41.163321   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39643
	I0401 19:36:41.166161   70962 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:36:38.004666   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:40.005118   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:41.164866   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.167473   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.167806   70962 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:36:41.167833   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 19:36:41.167850   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:36:41.168056   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.168074   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.168145   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:36:41.168163   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.168194   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:36:41.168353   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:36:41.168429   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.168583   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:36:41.168723   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:36:41.169323   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.169374   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.170857   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.171269   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:36:41.171323   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.171412   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:36:41.171576   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:36:41.171723   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:36:41.171860   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:36:41.191280   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42133
	I0401 19:36:41.191576   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.192122   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.192152   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.192511   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.192673   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:36:41.194286   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:36:41.194528   70962 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 19:36:41.194546   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 19:36:41.194564   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:36:41.197639   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.198235   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:36:41.198259   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.198296   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:36:41.198491   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:36:41.198670   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:36:41.198857   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:36:41.308472   70962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:36:41.334121   70962 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-734648" to be "Ready" ...
	I0401 19:36:41.343898   70962 node_ready.go:49] node "default-k8s-diff-port-734648" has status "Ready":"True"
	I0401 19:36:41.343943   70962 node_ready.go:38] duration metric: took 9.780821ms for node "default-k8s-diff-port-734648" to be "Ready" ...
	I0401 19:36:41.343952   70962 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:36:41.352294   70962 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.362318   70962 pod_ready.go:92] pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:41.362345   70962 pod_ready.go:81] duration metric: took 10.020335ms for pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.362358   70962 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.367338   70962 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:41.367356   70962 pod_ready.go:81] duration metric: took 4.990987ms for pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.367364   70962 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.372379   70962 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:41.372401   70962 pod_ready.go:81] duration metric: took 5.030239ms for pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.372412   70962 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.377862   70962 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:41.377881   70962 pod_ready.go:81] duration metric: took 5.460968ms for pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.377891   70962 pod_ready.go:38] duration metric: took 33.929349ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:36:41.377915   70962 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:36:41.377965   70962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:36:41.396518   70962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:36:41.407024   70962 api_server.go:72] duration metric: took 299.545156ms to wait for apiserver process to appear ...
	I0401 19:36:41.407049   70962 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:36:41.407068   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:36:41.411429   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 200:
	ok
	I0401 19:36:41.412620   70962 api_server.go:141] control plane version: v1.29.3
	I0401 19:36:41.412640   70962 api_server.go:131] duration metric: took 5.58478ms to wait for apiserver health ...
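	The healthz probe above can be reproduced by hand against the same endpoint; a minimal sketch, noting that 8444 is this profile's non-default apiserver port and that the CA path below is an assumption (this run keeps its .minikube directory under a Jenkins workspace rather than the default location):

	# probe the endpoint the log checks; --cacert avoids -k if the profile CA is at hand
	curl --cacert ~/.minikube/ca.crt https://192.168.61.145:8444/healthz
	# or skip TLS verification entirely
	curl -k https://192.168.61.145:8444/healthz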
	I0401 19:36:41.412646   70962 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:36:41.426474   70962 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 19:36:41.426500   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 19:36:41.447003   70962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 19:36:41.470135   70962 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 19:36:41.470153   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 19:36:41.526684   70962 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:36:41.526710   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 19:36:41.540871   70962 system_pods.go:59] 4 kube-system pods found
	I0401 19:36:41.540894   70962 system_pods.go:61] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:41.540900   70962 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:41.540905   70962 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:41.540908   70962 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:41.540914   70962 system_pods.go:74] duration metric: took 128.262683ms to wait for pod list to return data ...
	I0401 19:36:41.540920   70962 default_sa.go:34] waiting for default service account to be created ...
	I0401 19:36:41.625507   70962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:36:41.750232   70962 default_sa.go:45] found service account: "default"
	I0401 19:36:41.750261   70962 default_sa.go:55] duration metric: took 209.334562ms for default service account to be created ...
	I0401 19:36:41.750273   70962 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 19:36:41.968623   70962 system_pods.go:86] 7 kube-system pods found
	I0401 19:36:41.968651   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Pending
	I0401 19:36:41.968657   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Pending
	I0401 19:36:41.968663   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:41.968669   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:41.968675   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:41.968683   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:36:41.968690   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:41.968712   70962 retry.go:31] will retry after 288.42332ms: missing components: kube-dns, kube-proxy
	I0401 19:36:42.231814   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.231848   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.231904   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.231925   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.232160   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Closing plugin on server side
	I0401 19:36:42.232161   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.232179   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.232187   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.232191   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Closing plugin on server side
	I0401 19:36:42.232199   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.232223   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.232235   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.232244   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.232255   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.232431   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.232478   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.232578   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Closing plugin on server side
	I0401 19:36:42.232612   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.232629   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.251515   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.251538   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.251795   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.251809   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.267102   70962 system_pods.go:86] 8 kube-system pods found
	I0401 19:36:42.267135   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:42.267148   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:42.267163   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:42.267181   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:42.267187   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:42.267196   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:36:42.267204   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:42.267222   70962 system_pods.go:89] "storage-provisioner" [8509e661-1b53-4018-b6b0-b6a5e242768d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:36:42.267244   70962 retry.go:31] will retry after 336.906399ms: missing components: kube-dns, kube-proxy
	I0401 19:36:42.632180   70962 system_pods.go:86] 9 kube-system pods found
	I0401 19:36:42.632212   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:42.632223   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:42.632232   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:42.632240   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:42.632247   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:42.632257   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:36:42.632264   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:42.632275   70962 system_pods.go:89] "metrics-server-57f55c9bc5-fj5x5" [e25fa51c-d80e-4ddc-898f-3b9903746537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:42.632289   70962 system_pods.go:89] "storage-provisioner" [8509e661-1b53-4018-b6b0-b6a5e242768d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:36:42.632313   70962 retry.go:31] will retry after 406.571029ms: missing components: kube-dns, kube-proxy
	I0401 19:36:42.739308   70962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.113759645s)
	I0401 19:36:42.739364   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.739383   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.739822   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.739842   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.739859   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Closing plugin on server side
	I0401 19:36:42.739867   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.739890   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.740171   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.740186   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.740198   70962 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-734648"
	I0401 19:36:42.742233   70962 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0401 19:36:42.743265   70962 addons.go:505] duration metric: took 1.635721448s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0401 19:36:43.053149   70962 system_pods.go:86] 9 kube-system pods found
	I0401 19:36:43.053183   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:43.053195   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:43.053205   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:43.053215   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:43.053223   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:43.053235   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:36:43.053240   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:43.053249   70962 system_pods.go:89] "metrics-server-57f55c9bc5-fj5x5" [e25fa51c-d80e-4ddc-898f-3b9903746537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:43.053258   70962 system_pods.go:89] "storage-provisioner" [8509e661-1b53-4018-b6b0-b6a5e242768d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:36:43.053275   70962 retry.go:31] will retry after 524.250739ms: missing components: kube-dns, kube-proxy
	I0401 19:36:43.591419   70962 system_pods.go:86] 9 kube-system pods found
	I0401 19:36:43.591451   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Running
	I0401 19:36:43.591463   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:43.591471   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:43.591480   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:43.591487   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:43.591493   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Running
	I0401 19:36:43.591498   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:43.591508   70962 system_pods.go:89] "metrics-server-57f55c9bc5-fj5x5" [e25fa51c-d80e-4ddc-898f-3b9903746537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:43.591517   70962 system_pods.go:89] "storage-provisioner" [8509e661-1b53-4018-b6b0-b6a5e242768d] Running
	I0401 19:36:43.591529   70962 system_pods.go:126] duration metric: took 1.841248999s to wait for k8s-apps to be running ...
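	The k8s-apps wait above can be approximated from the client side; a rough sketch, not the code path minikube uses, assuming the kubeconfig context carries the profile name and reusing the label selectors the waiter lists:

	kubectl --context default-k8s-diff-port-734648 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
	kubectl --context default-k8s-diff-port-734648 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=6m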
	I0401 19:36:43.591561   70962 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 19:36:43.591613   70962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:36:43.611873   70962 system_svc.go:56] duration metric: took 20.296001ms WaitForService to wait for kubelet
	I0401 19:36:43.611907   70962 kubeadm.go:576] duration metric: took 2.504430824s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:36:43.611930   70962 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:36:43.617697   70962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:36:43.617720   70962 node_conditions.go:123] node cpu capacity is 2
	I0401 19:36:43.617732   70962 node_conditions.go:105] duration metric: took 5.796357ms to run NodePressure ...
	I0401 19:36:43.617745   70962 start.go:240] waiting for startup goroutines ...
	I0401 19:36:43.617754   70962 start.go:245] waiting for cluster config update ...
	I0401 19:36:43.617765   70962 start.go:254] writing updated cluster config ...
	I0401 19:36:43.618023   70962 ssh_runner.go:195] Run: rm -f paused
	I0401 19:36:43.666581   70962 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0401 19:36:43.668685   70962 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-734648" cluster and "default" namespace by default
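	Once the profile reports Done, the enabled addons and the active context can be checked directly; a short sketch using the profile name from this run:

	# addons reported as enabled above: storage-provisioner, default-storageclass, metrics-server
	minikube -p default-k8s-diff-port-734648 addons list
	kubectl config current-context
	kubectl --context default-k8s-diff-port-734648 -n kube-system get pods -o wide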
	I0401 19:36:42.505149   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:45.003855   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:47.004247   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:49.504898   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:51.505403   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:54.005163   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:56.503395   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:58.503791   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:00.504001   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:02.504193   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:05.003540   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:07.003582   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:09.503975   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:12.005037   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:14.503460   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:16.504630   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:19.004307   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:21.004909   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:23.503286   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:25.503469   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:27.503520   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:30.004792   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:32.503693   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:35.005137   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:37.504848   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:39.504961   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:41.510644   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:44.004680   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:46.005118   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:51.561231   71168 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 19:37:51.561356   71168 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0401 19:37:51.563350   71168 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0401 19:37:51.563417   71168 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:37:51.563497   71168 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:37:51.563596   71168 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:37:51.563711   71168 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:37:51.563797   71168 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:37:51.565710   71168 out.go:204]   - Generating certificates and keys ...
	I0401 19:37:51.565809   71168 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:37:51.565908   71168 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:37:51.566051   71168 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:37:51.566136   71168 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:37:51.566230   71168 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:37:51.566325   71168 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:37:51.566402   71168 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:37:51.566464   71168 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:37:51.566580   71168 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:37:51.566688   71168 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:37:51.566727   71168 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:37:51.566774   71168 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:37:51.566822   71168 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:37:51.566917   71168 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:37:51.567001   71168 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:37:51.567068   71168 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:37:51.567210   71168 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:37:51.567314   71168 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:37:51.567371   71168 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:37:51.567473   71168 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:37:48.504708   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:51.005355   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:51.569285   71168 out.go:204]   - Booting up control plane ...
	I0401 19:37:51.569394   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:37:51.569498   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:37:51.569568   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:37:51.569661   71168 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:37:51.569802   71168 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:37:51.569866   71168 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0401 19:37:51.569957   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.570195   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.570287   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.570514   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.570589   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.570769   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.570859   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.571033   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.571134   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.571342   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.571351   71168 kubeadm.go:309] 
	I0401 19:37:51.571394   71168 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0401 19:37:51.571453   71168 kubeadm.go:309] 		timed out waiting for the condition
	I0401 19:37:51.571475   71168 kubeadm.go:309] 
	I0401 19:37:51.571521   71168 kubeadm.go:309] 	This error is likely caused by:
	I0401 19:37:51.571558   71168 kubeadm.go:309] 		- The kubelet is not running
	I0401 19:37:51.571676   71168 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 19:37:51.571687   71168 kubeadm.go:309] 
	I0401 19:37:51.571824   71168 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 19:37:51.571880   71168 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0401 19:37:51.571921   71168 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0401 19:37:51.571931   71168 kubeadm.go:309] 
	I0401 19:37:51.572077   71168 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 19:37:51.572198   71168 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 19:37:51.572209   71168 kubeadm.go:309] 
	I0401 19:37:51.572359   71168 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 19:37:51.572477   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 19:37:51.572576   71168 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0401 19:37:51.572676   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 19:37:51.572731   71168 kubeadm.go:309] 
	W0401 19:37:51.572793   71168 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0401 19:37:51.572851   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:37:52.428554   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:37:52.445151   71168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:37:52.456989   71168 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:37:52.457010   71168 kubeadm.go:156] found existing configuration files:
	
	I0401 19:37:52.457053   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:37:52.468305   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:37:52.468375   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:37:52.479305   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:37:52.489703   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:37:52.489753   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:37:52.501023   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:37:52.512418   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:37:52.512480   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:37:52.523850   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:37:52.534358   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:37:52.534425   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:37:52.546135   71168 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:37:52.779427   71168 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
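	The failed v1.20.0 init above points at the kubelet; the commands below simply collect the checks the log itself suggests, plus enabling the kubelet unit per the Service-Kubelet warning, to be run on the node (for example via minikube ssh):

	# per the warning, enable and inspect the kubelet unit
	sudo systemctl enable kubelet.service
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# list control-plane containers under cri-o, as the log suggests
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then inspect the failing container's logs (CONTAINERID is a placeholder)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID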
	I0401 19:37:52.997253   70284 pod_ready.go:81] duration metric: took 4m0.000092266s for pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace to be "Ready" ...
	E0401 19:37:52.997287   70284 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace to be "Ready" (will not retry!)
	I0401 19:37:52.997309   70284 pod_ready.go:38] duration metric: took 4m43.911595731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:37:52.997333   70284 kubeadm.go:591] duration metric: took 5m31.840082505s to restartPrimaryControlPlane
	W0401 19:37:52.997393   70284 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:37:52.997421   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:38:25.458760   70284 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.46129187s)
	I0401 19:38:25.458845   70284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:38:25.476633   70284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:38:25.487615   70284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:38:25.498590   70284 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:38:25.498616   70284 kubeadm.go:156] found existing configuration files:
	
	I0401 19:38:25.498701   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:38:25.509063   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:38:25.509128   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:38:25.519806   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:38:25.530433   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:38:25.530488   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:38:25.540979   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:38:25.550786   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:38:25.550847   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:38:25.561979   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:38:25.571832   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:38:25.571898   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:38:25.582501   70284 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:38:25.646956   70284 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.0
	I0401 19:38:25.647046   70284 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:38:25.825328   70284 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:38:25.825459   70284 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:38:25.825574   70284 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:38:26.066201   70284 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:38:26.069071   70284 out.go:204]   - Generating certificates and keys ...
	I0401 19:38:26.069170   70284 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:38:26.069260   70284 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:38:26.069402   70284 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:38:26.069493   70284 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:38:26.069588   70284 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:38:26.069703   70284 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:38:26.069765   70284 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:38:26.069822   70284 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:38:26.069986   70284 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:38:26.070644   70284 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:38:26.071149   70284 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:38:26.071308   70284 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:38:26.204651   70284 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:38:26.368926   70284 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 19:38:26.586004   70284 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:38:26.710851   70284 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:38:26.858015   70284 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:38:26.858741   70284 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:38:26.863879   70284 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:38:26.865794   70284 out.go:204]   - Booting up control plane ...
	I0401 19:38:26.865898   70284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:38:26.865984   70284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:38:26.866081   70284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:38:26.886171   70284 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:38:26.887118   70284 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:38:26.887177   70284 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:38:27.021053   70284 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 19:38:27.021142   70284 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0401 19:38:28.023462   70284 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002303634s
	I0401 19:38:28.023549   70284 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 19:38:34.026967   70284 kubeadm.go:309] [api-check] The API server is healthy after 6.003391014s
	I0401 19:38:34.044095   70284 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 19:38:34.061716   70284 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 19:38:34.092708   70284 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 19:38:34.093037   70284 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-472858 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 19:38:34.111758   70284 kubeadm.go:309] [bootstrap-token] Using token: 45cmca.rj16278sw3ueq3us
	I0401 19:38:34.113211   70284 out.go:204]   - Configuring RBAC rules ...
	I0401 19:38:34.113333   70284 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 19:38:34.122292   70284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 19:38:34.133114   70284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 19:38:34.138441   70284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 19:38:34.143964   70284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 19:38:34.148675   70284 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 19:38:34.438167   70284 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 19:38:34.885250   70284 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 19:38:35.439990   70284 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 19:38:35.441439   70284 kubeadm.go:309] 
	I0401 19:38:35.441532   70284 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 19:38:35.441545   70284 kubeadm.go:309] 
	I0401 19:38:35.441659   70284 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 19:38:35.441690   70284 kubeadm.go:309] 
	I0401 19:38:35.441752   70284 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 19:38:35.441845   70284 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 19:38:35.441930   70284 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 19:38:35.441938   70284 kubeadm.go:309] 
	I0401 19:38:35.442014   70284 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 19:38:35.442028   70284 kubeadm.go:309] 
	I0401 19:38:35.442067   70284 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 19:38:35.442073   70284 kubeadm.go:309] 
	I0401 19:38:35.442120   70284 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 19:38:35.442186   70284 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 19:38:35.442295   70284 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 19:38:35.442307   70284 kubeadm.go:309] 
	I0401 19:38:35.442426   70284 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 19:38:35.442552   70284 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 19:38:35.442565   70284 kubeadm.go:309] 
	I0401 19:38:35.442643   70284 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 45cmca.rj16278sw3ueq3us \
	I0401 19:38:35.442766   70284 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 \
	I0401 19:38:35.442803   70284 kubeadm.go:309] 	--control-plane 
	I0401 19:38:35.442813   70284 kubeadm.go:309] 
	I0401 19:38:35.442922   70284 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 19:38:35.442936   70284 kubeadm.go:309] 
	I0401 19:38:35.443008   70284 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 45cmca.rj16278sw3ueq3us \
	I0401 19:38:35.443097   70284 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 
	I0401 19:38:35.443436   70284 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:38:35.443530   70284 cni.go:84] Creating CNI manager for ""
	I0401 19:38:35.443546   70284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:38:35.445089   70284 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:38:35.446328   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:38:35.459788   70284 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
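	The 457-byte bridge conflist written above can be inspected on the node; a one-line sketch using the path from the log:

	sudo cat /etc/cni/net.d/1-k8s.conflist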
	I0401 19:38:35.486202   70284 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 19:38:35.486300   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:35.486308   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-472858 minikube.k8s.io/updated_at=2024_04_01T19_38_35_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=no-preload-472858 minikube.k8s.io/primary=true
	I0401 19:38:35.700677   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:35.731567   70284 ops.go:34] apiserver oom_adj: -16
	I0401 19:38:36.200955   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:36.701003   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:37.201632   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:37.700719   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:38.201316   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:38.701334   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:39.201609   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:39.701034   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:40.201771   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:40.700786   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:41.201750   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:41.701709   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:42.201682   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:42.700838   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:43.201123   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:43.701587   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:44.200860   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:44.700795   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:45.200850   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:45.701273   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:46.201701   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:46.701450   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:47.201496   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:47.701351   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:47.800239   70284 kubeadm.go:1107] duration metric: took 12.313994383s to wait for elevateKubeSystemPrivileges
	W0401 19:38:47.800287   70284 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 19:38:47.800298   70284 kubeadm.go:393] duration metric: took 6m26.705086714s to StartCluster
	I0401 19:38:47.800320   70284 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:38:47.800410   70284 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:38:47.802818   70284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:38:47.803132   70284 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:38:47.805445   70284 out.go:177] * Verifying Kubernetes components...
	I0401 19:38:47.803273   70284 config.go:182] Loaded profile config "no-preload-472858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0401 19:38:47.803252   70284 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 19:38:47.806734   70284 addons.go:69] Setting storage-provisioner=true in profile "no-preload-472858"
	I0401 19:38:47.806761   70284 addons.go:69] Setting default-storageclass=true in profile "no-preload-472858"
	I0401 19:38:47.806774   70284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:38:47.806777   70284 addons.go:69] Setting metrics-server=true in profile "no-preload-472858"
	I0401 19:38:47.806802   70284 addons.go:234] Setting addon metrics-server=true in "no-preload-472858"
	W0401 19:38:47.806815   70284 addons.go:243] addon metrics-server should already be in state true
	I0401 19:38:47.806850   70284 host.go:66] Checking if "no-preload-472858" exists ...
	I0401 19:38:47.806802   70284 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-472858"
	I0401 19:38:47.806768   70284 addons.go:234] Setting addon storage-provisioner=true in "no-preload-472858"
	W0401 19:38:47.807229   70284 addons.go:243] addon storage-provisioner should already be in state true
	I0401 19:38:47.807257   70284 host.go:66] Checking if "no-preload-472858" exists ...
	I0401 19:38:47.807289   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.807332   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.807340   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.807366   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.807620   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.807690   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.823665   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38305
	I0401 19:38:47.823684   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35487
	I0401 19:38:47.824174   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.824205   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.824709   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.824732   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.824838   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.824867   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.825094   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.825276   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.825700   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.825746   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.825844   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.825866   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.826415   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38845
	I0401 19:38:47.826845   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.827305   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.827330   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.827800   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.828004   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:38:47.831735   70284 addons.go:234] Setting addon default-storageclass=true in "no-preload-472858"
	W0401 19:38:47.831760   70284 addons.go:243] addon default-storageclass should already be in state true
	I0401 19:38:47.831791   70284 host.go:66] Checking if "no-preload-472858" exists ...
	I0401 19:38:47.832170   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.832218   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.842050   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42037
	I0401 19:38:47.842479   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.842963   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.842983   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.843354   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.843513   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:38:47.845360   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:38:47.845430   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33357
	I0401 19:38:47.847622   70284 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:38:47.845959   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.847568   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38785
	I0401 19:38:47.849255   70284 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:38:47.849283   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 19:38:47.849303   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:38:47.849356   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.849524   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.849536   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.850173   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.850228   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.850238   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.850362   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:38:47.851206   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.851773   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.851803   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.852404   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:38:47.854167   70284 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 19:38:47.853141   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.853926   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:38:47.855729   70284 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 19:38:47.855746   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 19:38:47.855763   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:38:47.855728   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:38:47.855809   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.855854   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:38:47.856000   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:38:47.856160   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:38:47.858726   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.859782   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:38:47.859826   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.859948   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:38:47.860138   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:38:47.860310   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:38:47.860593   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:38:47.870182   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34517
	I0401 19:38:47.870616   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.871182   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.871203   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.871561   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.871947   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:38:47.873606   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:38:47.873931   70284 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 19:38:47.873949   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 19:38:47.873967   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:38:47.876826   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.877259   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:38:47.877286   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.877389   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:38:47.877672   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:38:47.877816   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:38:47.877974   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:38:48.053731   70284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:38:48.081160   70284 node_ready.go:35] waiting up to 6m0s for node "no-preload-472858" to be "Ready" ...
	I0401 19:38:48.107976   70284 node_ready.go:49] node "no-preload-472858" has status "Ready":"True"
	I0401 19:38:48.107998   70284 node_ready.go:38] duration metric: took 26.793115ms for node "no-preload-472858" to be "Ready" ...
	I0401 19:38:48.108009   70284 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:38:48.115968   70284 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.158349   70284 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 19:38:48.158383   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 19:38:48.166047   70284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 19:38:48.181902   70284 pod_ready.go:92] pod "etcd-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:38:48.181922   70284 pod_ready.go:81] duration metric: took 65.920299ms for pod "etcd-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.181935   70284 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.199372   70284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:38:48.232110   70284 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 19:38:48.232140   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 19:38:48.251891   70284 pod_ready.go:92] pod "kube-apiserver-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:38:48.251914   70284 pod_ready.go:81] duration metric: took 69.970077ms for pod "kube-apiserver-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.251929   70284 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.309605   70284 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:38:48.309627   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 19:38:48.325907   70284 pod_ready.go:92] pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:38:48.325928   70284 pod_ready.go:81] duration metric: took 73.991711ms for pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.325938   70284 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.373418   70284 pod_ready.go:92] pod "kube-scheduler-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:38:48.373448   70284 pod_ready.go:81] duration metric: took 47.503272ms for pod "kube-scheduler-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.373456   70284 pod_ready.go:38] duration metric: took 265.436317ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:38:48.373479   70284 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:38:48.373543   70284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:38:48.396444   70284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:38:48.564838   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.564860   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.565180   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:48.565197   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.565227   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.565247   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.565258   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.565489   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.565506   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.579332   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.579355   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.579599   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:48.579637   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.579645   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.884887   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.884920   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.884938   70284 api_server.go:72] duration metric: took 1.08176251s to wait for apiserver process to appear ...
	I0401 19:38:48.884958   70284 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:38:48.885018   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:38:48.885232   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.885252   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.885260   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.885269   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.885236   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:48.885519   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.887182   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.885555   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:48.895737   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 200:
	ok
	I0401 19:38:48.899521   70284 api_server.go:141] control plane version: v1.30.0-rc.0
	I0401 19:38:48.899539   70284 api_server.go:131] duration metric: took 14.574989ms to wait for apiserver health ...
	I0401 19:38:48.899547   70284 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:38:48.914064   70284 system_pods.go:59] 8 kube-system pods found
	I0401 19:38:48.914090   70284 system_pods.go:61] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:48.914106   70284 system_pods.go:61] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:48.914112   70284 system_pods.go:61] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:48.914117   70284 system_pods.go:61] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:48.914122   70284 system_pods.go:61] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:48.914126   70284 system_pods.go:61] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:48.914134   70284 system_pods.go:61] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:48.914138   70284 system_pods.go:61] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending
	I0401 19:38:48.914146   70284 system_pods.go:74] duration metric: took 14.594359ms to wait for pod list to return data ...
	I0401 19:38:48.914156   70284 default_sa.go:34] waiting for default service account to be created ...
	I0401 19:38:48.924790   70284 default_sa.go:45] found service account: "default"
	I0401 19:38:48.924814   70284 default_sa.go:55] duration metric: took 10.649887ms for default service account to be created ...
	I0401 19:38:48.924825   70284 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 19:38:48.930993   70284 system_pods.go:86] 8 kube-system pods found
	I0401 19:38:48.931020   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:48.931037   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:48.931047   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:48.931056   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:48.931066   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:48.931074   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:48.931089   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:48.931098   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:48.931117   70284 retry.go:31] will retry after 297.45527ms: missing components: kube-dns, kube-proxy
	I0401 19:38:49.123999   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:49.124019   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:49.124344   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:49.124394   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:49.124406   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:49.124414   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:49.124356   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:49.124627   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:49.124661   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:49.124677   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:49.124690   70284 addons.go:470] Verifying addon metrics-server=true in "no-preload-472858"
	I0401 19:38:49.127415   70284 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0401 19:38:49.129047   70284 addons.go:505] duration metric: took 1.325796036s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0401 19:38:49.236094   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:49.236127   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.236136   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.236145   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:49.236152   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:49.236159   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:49.236168   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:49.236175   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:49.236185   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:49.236198   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:49.236218   70284 retry.go:31] will retry after 287.299528ms: missing components: kube-dns, kube-proxy
	I0401 19:38:49.530606   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:49.530643   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.530654   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.530663   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:49.530670   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:49.530678   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:49.530687   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:49.530697   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:49.530711   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:49.530721   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:49.530744   70284 retry.go:31] will retry after 435.286919ms: missing components: kube-dns, kube-proxy
	I0401 19:38:49.974049   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:49.974090   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.974103   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.974113   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:49.974121   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:49.974128   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:49.974142   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:49.974153   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:49.974168   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:49.974181   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:49.974203   70284 retry.go:31] will retry after 577.959209ms: missing components: kube-dns, kube-proxy
	I0401 19:38:50.558750   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:50.558780   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:50.558787   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:50.558795   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:50.558805   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:50.558812   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:50.558820   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:50.558833   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:50.558840   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:50.558846   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:50.558863   70284 retry.go:31] will retry after 723.380101ms: missing components: kube-dns, kube-proxy
	I0401 19:38:51.291450   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:51.291487   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:51.291498   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Running
	I0401 19:38:51.291508   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:51.291514   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:51.291521   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:51.291527   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Running
	I0401 19:38:51.291532   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:51.291543   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:51.291551   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Running
	I0401 19:38:51.291559   70284 system_pods.go:126] duration metric: took 2.366728733s to wait for k8s-apps to be running ...
	I0401 19:38:51.291576   70284 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 19:38:51.291622   70284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:38:51.310224   70284 system_svc.go:56] duration metric: took 18.63923ms WaitForService to wait for kubelet
	I0401 19:38:51.310250   70284 kubeadm.go:576] duration metric: took 3.50708191s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:38:51.310269   70284 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:38:51.312899   70284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:38:51.312919   70284 node_conditions.go:123] node cpu capacity is 2
	I0401 19:38:51.312930   70284 node_conditions.go:105] duration metric: took 2.654739ms to run NodePressure ...
	I0401 19:38:51.312945   70284 start.go:240] waiting for startup goroutines ...
	I0401 19:38:51.312958   70284 start.go:245] waiting for cluster config update ...
	I0401 19:38:51.312985   70284 start.go:254] writing updated cluster config ...
	I0401 19:38:51.313269   70284 ssh_runner.go:195] Run: rm -f paused
	I0401 19:38:51.365041   70284 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.0 (minor skew: 1)
	I0401 19:38:51.367173   70284 out.go:177] * Done! kubectl is now configured to use "no-preload-472858" cluster and "default" namespace by default
	I0401 19:39:48.856665   71168 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 19:39:48.856779   71168 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0401 19:39:48.858840   71168 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0401 19:39:48.858896   71168 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:39:48.858987   71168 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:39:48.859122   71168 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:39:48.859222   71168 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:39:48.859314   71168 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:39:48.861104   71168 out.go:204]   - Generating certificates and keys ...
	I0401 19:39:48.861202   71168 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:39:48.861277   71168 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:39:48.861381   71168 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:39:48.861492   71168 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:39:48.861596   71168 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:39:48.861699   71168 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:39:48.861791   71168 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:39:48.861897   71168 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:39:48.862009   71168 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:39:48.862118   71168 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:39:48.862176   71168 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:39:48.862260   71168 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:39:48.862338   71168 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:39:48.862420   71168 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:39:48.862480   71168 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:39:48.862527   71168 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:39:48.862618   71168 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:39:48.862693   71168 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:39:48.862734   71168 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:39:48.862804   71168 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:39:48.864199   71168 out.go:204]   - Booting up control plane ...
	I0401 19:39:48.864291   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:39:48.864359   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:39:48.864420   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:39:48.864504   71168 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:39:48.864712   71168 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:39:48.864788   71168 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0401 19:39:48.864871   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865069   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.865153   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865344   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.865453   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865674   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.865755   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865989   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.866095   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.866269   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.866285   71168 kubeadm.go:309] 
	I0401 19:39:48.866343   71168 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0401 19:39:48.866402   71168 kubeadm.go:309] 		timed out waiting for the condition
	I0401 19:39:48.866414   71168 kubeadm.go:309] 
	I0401 19:39:48.866458   71168 kubeadm.go:309] 	This error is likely caused by:
	I0401 19:39:48.866506   71168 kubeadm.go:309] 		- The kubelet is not running
	I0401 19:39:48.866651   71168 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 19:39:48.866665   71168 kubeadm.go:309] 
	I0401 19:39:48.866816   71168 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 19:39:48.866865   71168 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0401 19:39:48.866895   71168 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0401 19:39:48.866901   71168 kubeadm.go:309] 
	I0401 19:39:48.866989   71168 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 19:39:48.867061   71168 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 19:39:48.867070   71168 kubeadm.go:309] 
	I0401 19:39:48.867194   71168 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 19:39:48.867327   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 19:39:48.867417   71168 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0401 19:39:48.867526   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 19:39:48.867555   71168 kubeadm.go:309] 
	I0401 19:39:48.867633   71168 kubeadm.go:393] duration metric: took 7m58.404831893s to StartCluster
	I0401 19:39:48.867702   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:39:48.867764   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:39:48.922329   71168 cri.go:89] found id: ""
	I0401 19:39:48.922359   71168 logs.go:276] 0 containers: []
	W0401 19:39:48.922369   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:39:48.922377   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:39:48.922435   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:39:48.966212   71168 cri.go:89] found id: ""
	I0401 19:39:48.966235   71168 logs.go:276] 0 containers: []
	W0401 19:39:48.966243   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:39:48.966248   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:39:48.966309   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:39:49.015141   71168 cri.go:89] found id: ""
	I0401 19:39:49.015171   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.015182   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:39:49.015189   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:39:49.015249   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:39:49.053042   71168 cri.go:89] found id: ""
	I0401 19:39:49.053067   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.053077   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:39:49.053085   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:39:49.053144   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:39:49.093880   71168 cri.go:89] found id: ""
	I0401 19:39:49.093906   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.093914   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:39:49.093923   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:39:49.093976   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:39:49.129730   71168 cri.go:89] found id: ""
	I0401 19:39:49.129752   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.129760   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:39:49.129766   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:39:49.129818   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:39:49.171075   71168 cri.go:89] found id: ""
	I0401 19:39:49.171107   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.171118   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:39:49.171125   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:39:49.171204   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:39:49.208279   71168 cri.go:89] found id: ""
	I0401 19:39:49.208308   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.208319   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:39:49.208330   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:39:49.208345   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:39:49.294128   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:39:49.294148   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:39:49.294162   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:39:49.400930   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:39:49.400963   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:39:49.443111   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:39:49.443140   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:39:49.501382   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:39:49.501417   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0401 19:39:49.516418   71168 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0401 19:39:49.516461   71168 out.go:239] * 
	W0401 19:39:49.516521   71168 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 19:39:49.516591   71168 out.go:239] * 
	W0401 19:39:49.517377   71168 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 19:39:49.520389   71168 out.go:177] 
	W0401 19:39:49.521593   71168 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 19:39:49.521639   71168 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0401 19:39:49.521686   71168 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0401 19:39:49.523181   71168 out.go:177] 
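The failed start above ends with minikube's own hint: check 'journalctl -xeu kubelet' and retry with --extra-config=kubelet.cgroup-driver=systemd. A minimal follow-up sketch along those lines, assuming CRI-O as the container runtime (as in the logs below) and using <profile> as a placeholder for the failing cluster's profile name:

	# From the host: open a shell on the node of the failing profile
	minikube ssh -p <profile>

	# On the node: confirm whether the kubelet is running and why it exited
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100

	# Back on the host: retry the start with the cgroup driver pinned to systemd,
	# as the suggestion in the log proposes
	minikube start -p <profile> --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd

Whether the systemd cgroup-driver override resolves this particular timeout is not established by this run; it is only the hypothesis the log itself points at (see the related issue linked above).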
	
	
	==> CRI-O <==
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.807686470Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712000745807652839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82da4206-94b6-4b7d-aa9b-b5b37cb61b45 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.809972862Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=52fa0f63-446b-4c86-ad7b-e6cbd0fb5817 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.810084031Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=52fa0f63-446b-4c86-ad7b-e6cbd0fb5817 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.810386988Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6dc0b65d110410481e29207b68fd411d2bd22658f09549893d66d3baea1811b3,PodSandboxId:cd511767fbb1ca284eb254d85f510c9b24e116139b793576308717b0db582200,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000202977043094,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ws9cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65660abf-9856-4df4-a07b-854cfd8e3fc6,},Annotations:map[string]string{io.kubernetes.container.hash: 19f45f1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169206bebd575d2b244c81fa4c7a04e2731c4120950cb9682db1ac25ecb157eb,PodSandboxId:4a2085bf15f4a78576d864f29f817138f2316d8fe75dbf4b18b7cb9dc613914f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712000202908783013,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8wrc,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3,},Annotations:map[string]string{io.kubernetes.container.hash: 4637946c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43690c88cc3da8e174d4465cd9001ba1e623e51cabaadd6e11de58cc57579c5c,PodSandboxId:56e5910ae49204b21ab793599a00718ba2ba59d72ac3342752d4187443784cf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712000202848201931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 8509e661-1b53-4018-b6b0-b6a5e242768d,},Annotations:map[string]string{io.kubernetes.container.hash: d51dde38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd8b1e6605269a97ce86dc1d3da6272b70b139eaffac1d261e5997ce76baa3d8,PodSandboxId:d50186ea9ce7d412a704fb1b828fe13ffe09f06e259559c513465737817fcefd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000202820765184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lwsms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f432161-c5e3-42fa-8857-
8e61959511b0,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfcc750,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84d63bce9959717a8c3ae86d594587c1e2f33bbff95b4d3e917aa026ef54971b,PodSandboxId:53df00131c7c4f8241bdfcc68ca2a5d6f5f054e4f54812dbc8ed0427699818ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:171200018163675350
8,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1a481b9900ab0be9d25c3fe5e5d2391,},Annotations:map[string]string{io.kubernetes.container.hash: 905d1f56,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae9f91816078e0969df5e5c46a0ddfcaa977ab2e24d9b86467993539907c542,PodSandboxId:503c77c8e91fcb1ba507756e6279d3c224f74cc45e1a5b4b6556691b49be7b19,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712000181632649887,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6daf38c602a88c1e0fb7f5442cfff11,},Annotations:map[string]string{io.kubernetes.container.hash: 99af7f03,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2008ec87c0a6ade1d9facfeca980ba3877ca7b17ab2487c9f2ac1a6ae724592f,PodSandboxId:54c30c5b21cb68ff7ecc0a15a3796630513b6b8babd4136595b48a15c8c0e46a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712000181575330253,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e266be05bf5ad064ffb4f6640d02a4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3afe593e5630743da819e3ffa3d83347db0ea26c75f9962f53299aeb7908971,PodSandboxId:ccbdd6f2242b983fb5c3d9b66cd328655b1df693bebd4d3d791de4ccee015de0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712000181516361047,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9d8710c71f52a6a07b9c8992a48c4ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703de106af68a16664634902a8a17d7cab4162929e3bbb227023c02a54aa2ccb,PodSandboxId:3a5d118f886790f3832a5749b7c9c52e926c08a30a4f2bdfe8e9d95fc72d5608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711999890335692084,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1a481b9900ab0be9d25c3fe5e5d2391,},Annotations:map[string]string{io.kubernetes.container.hash: 905d1f56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=52fa0f63-446b-4c86-ad7b-e6cbd0fb5817 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.854740104Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=742dc2b7-3e39-4e40-9fe7-ea6cdd7247bc name=/runtime.v1.RuntimeService/Version
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.854817273Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=742dc2b7-3e39-4e40-9fe7-ea6cdd7247bc name=/runtime.v1.RuntimeService/Version
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.856060230Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8967df40-8da8-4028-9ba0-fb3e2741c104 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.856464052Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712000745856440630,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8967df40-8da8-4028-9ba0-fb3e2741c104 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.857268475Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7af7b16e-4df7-42bc-9e05-4c6d6fe4f00c name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.857360641Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7af7b16e-4df7-42bc-9e05-4c6d6fe4f00c name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.857555232Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6dc0b65d110410481e29207b68fd411d2bd22658f09549893d66d3baea1811b3,PodSandboxId:cd511767fbb1ca284eb254d85f510c9b24e116139b793576308717b0db582200,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000202977043094,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ws9cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65660abf-9856-4df4-a07b-854cfd8e3fc6,},Annotations:map[string]string{io.kubernetes.container.hash: 19f45f1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169206bebd575d2b244c81fa4c7a04e2731c4120950cb9682db1ac25ecb157eb,PodSandboxId:4a2085bf15f4a78576d864f29f817138f2316d8fe75dbf4b18b7cb9dc613914f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712000202908783013,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8wrc,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3,},Annotations:map[string]string{io.kubernetes.container.hash: 4637946c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43690c88cc3da8e174d4465cd9001ba1e623e51cabaadd6e11de58cc57579c5c,PodSandboxId:56e5910ae49204b21ab793599a00718ba2ba59d72ac3342752d4187443784cf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712000202848201931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 8509e661-1b53-4018-b6b0-b6a5e242768d,},Annotations:map[string]string{io.kubernetes.container.hash: d51dde38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd8b1e6605269a97ce86dc1d3da6272b70b139eaffac1d261e5997ce76baa3d8,PodSandboxId:d50186ea9ce7d412a704fb1b828fe13ffe09f06e259559c513465737817fcefd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000202820765184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lwsms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f432161-c5e3-42fa-8857-
8e61959511b0,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfcc750,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84d63bce9959717a8c3ae86d594587c1e2f33bbff95b4d3e917aa026ef54971b,PodSandboxId:53df00131c7c4f8241bdfcc68ca2a5d6f5f054e4f54812dbc8ed0427699818ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:171200018163675350
8,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1a481b9900ab0be9d25c3fe5e5d2391,},Annotations:map[string]string{io.kubernetes.container.hash: 905d1f56,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae9f91816078e0969df5e5c46a0ddfcaa977ab2e24d9b86467993539907c542,PodSandboxId:503c77c8e91fcb1ba507756e6279d3c224f74cc45e1a5b4b6556691b49be7b19,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712000181632649887,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6daf38c602a88c1e0fb7f5442cfff11,},Annotations:map[string]string{io.kubernetes.container.hash: 99af7f03,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2008ec87c0a6ade1d9facfeca980ba3877ca7b17ab2487c9f2ac1a6ae724592f,PodSandboxId:54c30c5b21cb68ff7ecc0a15a3796630513b6b8babd4136595b48a15c8c0e46a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712000181575330253,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e266be05bf5ad064ffb4f6640d02a4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3afe593e5630743da819e3ffa3d83347db0ea26c75f9962f53299aeb7908971,PodSandboxId:ccbdd6f2242b983fb5c3d9b66cd328655b1df693bebd4d3d791de4ccee015de0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712000181516361047,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9d8710c71f52a6a07b9c8992a48c4ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703de106af68a16664634902a8a17d7cab4162929e3bbb227023c02a54aa2ccb,PodSandboxId:3a5d118f886790f3832a5749b7c9c52e926c08a30a4f2bdfe8e9d95fc72d5608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711999890335692084,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1a481b9900ab0be9d25c3fe5e5d2391,},Annotations:map[string]string{io.kubernetes.container.hash: 905d1f56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7af7b16e-4df7-42bc-9e05-4c6d6fe4f00c name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.905474591Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b4438959-0bfe-4454-a8fd-eef43a783e93 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.905751727Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b4438959-0bfe-4454-a8fd-eef43a783e93 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.907406943Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eb9a389f-cff9-4790-95c6-2d36fbc5c220 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.907830371Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712000745907805380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb9a389f-cff9-4790-95c6-2d36fbc5c220 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.908499241Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03d3308a-5a2a-4770-8472-4da7f86047ca name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.908603177Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03d3308a-5a2a-4770-8472-4da7f86047ca name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.909029364Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6dc0b65d110410481e29207b68fd411d2bd22658f09549893d66d3baea1811b3,PodSandboxId:cd511767fbb1ca284eb254d85f510c9b24e116139b793576308717b0db582200,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000202977043094,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ws9cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65660abf-9856-4df4-a07b-854cfd8e3fc6,},Annotations:map[string]string{io.kubernetes.container.hash: 19f45f1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169206bebd575d2b244c81fa4c7a04e2731c4120950cb9682db1ac25ecb157eb,PodSandboxId:4a2085bf15f4a78576d864f29f817138f2316d8fe75dbf4b18b7cb9dc613914f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712000202908783013,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8wrc,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3,},Annotations:map[string]string{io.kubernetes.container.hash: 4637946c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43690c88cc3da8e174d4465cd9001ba1e623e51cabaadd6e11de58cc57579c5c,PodSandboxId:56e5910ae49204b21ab793599a00718ba2ba59d72ac3342752d4187443784cf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712000202848201931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 8509e661-1b53-4018-b6b0-b6a5e242768d,},Annotations:map[string]string{io.kubernetes.container.hash: d51dde38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd8b1e6605269a97ce86dc1d3da6272b70b139eaffac1d261e5997ce76baa3d8,PodSandboxId:d50186ea9ce7d412a704fb1b828fe13ffe09f06e259559c513465737817fcefd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000202820765184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lwsms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f432161-c5e3-42fa-8857-
8e61959511b0,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfcc750,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84d63bce9959717a8c3ae86d594587c1e2f33bbff95b4d3e917aa026ef54971b,PodSandboxId:53df00131c7c4f8241bdfcc68ca2a5d6f5f054e4f54812dbc8ed0427699818ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:171200018163675350
8,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1a481b9900ab0be9d25c3fe5e5d2391,},Annotations:map[string]string{io.kubernetes.container.hash: 905d1f56,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae9f91816078e0969df5e5c46a0ddfcaa977ab2e24d9b86467993539907c542,PodSandboxId:503c77c8e91fcb1ba507756e6279d3c224f74cc45e1a5b4b6556691b49be7b19,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712000181632649887,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6daf38c602a88c1e0fb7f5442cfff11,},Annotations:map[string]string{io.kubernetes.container.hash: 99af7f03,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2008ec87c0a6ade1d9facfeca980ba3877ca7b17ab2487c9f2ac1a6ae724592f,PodSandboxId:54c30c5b21cb68ff7ecc0a15a3796630513b6b8babd4136595b48a15c8c0e46a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712000181575330253,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e266be05bf5ad064ffb4f6640d02a4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3afe593e5630743da819e3ffa3d83347db0ea26c75f9962f53299aeb7908971,PodSandboxId:ccbdd6f2242b983fb5c3d9b66cd328655b1df693bebd4d3d791de4ccee015de0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712000181516361047,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9d8710c71f52a6a07b9c8992a48c4ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703de106af68a16664634902a8a17d7cab4162929e3bbb227023c02a54aa2ccb,PodSandboxId:3a5d118f886790f3832a5749b7c9c52e926c08a30a4f2bdfe8e9d95fc72d5608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711999890335692084,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1a481b9900ab0be9d25c3fe5e5d2391,},Annotations:map[string]string{io.kubernetes.container.hash: 905d1f56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=03d3308a-5a2a-4770-8472-4da7f86047ca name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.952505413Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f0b0e50c-609a-4686-b6b1-5dee3955c39f name=/runtime.v1.RuntimeService/Version
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.952674678Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f0b0e50c-609a-4686-b6b1-5dee3955c39f name=/runtime.v1.RuntimeService/Version
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.954184076Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9cb4ede-cd46-42e9-b32b-5a94002ef9bc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.954578805Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712000745954557761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9cb4ede-cd46-42e9-b32b-5a94002ef9bc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.955240895Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a6a7488-5c03-4b50-9ad0-42d446b721e1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.955330896Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a6a7488-5c03-4b50-9ad0-42d446b721e1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:45:45 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:45:45.955525220Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6dc0b65d110410481e29207b68fd411d2bd22658f09549893d66d3baea1811b3,PodSandboxId:cd511767fbb1ca284eb254d85f510c9b24e116139b793576308717b0db582200,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000202977043094,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ws9cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65660abf-9856-4df4-a07b-854cfd8e3fc6,},Annotations:map[string]string{io.kubernetes.container.hash: 19f45f1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169206bebd575d2b244c81fa4c7a04e2731c4120950cb9682db1ac25ecb157eb,PodSandboxId:4a2085bf15f4a78576d864f29f817138f2316d8fe75dbf4b18b7cb9dc613914f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712000202908783013,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8wrc,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3,},Annotations:map[string]string{io.kubernetes.container.hash: 4637946c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43690c88cc3da8e174d4465cd9001ba1e623e51cabaadd6e11de58cc57579c5c,PodSandboxId:56e5910ae49204b21ab793599a00718ba2ba59d72ac3342752d4187443784cf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712000202848201931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 8509e661-1b53-4018-b6b0-b6a5e242768d,},Annotations:map[string]string{io.kubernetes.container.hash: d51dde38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd8b1e6605269a97ce86dc1d3da6272b70b139eaffac1d261e5997ce76baa3d8,PodSandboxId:d50186ea9ce7d412a704fb1b828fe13ffe09f06e259559c513465737817fcefd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000202820765184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lwsms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f432161-c5e3-42fa-8857-
8e61959511b0,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfcc750,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84d63bce9959717a8c3ae86d594587c1e2f33bbff95b4d3e917aa026ef54971b,PodSandboxId:53df00131c7c4f8241bdfcc68ca2a5d6f5f054e4f54812dbc8ed0427699818ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:171200018163675350
8,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1a481b9900ab0be9d25c3fe5e5d2391,},Annotations:map[string]string{io.kubernetes.container.hash: 905d1f56,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae9f91816078e0969df5e5c46a0ddfcaa977ab2e24d9b86467993539907c542,PodSandboxId:503c77c8e91fcb1ba507756e6279d3c224f74cc45e1a5b4b6556691b49be7b19,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712000181632649887,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6daf38c602a88c1e0fb7f5442cfff11,},Annotations:map[string]string{io.kubernetes.container.hash: 99af7f03,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2008ec87c0a6ade1d9facfeca980ba3877ca7b17ab2487c9f2ac1a6ae724592f,PodSandboxId:54c30c5b21cb68ff7ecc0a15a3796630513b6b8babd4136595b48a15c8c0e46a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712000181575330253,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e266be05bf5ad064ffb4f6640d02a4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3afe593e5630743da819e3ffa3d83347db0ea26c75f9962f53299aeb7908971,PodSandboxId:ccbdd6f2242b983fb5c3d9b66cd328655b1df693bebd4d3d791de4ccee015de0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712000181516361047,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9d8710c71f52a6a07b9c8992a48c4ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703de106af68a16664634902a8a17d7cab4162929e3bbb227023c02a54aa2ccb,PodSandboxId:3a5d118f886790f3832a5749b7c9c52e926c08a30a4f2bdfe8e9d95fc72d5608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711999890335692084,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1a481b9900ab0be9d25c3fe5e5d2391,},Annotations:map[string]string{io.kubernetes.container.hash: 905d1f56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a6a7488-5c03-4b50-9ad0-42d446b721e1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6dc0b65d11041       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   cd511767fbb1c       coredns-76f75df574-ws9cc
	169206bebd575       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   9 minutes ago       Running             kube-proxy                0                   4a2085bf15f4a       kube-proxy-p8wrc
	43690c88cc3da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   56e5910ae4920       storage-provisioner
	dd8b1e6605269       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   d50186ea9ce7d       coredns-76f75df574-lwsms
	84d63bce99597       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   9 minutes ago       Running             kube-apiserver            2                   53df00131c7c4       kube-apiserver-default-k8s-diff-port-734648
	1ae9f91816078       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   503c77c8e91fc       etcd-default-k8s-diff-port-734648
	2008ec87c0a6a       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   9 minutes ago       Running             kube-scheduler            2                   54c30c5b21cb6       kube-scheduler-default-k8s-diff-port-734648
	a3afe593e5630       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   9 minutes ago       Running             kube-controller-manager   2                   ccbdd6f2242b9       kube-controller-manager-default-k8s-diff-port-734648
	703de106af68a       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   14 minutes ago      Exited              kube-apiserver            1                   3a5d118f88679       kube-apiserver-default-k8s-diff-port-734648
	
	
	==> coredns [6dc0b65d110410481e29207b68fd411d2bd22658f09549893d66d3baea1811b3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [dd8b1e6605269a97ce86dc1d3da6272b70b139eaffac1d261e5997ce76baa3d8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-734648
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-734648
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=default-k8s-diff-port-734648
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_01T19_36_28_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 19:36:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-734648
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 19:45:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 19:41:54 +0000   Mon, 01 Apr 2024 19:36:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 19:41:54 +0000   Mon, 01 Apr 2024 19:36:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 19:41:54 +0000   Mon, 01 Apr 2024 19:36:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 19:41:54 +0000   Mon, 01 Apr 2024 19:36:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.145
	  Hostname:    default-k8s-diff-port-734648
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 013fe2ac987f4bf29814991554d9e27d
	  System UUID:                013fe2ac-987f-4bf2-9814-991554d9e27d
	  Boot ID:                    da921e59-4a04-4b3f-883d-bbec1f31759d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-lwsms                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 coredns-76f75df574-ws9cc                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 etcd-default-k8s-diff-port-734648                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-734648             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-734648    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-p8wrc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	  kube-system                 kube-scheduler-default-k8s-diff-port-734648             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-57f55c9bc5-fj5x5                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m2s                   kube-proxy       
	  Normal  Starting                 9m26s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m25s (x8 over 9m26s)  kubelet          Node default-k8s-diff-port-734648 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m25s (x8 over 9m26s)  kubelet          Node default-k8s-diff-port-734648 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m25s (x7 over 9m26s)  kubelet          Node default-k8s-diff-port-734648 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m18s                  kubelet          Node default-k8s-diff-port-734648 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s                  kubelet          Node default-k8s-diff-port-734648 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s                  kubelet          Node default-k8s-diff-port-734648 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m18s                  kubelet          Node default-k8s-diff-port-734648 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m18s                  kubelet          Node default-k8s-diff-port-734648 status is now: NodeReady
	  Normal  RegisteredNode           9m6s                   node-controller  Node default-k8s-diff-port-734648 event: Registered Node default-k8s-diff-port-734648 in Controller
	
	
	==> dmesg <==
	[  +0.052680] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043425] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.742503] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.480087] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.688648] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.417269] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.060406] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063164] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.182298] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.158244] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.334524] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +5.168883] systemd-fstab-generator[785]: Ignoring "noauto" option for root device
	[  +0.081733] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.498645] systemd-fstab-generator[918]: Ignoring "noauto" option for root device
	[  +5.632921] kauditd_printk_skb: 97 callbacks suppressed
	[  +8.783745] kauditd_printk_skb: 74 callbacks suppressed
	[Apr 1 19:36] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.833848] systemd-fstab-generator[3427]: Ignoring "noauto" option for root device
	[  +7.328393] systemd-fstab-generator[3757]: Ignoring "noauto" option for root device
	[  +0.120529] kauditd_printk_skb: 54 callbacks suppressed
	[ +13.333887] systemd-fstab-generator[3950]: Ignoring "noauto" option for root device
	[  +0.085674] kauditd_printk_skb: 12 callbacks suppressed
	[Apr 1 19:37] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [1ae9f91816078e0969df5e5c46a0ddfcaa977ab2e24d9b86467993539907c542] <==
	{"level":"info","ts":"2024-04-01T19:36:22.402172Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-01T19:36:22.402206Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-01T19:36:22.411713Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-01T19:36:22.412051Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"209571b9f0ad8882","initial-advertise-peer-urls":["https://192.168.61.145:2380"],"listen-peer-urls":["https://192.168.61.145:2380"],"advertise-client-urls":["https://192.168.61.145:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.145:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-01T19:36:22.412107Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-01T19:36:22.412199Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.145:2380"}
	{"level":"info","ts":"2024-04-01T19:36:22.412231Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.145:2380"}
	{"level":"info","ts":"2024-04-01T19:36:22.442919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"209571b9f0ad8882 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-01T19:36:22.443032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"209571b9f0ad8882 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-01T19:36:22.443164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"209571b9f0ad8882 received MsgPreVoteResp from 209571b9f0ad8882 at term 1"}
	{"level":"info","ts":"2024-04-01T19:36:22.443277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"209571b9f0ad8882 became candidate at term 2"}
	{"level":"info","ts":"2024-04-01T19:36:22.443306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"209571b9f0ad8882 received MsgVoteResp from 209571b9f0ad8882 at term 2"}
	{"level":"info","ts":"2024-04-01T19:36:22.443436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"209571b9f0ad8882 became leader at term 2"}
	{"level":"info","ts":"2024-04-01T19:36:22.443465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 209571b9f0ad8882 elected leader 209571b9f0ad8882 at term 2"}
	{"level":"info","ts":"2024-04-01T19:36:22.448142Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:36:22.452298Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"209571b9f0ad8882","local-member-attributes":"{Name:default-k8s-diff-port-734648 ClientURLs:[https://192.168.61.145:2379]}","request-path":"/0/members/209571b9f0ad8882/attributes","cluster-id":"2cb522128dbb8e4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-01T19:36:22.452632Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T19:36:22.452923Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T19:36:22.453574Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-01T19:36:22.456042Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-01T19:36:22.45482Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-01T19:36:22.465433Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.145:2379"}
	{"level":"info","ts":"2024-04-01T19:36:22.488473Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2cb522128dbb8e4","local-member-id":"209571b9f0ad8882","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:36:22.516966Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:36:22.517027Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 19:45:46 up 14 min,  0 users,  load average: 0.15, 0.19, 0.16
	Linux default-k8s-diff-port-734648 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [703de106af68a16664634902a8a17d7cab4162929e3bbb227023c02a54aa2ccb] <==
	W0401 19:36:16.963324       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:16.993035       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.000032       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.030114       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.055217       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.069620       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.116766       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.143706       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.296822       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.357634       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.361460       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.411104       1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.422261       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.437300       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.444120       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.482460       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.504736       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.538005       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.549107       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.607183       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.671099       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.833721       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.882769       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.973813       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:18.272025       1 logging.go:59] [core] [Channel #4 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [84d63bce9959717a8c3ae86d594587c1e2f33bbff95b4d3e917aa026ef54971b] <==
	I0401 19:39:43.381348       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:41:24.592718       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:41:24.592946       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0401 19:41:25.593679       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:41:25.593775       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0401 19:41:25.593786       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:41:25.594078       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:41:25.594217       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 19:41:25.595524       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:42:25.594510       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:42:25.594614       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0401 19:42:25.594625       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:42:25.595982       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:42:25.596058       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 19:42:25.596069       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:44:25.596009       1 handler_proxy.go:93] no RequestInfo found in the context
	W0401 19:44:25.596326       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:44:25.596441       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 19:44:25.596467       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0401 19:44:25.596514       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0401 19:44:25.598273       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a3afe593e5630743da819e3ffa3d83347db0ea26c75f9962f53299aeb7908971] <==
	I0401 19:40:11.390754       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:40:40.936760       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:40:41.399319       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:41:10.942566       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:41:11.407819       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:41:40.948225       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:41:41.415401       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:42:10.956105       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:42:11.424777       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0401 19:42:30.231356       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="267.03µs"
	E0401 19:42:40.962543       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:42:41.434157       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0401 19:42:44.229309       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="92.838µs"
	E0401 19:43:10.968831       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:43:11.447738       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:43:40.977341       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:43:41.459422       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:44:10.987533       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:44:11.469328       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:44:40.993558       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:44:41.477938       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:45:10.998653       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:45:11.486943       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:45:41.006289       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:45:41.496544       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [169206bebd575d2b244c81fa4c7a04e2731c4120950cb9682db1ac25ecb157eb] <==
	I0401 19:36:43.541719       1 server_others.go:72] "Using iptables proxy"
	I0401 19:36:43.590267       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.61.145"]
	I0401 19:36:43.655565       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0401 19:36:43.655584       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 19:36:43.655599       1 server_others.go:168] "Using iptables Proxier"
	I0401 19:36:43.658649       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 19:36:43.658927       1 server.go:865] "Version info" version="v1.29.3"
	I0401 19:36:43.659006       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 19:36:43.660257       1 config.go:188] "Starting service config controller"
	I0401 19:36:43.660326       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0401 19:36:43.660374       1 config.go:97] "Starting endpoint slice config controller"
	I0401 19:36:43.660391       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0401 19:36:43.661110       1 config.go:315] "Starting node config controller"
	I0401 19:36:43.662230       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0401 19:36:43.761472       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0401 19:36:43.761613       1 shared_informer.go:318] Caches are synced for service config
	I0401 19:36:43.762989       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [2008ec87c0a6ade1d9facfeca980ba3877ca7b17ab2487c9f2ac1a6ae724592f] <==
	W0401 19:36:24.613402       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 19:36:24.613414       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0401 19:36:25.466071       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 19:36:25.466182       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0401 19:36:25.495536       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 19:36:25.496983       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0401 19:36:25.502715       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0401 19:36:25.504178       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0401 19:36:25.580747       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0401 19:36:25.580777       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0401 19:36:25.610091       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 19:36:25.610143       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0401 19:36:25.656230       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 19:36:25.656298       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 19:36:25.723531       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 19:36:25.724523       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0401 19:36:25.785742       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 19:36:25.785837       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0401 19:36:25.810558       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 19:36:25.810618       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0401 19:36:25.856344       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 19:36:25.856369       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0401 19:36:25.880810       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 19:36:25.881105       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0401 19:36:27.599951       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 19:43:28 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:43:28.320119    3764 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 19:43:28 default-k8s-diff-port-734648 kubelet[3764]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 19:43:28 default-k8s-diff-port-734648 kubelet[3764]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 19:43:28 default-k8s-diff-port-734648 kubelet[3764]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 19:43:28 default-k8s-diff-port-734648 kubelet[3764]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 19:43:36 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:43:36.207621    3764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fj5x5" podUID="e25fa51c-d80e-4ddc-898f-3b9903746537"
	Apr 01 19:43:48 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:43:48.209445    3764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fj5x5" podUID="e25fa51c-d80e-4ddc-898f-3b9903746537"
	Apr 01 19:44:03 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:44:03.208063    3764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fj5x5" podUID="e25fa51c-d80e-4ddc-898f-3b9903746537"
	Apr 01 19:44:18 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:44:18.208714    3764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fj5x5" podUID="e25fa51c-d80e-4ddc-898f-3b9903746537"
	Apr 01 19:44:28 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:44:28.320173    3764 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 19:44:28 default-k8s-diff-port-734648 kubelet[3764]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 19:44:28 default-k8s-diff-port-734648 kubelet[3764]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 19:44:28 default-k8s-diff-port-734648 kubelet[3764]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 19:44:28 default-k8s-diff-port-734648 kubelet[3764]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 19:44:30 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:44:30.208005    3764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fj5x5" podUID="e25fa51c-d80e-4ddc-898f-3b9903746537"
	Apr 01 19:44:43 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:44:43.208276    3764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fj5x5" podUID="e25fa51c-d80e-4ddc-898f-3b9903746537"
	Apr 01 19:44:56 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:44:56.207795    3764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fj5x5" podUID="e25fa51c-d80e-4ddc-898f-3b9903746537"
	Apr 01 19:45:11 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:45:11.208288    3764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fj5x5" podUID="e25fa51c-d80e-4ddc-898f-3b9903746537"
	Apr 01 19:45:24 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:45:24.210145    3764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fj5x5" podUID="e25fa51c-d80e-4ddc-898f-3b9903746537"
	Apr 01 19:45:28 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:45:28.319255    3764 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 19:45:28 default-k8s-diff-port-734648 kubelet[3764]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 19:45:28 default-k8s-diff-port-734648 kubelet[3764]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 19:45:28 default-k8s-diff-port-734648 kubelet[3764]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 19:45:28 default-k8s-diff-port-734648 kubelet[3764]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 19:45:36 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:45:36.210038    3764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fj5x5" podUID="e25fa51c-d80e-4ddc-898f-3b9903746537"
	
	
	==> storage-provisioner [43690c88cc3da8e174d4465cd9001ba1e623e51cabaadd6e11de58cc57579c5c] <==
	I0401 19:36:43.417923       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0401 19:36:43.527519       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0401 19:36:43.527772       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0401 19:36:43.557206       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0401 19:36:43.557371       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-734648_e60a0bed-0065-437b-ba83-53f20be1a273!
	I0401 19:36:43.558461       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"119f664a-8113-4f88-ae73-d7c294587be6", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-734648_e60a0bed-0065-437b-ba83-53f20be1a273 became leader
	I0401 19:36:43.658487       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-734648_e60a0bed-0065-437b-ba83-53f20be1a273!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-734648 -n default-k8s-diff-port-734648
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-734648 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-fj5x5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-734648 describe pod metrics-server-57f55c9bc5-fj5x5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-734648 describe pod metrics-server-57f55c9bc5-fj5x5: exit status 1 (63.189658ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-fj5x5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-734648 describe pod metrics-server-57f55c9bc5-fj5x5: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.48s)
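Two recurring errors in the kubelet log above look like test-setup noise rather than the cause of this failure. The metrics-server ImagePullBackOff follows from the Audit table further down: the addon is enabled with --registries=MetricsServer=fake.domain, so fake.domain/registry.k8s.io/echoserver:1.4 can never be pulled. A minimal sketch for confirming that from the pod status (the k8s-app=metrics-server label selector is an assumption about the addon manifest; the context name is taken from this run):

	kubectl --context default-k8s-diff-port-734648 -n kube-system get pods -l k8s-app=metrics-server \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[*].state.waiting.reason}{"\n"}{end}'

The ip6tables canary messages appear similarly benign here: per the error text, the guest kernel has no ip6tables nat table (hence the insmod hint), so the KUBE-KUBELET-CANARY chain cannot be created for IPv6 and the kubelet simply logs the error and carries on.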

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.48s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0401 19:38:52.855197   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 19:39:16.856650   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
E0401 19:39:22.367860   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/auto-408543/client.crt: no such file or directory
E0401 19:39:37.798574   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: no such file or directory
E0401 19:39:45.496245   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/calico-408543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-472858 -n no-preload-472858
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-01 19:47:51.958719967 +0000 UTC m=+6101.414271157
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
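For reference, the wait above is for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace; the earlier cert_rotation errors reference client certificates of profiles deleted earlier in the run and appear unrelated. A minimal manual check against the same profile (a sketch; the context name is taken from this run, and the event query is only an assumption about where a failed deploy would surface):

	kubectl --context no-preload-472858 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context no-preload-472858 -n kubernetes-dashboard get events --sort-by=.lastTimestamp

An empty pod list, together with the 'addons enable dashboard -p no-preload-472858' entry in the Audit table below that never records an End Time, would point at the dashboard addon never finishing its deploy after the restart.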
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-472858 -n no-preload-472858
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-472858 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-472858 logs -n 25: (2.219681432s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p bridge-408543 sudo cat                              | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo                                  | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | containerd config dump                                 |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo                                  | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | systemctl status crio --all                            |                              |         |                |                     |                     |
	|         | --full --no-pager                                      |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo                                  | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo find                             | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo crio                             | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p bridge-408543                                       | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	| delete  | -p                                                     | disable-driver-mounts-580301 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | disable-driver-mounts-580301                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:24 UTC |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-472858             | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-472858                                   | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-882095            | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:24 UTC | 01 Apr 24 19:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-882095                                  | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:24 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-734648  | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC | 01 Apr 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC |                     |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-472858                  | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-472858                                   | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC | 01 Apr 24 19:38 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-163608        | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-882095                 | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-882095                                  | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC | 01 Apr 24 19:36 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-734648       | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:36 UTC |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-163608                              | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-163608             | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-163608                              | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 19:27:52
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 19:27:52.967684   71168 out.go:291] Setting OutFile to fd 1 ...
	I0401 19:27:52.967904   71168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:27:52.967912   71168 out.go:304] Setting ErrFile to fd 2...
	I0401 19:27:52.967916   71168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:27:52.968071   71168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 19:27:52.968601   71168 out.go:298] Setting JSON to false
	I0401 19:27:52.969458   71168 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7825,"bootTime":1711991848,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:27:52.969511   71168 start.go:139] virtualization: kvm guest
	I0401 19:27:52.972337   71168 out.go:177] * [old-k8s-version-163608] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 19:27:52.973728   71168 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 19:27:52.973774   71168 notify.go:220] Checking for updates...
	I0401 19:27:52.975050   71168 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:27:52.976498   71168 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:27:52.977880   71168 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:27:52.979140   71168 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 19:27:52.980397   71168 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 19:27:52.982116   71168 config.go:182] Loaded profile config "old-k8s-version-163608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 19:27:52.982478   71168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:27:52.982569   71168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:27:52.996903   71168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44083
	I0401 19:27:52.997230   71168 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:27:52.997702   71168 main.go:141] libmachine: Using API Version  1
	I0401 19:27:52.997724   71168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:27:52.998082   71168 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:27:52.998286   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:27:53.000287   71168 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0401 19:27:53.001714   71168 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 19:27:53.001993   71168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:27:53.002030   71168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:27:53.016155   71168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0401 19:27:53.016524   71168 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:27:53.016981   71168 main.go:141] libmachine: Using API Version  1
	I0401 19:27:53.017003   71168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:27:53.017352   71168 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:27:53.017550   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:27:53.051163   71168 out.go:177] * Using the kvm2 driver based on existing profile
	I0401 19:27:53.052475   71168 start.go:297] selected driver: kvm2
	I0401 19:27:53.052488   71168 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:27:53.052621   71168 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 19:27:53.053266   71168 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:27:53.053349   71168 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 19:27:53.067629   71168 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 19:27:53.067994   71168 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:27:53.068065   71168 cni.go:84] Creating CNI manager for ""
	I0401 19:27:53.068083   71168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:27:53.068130   71168 start.go:340] cluster config:
	{Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:27:53.068640   71168 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:27:53.070506   71168 out.go:177] * Starting "old-k8s-version-163608" primary control-plane node in "old-k8s-version-163608" cluster
	I0401 19:27:53.071686   71168 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 19:27:53.071716   71168 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0401 19:27:53.071726   71168 cache.go:56] Caching tarball of preloaded images
	I0401 19:27:53.071807   71168 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 19:27:53.071818   71168 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0401 19:27:53.071904   71168 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/config.json ...
	I0401 19:27:53.072076   71168 start.go:360] acquireMachinesLock for old-k8s-version-163608: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 19:27:57.821850   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:00.893934   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:06.973950   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:10.045903   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:16.125969   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:19.197902   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:25.277903   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:28.349963   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:34.429888   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:37.501886   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:43.581910   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:46.653871   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:52.733856   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:55.805957   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:01.885878   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:04.957919   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:11.037896   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:14.109854   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:20.189885   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:23.261848   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:29.341931   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:32.414013   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:38.493870   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:41.565912   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:47.645887   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:50.717882   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:56.797886   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:59.869824   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:05.949894   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:09.021905   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:15.101943   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:18.173911   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:24.253875   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:27.325874   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:33.405945   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:36.477889   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:39.482773   70687 start.go:364] duration metric: took 3m52.901392005s to acquireMachinesLock for "embed-certs-882095"
	I0401 19:30:39.482825   70687 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:30:39.482831   70687 fix.go:54] fixHost starting: 
	I0401 19:30:39.483206   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:30:39.483272   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:30:39.498155   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0401 19:30:39.498587   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:30:39.499013   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:30:39.499032   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:30:39.499400   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:30:39.499572   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:30:39.499760   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:30:39.501361   70687 fix.go:112] recreateIfNeeded on embed-certs-882095: state=Stopped err=<nil>
	I0401 19:30:39.501398   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	W0401 19:30:39.501552   70687 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:30:39.504183   70687 out.go:177] * Restarting existing kvm2 VM for "embed-certs-882095" ...
	I0401 19:30:39.505410   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Start
	I0401 19:30:39.505549   70687 main.go:141] libmachine: (embed-certs-882095) Ensuring networks are active...
	I0401 19:30:39.506257   70687 main.go:141] libmachine: (embed-certs-882095) Ensuring network default is active
	I0401 19:30:39.506533   70687 main.go:141] libmachine: (embed-certs-882095) Ensuring network mk-embed-certs-882095 is active
	I0401 19:30:39.506892   70687 main.go:141] libmachine: (embed-certs-882095) Getting domain xml...
	I0401 19:30:39.507632   70687 main.go:141] libmachine: (embed-certs-882095) Creating domain...
	I0401 19:30:40.693316   70687 main.go:141] libmachine: (embed-certs-882095) Waiting to get IP...
	I0401 19:30:40.694095   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:40.694551   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:40.694597   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:40.694519   71595 retry.go:31] will retry after 283.185096ms: waiting for machine to come up
	I0401 19:30:40.979028   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:40.979500   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:40.979523   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:40.979452   71595 retry.go:31] will retry after 297.637907ms: waiting for machine to come up
	I0401 19:30:41.279111   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:41.279457   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:41.279479   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:41.279411   71595 retry.go:31] will retry after 366.625363ms: waiting for machine to come up
	I0401 19:30:39.480214   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:30:39.480252   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:30:39.480557   70284 buildroot.go:166] provisioning hostname "no-preload-472858"
	I0401 19:30:39.480583   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:30:39.480787   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:30:39.482626   70284 machine.go:97] duration metric: took 4m37.415031648s to provisionDockerMachine
	I0401 19:30:39.482666   70284 fix.go:56] duration metric: took 4m37.43830515s for fixHost
	I0401 19:30:39.482676   70284 start.go:83] releasing machines lock for "no-preload-472858", held for 4m37.438344965s
	W0401 19:30:39.482704   70284 start.go:713] error starting host: provision: host is not running
	W0401 19:30:39.482794   70284 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0401 19:30:39.482805   70284 start.go:728] Will try again in 5 seconds ...
	I0401 19:30:41.647682   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:41.648045   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:41.648097   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:41.648026   71595 retry.go:31] will retry after 373.762437ms: waiting for machine to come up
	I0401 19:30:42.023500   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:42.023868   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:42.023904   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:42.023836   71595 retry.go:31] will retry after 461.430639ms: waiting for machine to come up
	I0401 19:30:42.486384   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:42.486836   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:42.486863   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:42.486784   71595 retry.go:31] will retry after 718.511667ms: waiting for machine to come up
	I0401 19:30:43.206555   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:43.206983   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:43.207006   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:43.206939   71595 retry.go:31] will retry after 907.934415ms: waiting for machine to come up
	I0401 19:30:44.115840   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:44.116223   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:44.116259   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:44.116173   71595 retry.go:31] will retry after 1.178492069s: waiting for machine to come up
	I0401 19:30:45.295704   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:45.296117   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:45.296146   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:45.296071   71595 retry.go:31] will retry after 1.188920707s: waiting for machine to come up
	I0401 19:30:44.484802   70284 start.go:360] acquireMachinesLock for no-preload-472858: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 19:30:46.486217   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:46.486777   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:46.486816   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:46.486740   71595 retry.go:31] will retry after 2.12728618s: waiting for machine to come up
	I0401 19:30:48.617124   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:48.617521   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:48.617553   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:48.617468   71595 retry.go:31] will retry after 2.867613028s: waiting for machine to come up
	I0401 19:30:51.488009   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:51.491502   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:51.491533   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:51.488532   71595 retry.go:31] will retry after 3.42206094s: waiting for machine to come up
	I0401 19:30:54.911723   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:54.912098   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:54.912127   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:54.912059   71595 retry.go:31] will retry after 4.263880792s: waiting for machine to come up
	I0401 19:31:00.450770   70962 start.go:364] duration metric: took 3m22.921307899s to acquireMachinesLock for "default-k8s-diff-port-734648"
	I0401 19:31:00.450836   70962 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:31:00.450854   70962 fix.go:54] fixHost starting: 
	I0401 19:31:00.451364   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:31:00.451401   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:31:00.467219   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45255
	I0401 19:31:00.467579   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:31:00.467998   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:31:00.468021   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:31:00.468368   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:31:00.468567   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:00.468740   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:31:00.470224   70962 fix.go:112] recreateIfNeeded on default-k8s-diff-port-734648: state=Stopped err=<nil>
	I0401 19:31:00.470251   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	W0401 19:31:00.470396   70962 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:31:00.472906   70962 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-734648" ...
	I0401 19:30:59.180302   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.180756   70687 main.go:141] libmachine: (embed-certs-882095) Found IP for machine: 192.168.39.190
	I0401 19:30:59.180778   70687 main.go:141] libmachine: (embed-certs-882095) Reserving static IP address...
	I0401 19:30:59.180794   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has current primary IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.181269   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "embed-certs-882095", mac: "52:54:00:8c:f1:a7", ip: "192.168.39.190"} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.181300   70687 main.go:141] libmachine: (embed-certs-882095) DBG | skip adding static IP to network mk-embed-certs-882095 - found existing host DHCP lease matching {name: "embed-certs-882095", mac: "52:54:00:8c:f1:a7", ip: "192.168.39.190"}
	I0401 19:30:59.181311   70687 main.go:141] libmachine: (embed-certs-882095) Reserved static IP address: 192.168.39.190
	I0401 19:30:59.181324   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Getting to WaitForSSH function...
	I0401 19:30:59.181331   70687 main.go:141] libmachine: (embed-certs-882095) Waiting for SSH to be available...
	I0401 19:30:59.183293   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.183599   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.183630   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.183756   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Using SSH client type: external
	I0401 19:30:59.183784   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa (-rw-------)
	I0401 19:30:59.183837   70687 main.go:141] libmachine: (embed-certs-882095) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:30:59.183863   70687 main.go:141] libmachine: (embed-certs-882095) DBG | About to run SSH command:
	I0401 19:30:59.183924   70687 main.go:141] libmachine: (embed-certs-882095) DBG | exit 0
	I0401 19:30:59.305707   70687 main.go:141] libmachine: (embed-certs-882095) DBG | SSH cmd err, output: <nil>: 
	I0401 19:30:59.306036   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetConfigRaw
	I0401 19:30:59.306679   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetIP
	I0401 19:30:59.309266   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.309680   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.309711   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.309938   70687 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/config.json ...
	I0401 19:30:59.310193   70687 machine.go:94] provisionDockerMachine start ...
	I0401 19:30:59.310219   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:30:59.310435   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.312549   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.312908   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.312930   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.313088   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.313247   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.313385   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.313502   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.313721   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:30:59.313894   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:30:59.313904   70687 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:30:59.418216   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:30:59.418244   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetMachineName
	I0401 19:30:59.418506   70687 buildroot.go:166] provisioning hostname "embed-certs-882095"
	I0401 19:30:59.418537   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetMachineName
	I0401 19:30:59.418703   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.421075   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.421411   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.421453   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.421534   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.421721   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.421867   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.421978   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.422122   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:30:59.422317   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:30:59.422332   70687 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-882095 && echo "embed-certs-882095" | sudo tee /etc/hostname
	I0401 19:30:59.541974   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-882095
	
	I0401 19:30:59.542006   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.544628   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.544992   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.545025   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.545193   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.545403   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.545566   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.545720   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.545906   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:30:59.546060   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:30:59.546077   70687 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-882095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-882095/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-882095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:30:59.660103   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
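
The shell snippet above is the idempotent /etc/hosts update minikube runs over SSH right after setting the hostname: it rewrites an existing 127.0.1.1 entry, or appends one only if no entry for the new hostname exists. A minimal Go sketch that assembles the same one-liner for an arbitrary hostname (the helper name hostsUpdateCmd is ours, not minikube's):

package main

import "fmt"

// hostsUpdateCmd returns a shell command that maps 127.0.1.1 to the given
// hostname in /etc/hosts, editing an existing entry or appending a new one.
func hostsUpdateCmd(hostname string) string {
	return fmt.Sprintf(
		`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() { fmt.Println(hostsUpdateCmd("embed-certs-882095")) }
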
	I0401 19:30:59.660134   70687 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:30:59.660161   70687 buildroot.go:174] setting up certificates
	I0401 19:30:59.660172   70687 provision.go:84] configureAuth start
	I0401 19:30:59.660193   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetMachineName
	I0401 19:30:59.660465   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetIP
	I0401 19:30:59.662943   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.663260   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.663302   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.663413   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.665390   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.665688   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.665719   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.665821   70687 provision.go:143] copyHostCerts
	I0401 19:30:59.665879   70687 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:30:59.665892   70687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:30:59.665956   70687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:30:59.666041   70687 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:30:59.666048   70687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:30:59.666071   70687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:30:59.666121   70687 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:30:59.666128   70687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:30:59.666148   70687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:30:59.666193   70687 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.embed-certs-882095 san=[127.0.0.1 192.168.39.190 embed-certs-882095 localhost minikube]
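
provision.go generates machines/server.pem here: a server certificate signed by the shared minikube CA, with the SANs listed in the log (loopback, the node IP 192.168.39.190, and the hostname aliases). A hedged sketch of that step using the standard crypto/x509 package; issueServerCert is our name, and loading ca.pem / ca-key.pem is replaced by a throwaway CA so the example runs on its own:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// issueServerCert signs a server certificate carrying the SANs from the log.
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-882095"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // illustrative lifetime
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-882095", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.190")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return err
	}
	return pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem and ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	if err := issueServerCert(caCert, caKey); err != nil {
		panic(err)
	}
}
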
	I0401 19:30:59.761975   70687 provision.go:177] copyRemoteCerts
	I0401 19:30:59.762033   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:30:59.762058   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.764277   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.764601   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.764626   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.764832   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.765006   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.765155   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.765250   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:30:59.848158   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 19:30:59.875879   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:30:59.902573   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 19:30:59.928757   70687 provision.go:87] duration metric: took 268.570153ms to configureAuth
	I0401 19:30:59.928781   70687 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:30:59.928924   70687 config.go:182] Loaded profile config "embed-certs-882095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:30:59.928988   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.931187   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.931571   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.931600   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.931755   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.931914   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.932067   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.932176   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.932325   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:30:59.932506   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:30:59.932530   70687 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:31:00.214527   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:31:00.214552   70687 machine.go:97] duration metric: took 904.342981ms to provisionDockerMachine
	I0401 19:31:00.214563   70687 start.go:293] postStartSetup for "embed-certs-882095" (driver="kvm2")
	I0401 19:31:00.214574   70687 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:31:00.214587   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.214892   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:31:00.214920   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:31:00.217289   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.217580   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.217608   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.217828   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:31:00.218014   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.218137   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:31:00.218267   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:31:00.301379   70687 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:31:00.306211   70687 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:31:00.306231   70687 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:31:00.306284   70687 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:31:00.306377   70687 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:31:00.306459   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:31:00.316524   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:00.342848   70687 start.go:296] duration metric: took 128.272743ms for postStartSetup
	I0401 19:31:00.342887   70687 fix.go:56] duration metric: took 20.860054972s for fixHost
	I0401 19:31:00.342910   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:31:00.345429   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.345883   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.345915   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.346060   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:31:00.346288   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.346504   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.346656   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:31:00.346806   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:00.346961   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:31:00.346972   70687 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 19:31:00.450606   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999860.420567604
	
	I0401 19:31:00.450627   70687 fix.go:216] guest clock: 1711999860.420567604
	I0401 19:31:00.450635   70687 fix.go:229] Guest: 2024-04-01 19:31:00.420567604 +0000 UTC Remote: 2024-04-01 19:31:00.34289204 +0000 UTC m=+253.905703085 (delta=77.675564ms)
	I0401 19:31:00.450683   70687 fix.go:200] guest clock delta is within tolerance: 77.675564ms
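
The fix.go lines above compare the guest clock (read with date +%s.%N over SSH) against the host clock and accept the machine when the drift is small; here the delta is about 77.7 ms. A small sketch of that comparison, with the timestamps taken from the log; the 2-second tolerance is an assumption for illustration, not minikube's actual threshold:

package main

import (
	"fmt"
	"time"
)

// clockWithinTolerance reports whether guest and host clocks agree to within
// the given tolerance.
func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	guest := time.Unix(1711999860, 420567604) // parsed from date +%s.%N on the VM
	host := time.Date(2024, time.April, 1, 19, 31, 0, 342892040, time.UTC)
	fmt.Printf("delta=%s ok=%v\n", guest.Sub(host), clockWithinTolerance(guest, host, 2*time.Second))
}
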
	I0401 19:31:00.450693   70687 start.go:83] releasing machines lock for "embed-certs-882095", held for 20.967887876s
	I0401 19:31:00.450725   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.451011   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetIP
	I0401 19:31:00.453581   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.453959   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.453990   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.454112   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.454613   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.454788   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.454844   70687 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:31:00.454886   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:31:00.454997   70687 ssh_runner.go:195] Run: cat /version.json
	I0401 19:31:00.455019   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:31:00.457540   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.457811   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.457846   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.457878   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.458053   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:31:00.458141   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.458173   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.458217   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.458295   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:31:00.458387   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:31:00.458471   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.458556   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:31:00.458602   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:31:00.458741   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:31:00.569039   70687 ssh_runner.go:195] Run: systemctl --version
	I0401 19:31:00.575452   70687 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:31:00.728549   70687 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:31:00.735559   70687 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:31:00.735642   70687 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:31:00.756640   70687 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:31:00.756669   70687 start.go:494] detecting cgroup driver to use...
	I0401 19:31:00.756743   70687 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:31:00.776638   70687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:31:00.793006   70687 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:31:00.793063   70687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:31:00.809240   70687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:31:00.825245   70687 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:31:00.952595   70687 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:31:01.109771   70687 docker.go:233] disabling docker service ...
	I0401 19:31:01.109841   70687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:31:01.126814   70687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:31:01.141976   70687 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:31:01.301634   70687 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:31:01.440350   70687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:31:01.458083   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:31:01.479653   70687 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:31:01.479730   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.492598   70687 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:31:01.492677   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.506469   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.521981   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.534406   70687 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:31:01.546817   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.558857   70687 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.578922   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
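
The sed commands above converge /etc/crio/crio.conf.d/02-crio.conf on four settings: the pause image, the cgroupfs cgroup manager, conmon running in the pod cgroup, and unprivileged low ports allowed via default_sysctls. minikube edits the existing file in place with sed; the sketch below instead writes an equivalent drop-in, and the section headers and exact contents are reconstructed from the sed expressions rather than copied from the VM:

package main

import "os"

// writeCrioDropIn writes the settings the in-place edits above end up with.
func writeCrioDropIn(path string) error {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`
	return os.WriteFile(path, []byte(conf), 0o644)
}

func main() {
	if err := writeCrioDropIn("02-crio.conf"); err != nil {
		panic(err)
	}
}
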
	I0401 19:31:01.593381   70687 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:31:01.605265   70687 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:31:01.605341   70687 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:31:01.621681   70687 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:31:01.633336   70687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:01.770373   70687 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:31:01.927892   70687 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:31:01.927952   70687 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:31:01.935046   70687 start.go:562] Will wait 60s for crictl version
	I0401 19:31:01.935101   70687 ssh_runner.go:195] Run: which crictl
	I0401 19:31:01.940563   70687 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:31:01.986956   70687 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:31:01.987030   70687 ssh_runner.go:195] Run: crio --version
	I0401 19:31:02.018567   70687 ssh_runner.go:195] Run: crio --version
	I0401 19:31:02.059077   70687 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 19:31:00.474118   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Start
	I0401 19:31:00.474275   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Ensuring networks are active...
	I0401 19:31:00.474896   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Ensuring network default is active
	I0401 19:31:00.475289   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Ensuring network mk-default-k8s-diff-port-734648 is active
	I0401 19:31:00.475650   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Getting domain xml...
	I0401 19:31:00.476263   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Creating domain...
	I0401 19:31:01.736646   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting to get IP...
	I0401 19:31:01.737490   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:01.737889   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:01.737939   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:01.737867   71724 retry.go:31] will retry after 198.445345ms: waiting for machine to come up
	I0401 19:31:01.938446   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:01.938981   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:01.939012   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:01.938936   71724 retry.go:31] will retry after 320.128802ms: waiting for machine to come up
	I0401 19:31:02.260257   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:02.260673   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:02.260703   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:02.260633   71724 retry.go:31] will retry after 357.316906ms: waiting for machine to come up
	I0401 19:31:02.060343   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetIP
	I0401 19:31:02.063382   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:02.063775   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:02.063808   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:02.064047   70687 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 19:31:02.069227   70687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:02.085344   70687 kubeadm.go:877] updating cluster {Name:embed-certs-882095 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-882095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:31:02.085451   70687 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 19:31:02.085490   70687 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:02.139383   70687 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0401 19:31:02.139454   70687 ssh_runner.go:195] Run: which lz4
	I0401 19:31:02.144331   70687 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0401 19:31:02.149534   70687 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:31:02.149561   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0401 19:31:03.954448   70687 crio.go:462] duration metric: took 1.810143668s to copy over tarball
	I0401 19:31:03.954523   70687 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 19:31:06.445735   70687 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.491184732s)
	I0401 19:31:06.445759   70687 crio.go:469] duration metric: took 2.491285648s to extract the tarball
	I0401 19:31:06.445765   70687 ssh_runner.go:146] rm: /preloaded.tar.lz4
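
Because the stat check above showed /preloaded.tar.lz4 missing, the preloaded-images tarball (~403 MB) was copied over SSH and unpacked into /var with tar and lz4, which is why the crictl images listing succeeds afterwards. A hedged sketch of the extraction step only, shelling out to the same tar invocation the log shows:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload unpacks the preloaded-images tarball into /var, preserving
// the security.capability extended attribute, as in the log.
func extractPreload(tarball string) (time.Duration, error) {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	err := cmd.Run()
	return time.Since(start), err
}

func main() {
	d, err := extractPreload("/preloaded.tar.lz4")
	fmt.Printf("extraction took %s, err=%v\n", d, err)
}
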
	I0401 19:31:02.620250   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:02.620729   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:02.620760   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:02.620666   71724 retry.go:31] will retry after 520.509423ms: waiting for machine to come up
	I0401 19:31:03.142471   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:03.142902   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:03.142930   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:03.142864   71724 retry.go:31] will retry after 714.309176ms: waiting for machine to come up
	I0401 19:31:03.858594   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:03.859071   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:03.859104   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:03.859035   71724 retry.go:31] will retry after 620.601084ms: waiting for machine to come up
	I0401 19:31:04.480923   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:04.481350   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:04.481381   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:04.481313   71724 retry.go:31] will retry after 1.00716549s: waiting for machine to come up
	I0401 19:31:05.489788   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:05.490243   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:05.490273   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:05.490186   71724 retry.go:31] will retry after 1.158564029s: waiting for machine to come up
	I0401 19:31:06.650440   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:06.650969   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:06.650997   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:06.650915   71724 retry.go:31] will retry after 1.172294728s: waiting for machine to come up
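
The interleaved 70962 lines show the default-k8s-diff-port-734648 machine being restarted: libvirt is polled for a DHCP lease and, while no IP is visible yet, the driver retries with a growing delay (198 ms, 320 ms, 357 ms, up to around a second). A generic sketch of that retry pattern; the delays in the log suggest jittered backoff rather than the plain doubling used here:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry calls probe until it succeeds or attempts run out, roughly doubling
// the wait between tries, mirroring the "will retry after ..." lines above.
func retry(attempts int, initial time.Duration, probe func() error) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := probe(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay *= 2
	}
	return errors.New("machine did not report an IP in time")
}

func main() {
	_ = retry(10, 200*time.Millisecond, func() error {
		return errors.New("unable to find current IP address") // placeholder probe
	})
}
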
	I0401 19:31:06.485475   70687 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:06.532426   70687 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:31:06.532448   70687 cache_images.go:84] Images are preloaded, skipping loading
	I0401 19:31:06.532455   70687 kubeadm.go:928] updating node { 192.168.39.190 8443 v1.29.3 crio true true} ...
	I0401 19:31:06.532544   70687 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-882095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-882095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
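
kubeadm.go:940 dumps the kubelet systemd override generated for this node: ExecStart is cleared and re-set so the kubelet runs with the profile's hostname override and node IP. A sketch that renders the same drop-in with text/template; the template text is reduced to what the log shows and the flag set is not exhaustive:

package main

import (
	"os"
	"text/template"
)

// kubeletUnit mirrors the [Unit]/[Service]/[Install] fragment in the log.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = tmpl.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.29.3",
		"NodeName":          "embed-certs-882095",
		"NodeIP":            "192.168.39.190",
	})
}
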
	I0401 19:31:06.532611   70687 ssh_runner.go:195] Run: crio config
	I0401 19:31:06.585119   70687 cni.go:84] Creating CNI manager for ""
	I0401 19:31:06.585144   70687 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:06.585158   70687 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:31:06.585185   70687 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.190 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-882095 NodeName:embed-certs-882095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:31:06.585374   70687 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.190
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-882095"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.190
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.190"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
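
The kubeadm.yaml dumped above stacks four API objects in one file: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta3), KubeletConfiguration (kubelet.config.k8s.io/v1beta1) and KubeProxyConfiguration (kubeproxy.config.k8s.io/v1alpha1). A small sketch that walks the multi-document file and prints each object's kind, using gopkg.in/yaml.v3 as an assumed dependency; this decoder is only an illustration, not how minikube produces the file:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
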
	I0401 19:31:06.585473   70687 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 19:31:06.596747   70687 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:31:06.596818   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:31:06.606959   70687 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0401 19:31:06.628202   70687 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:31:06.649043   70687 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0401 19:31:06.668400   70687 ssh_runner.go:195] Run: grep 192.168.39.190	control-plane.minikube.internal$ /etc/hosts
	I0401 19:31:06.672469   70687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:06.685666   70687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:06.806186   70687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:31:06.823315   70687 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095 for IP: 192.168.39.190
	I0401 19:31:06.823355   70687 certs.go:194] generating shared ca certs ...
	I0401 19:31:06.823376   70687 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:06.823569   70687 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:31:06.823645   70687 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:31:06.823659   70687 certs.go:256] generating profile certs ...
	I0401 19:31:06.823764   70687 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/client.key
	I0401 19:31:06.823872   70687 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/apiserver.key.c07921ce
	I0401 19:31:06.823945   70687 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/proxy-client.key
	I0401 19:31:06.824092   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:31:06.824132   70687 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:31:06.824145   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:31:06.824183   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:31:06.824223   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:31:06.824254   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:31:06.824309   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:06.824942   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:31:06.867274   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:31:06.907288   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:31:06.948328   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:31:06.975058   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 19:31:07.003183   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 19:31:07.032030   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:31:07.061612   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:31:07.090149   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:31:07.116885   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:31:07.143296   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:31:07.169420   70687 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:31:07.188908   70687 ssh_runner.go:195] Run: openssl version
	I0401 19:31:07.195591   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:31:07.211583   70687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:31:07.217049   70687 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:31:07.217110   70687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:31:07.223751   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:31:07.237393   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:31:07.250523   70687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:07.255928   70687 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:07.255981   70687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:07.262373   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:31:07.275174   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:31:07.288039   70687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:31:07.293339   70687 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:31:07.293392   70687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:31:07.299983   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:31:07.313120   70687 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:31:07.318425   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:31:07.325172   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:31:07.331674   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:31:07.338299   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:31:07.344896   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:31:07.351424   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
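
Each openssl x509 -checkend 86400 call above asks whether a control-plane certificate expires within the next 24 hours. An equivalent check in Go, with the path taken from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring openssl x509 -noout -checkend <seconds>.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, "err:", err)
}
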
	I0401 19:31:07.357898   70687 kubeadm.go:391] StartCluster: {Name:embed-certs-882095 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:embed-certs-882095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:31:07.357995   70687 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:31:07.358047   70687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:07.401268   70687 cri.go:89] found id: ""
	I0401 19:31:07.401326   70687 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:31:07.414232   70687 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:31:07.414255   70687 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:31:07.414262   70687 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:31:07.414308   70687 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:31:07.425972   70687 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:31:07.426977   70687 kubeconfig.go:125] found "embed-certs-882095" server: "https://192.168.39.190:8443"
	I0401 19:31:07.428767   70687 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:31:07.440164   70687 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.190
	I0401 19:31:07.440191   70687 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:31:07.440201   70687 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:31:07.440244   70687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:07.484303   70687 cri.go:89] found id: ""
	I0401 19:31:07.484407   70687 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:31:07.505186   70687 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:31:07.518316   70687 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:31:07.518342   70687 kubeadm.go:156] found existing configuration files:
	
	I0401 19:31:07.518393   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:31:07.530759   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:31:07.530832   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:31:07.542799   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:31:07.553972   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:31:07.554031   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:31:07.565324   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:31:07.576244   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:31:07.576318   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:31:07.588874   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:31:07.600440   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:31:07.600526   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:31:07.611963   70687 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:31:07.623225   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:07.740800   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:09.050887   70687 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.310046744s)
	I0401 19:31:09.050920   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:09.266170   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:09.336585   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:09.422513   70687 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:31:09.422594   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:09.923709   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:10.422822   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:10.922892   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:10.946590   70687 api_server.go:72] duration metric: took 1.524076694s to wait for apiserver process to appear ...
	I0401 19:31:10.946627   70687 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:31:10.946650   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
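The api_server.go lines above, and the healthz dumps that follow, show minikube polling the apiserver's /healthz endpoint roughly every 500ms until it answers 200. A minimal Go sketch of that style of polling loop, assuming the endpoint and timeout seen in this log and skipping the client-certificate setup the real check performs (which is why the anonymous request above gets a 403 before the 500s and the eventual 200):

// healthzwait.go: poll an apiserver /healthz endpoint until it returns 200
// or a timeout expires. Sketch only; endpoint, timeout, and cadence are
// assumptions taken from the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The real check authenticates with cluster certificates; this sketch
	// skips TLS verification purely for illustration.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.190:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}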
	I0401 19:31:07.825239   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:07.825629   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:07.825676   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:07.825586   71724 retry.go:31] will retry after 1.412332675s: waiting for machine to come up
	I0401 19:31:09.240010   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:09.240385   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:09.240416   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:09.240327   71724 retry.go:31] will retry after 2.601344034s: waiting for machine to come up
	I0401 19:31:11.843464   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:11.843948   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:11.843976   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:11.843900   71724 retry.go:31] will retry after 3.297720076s: waiting for machine to come up
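The libmachine lines above retry the IP lookup with a growing delay while waiting for the default-k8s-diff-port-734648 VM to obtain a DHCP lease. A small sketch of that retry-with-backoff pattern; lookupIP is a hypothetical stand-in for the libvirt lease query:

// ipretry.go: retry an IP lookup with a growing, slightly jittered delay,
// mirroring the "will retry after ..." lines above. lookupIP is a stand-in.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is hypothetical; the real code asks libvirt for the DHCP lease.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(maxAttempts int) (string, error) {
	delay := time.Second
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2 // grow the base delay between attempts
	}
	return "", fmt.Errorf("machine did not come up after %d attempts", maxAttempts)
}

func main() {
	if ip, err := waitForIP(10); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}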
	I0401 19:31:13.350274   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:31:13.350309   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:31:13.350325   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:13.383494   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:13.383543   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:13.447744   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:13.452796   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:13.452852   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:13.946971   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:13.951522   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:13.951554   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:14.447104   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:14.455165   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:14.455204   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:14.947278   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:14.951487   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I0401 19:31:14.958647   70687 api_server.go:141] control plane version: v1.29.3
	I0401 19:31:14.958670   70687 api_server.go:131] duration metric: took 4.012036456s to wait for apiserver health ...
	I0401 19:31:14.958687   70687 cni.go:84] Creating CNI manager for ""
	I0401 19:31:14.958693   70687 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:14.960494   70687 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:31:14.961899   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:31:14.973709   70687 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
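The cni.go and ssh_runner lines above show minikube choosing the bridge CNI for the kvm2 + crio combination and copying a conflist into /etc/cni/net.d. Below is a representative bridge conflist written from Go, for illustration only; the field values are assumptions and not the exact 457-byte file the log transfers:

// writecni.go: write a representative bridge CNI conflist.
// The JSON values are illustrative assumptions, not minikube's exact file.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// 0644 so the kubelet and CRI-O can read the file; writing to /etc
	// requires root, as the "sudo mkdir -p /etc/cni/net.d" above suggests.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}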
	I0401 19:31:14.998105   70687 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:31:15.008481   70687 system_pods.go:59] 8 kube-system pods found
	I0401 19:31:15.008525   70687 system_pods.go:61] "coredns-76f75df574-nvcq4" [663bd69b-6da8-4a66-b20f-ea1eb507096a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:31:15.008536   70687 system_pods.go:61] "etcd-embed-certs-882095" [2b56dddc-b309-4965-811e-459c59b86dac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0401 19:31:15.008551   70687 system_pods.go:61] "kube-apiserver-embed-certs-882095" [2e376ce4-504c-441a-baf8-0184a17e5bf4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0401 19:31:15.008561   70687 system_pods.go:61] "kube-controller-manager-embed-certs-882095" [e6bf3b2f-289b-4719-86f7-43e873fe8d85] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0401 19:31:15.008571   70687 system_pods.go:61] "kube-proxy-td6jk" [275536ff-4ec0-4d2c-8658-57aadda367b2] Running
	I0401 19:31:15.008580   70687 system_pods.go:61] "kube-scheduler-embed-certs-882095" [4551eb2a-9560-4d4f-aac0-9cfe6c790649] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0401 19:31:15.008591   70687 system_pods.go:61] "metrics-server-57f55c9bc5-g6z6c" [dc8aee6a-f101-4109-a259-351fddbddd44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:31:15.008599   70687 system_pods.go:61] "storage-provisioner" [82a76833-c874-45d8-8ba7-1a483c15a997] Running
	I0401 19:31:15.008609   70687 system_pods.go:74] duration metric: took 10.480741ms to wait for pod list to return data ...
	I0401 19:31:15.008622   70687 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:31:15.012256   70687 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:31:15.012289   70687 node_conditions.go:123] node cpu capacity is 2
	I0401 19:31:15.012303   70687 node_conditions.go:105] duration metric: took 3.672159ms to run NodePressure ...
	I0401 19:31:15.012327   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:15.288861   70687 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0401 19:31:15.293731   70687 kubeadm.go:733] kubelet initialised
	I0401 19:31:15.293750   70687 kubeadm.go:734] duration metric: took 4.868595ms waiting for restarted kubelet to initialise ...
	I0401 19:31:15.293758   70687 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:31:15.298657   70687 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-nvcq4" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.304795   70687 pod_ready.go:97] node "embed-certs-882095" hosting pod "coredns-76f75df574-nvcq4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.304813   70687 pod_ready.go:81] duration metric: took 6.134849ms for pod "coredns-76f75df574-nvcq4" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:15.304822   70687 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-882095" hosting pod "coredns-76f75df574-nvcq4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.304827   70687 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.309184   70687 pod_ready.go:97] node "embed-certs-882095" hosting pod "etcd-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.309204   70687 pod_ready.go:81] duration metric: took 4.369325ms for pod "etcd-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:15.309213   70687 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-882095" hosting pod "etcd-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.309221   70687 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.313737   70687 pod_ready.go:97] node "embed-certs-882095" hosting pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.313755   70687 pod_ready.go:81] duration metric: took 4.525801ms for pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:15.313764   70687 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-882095" hosting pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.313771   70687 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.401827   70687 pod_ready.go:97] node "embed-certs-882095" hosting pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.401857   70687 pod_ready.go:81] duration metric: took 88.077915ms for pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:15.401871   70687 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-882095" hosting pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.401878   70687 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-td6jk" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.802462   70687 pod_ready.go:92] pod "kube-proxy-td6jk" in "kube-system" namespace has status "Ready":"True"
	I0401 19:31:15.802484   70687 pod_ready.go:81] duration metric: took 400.599194ms for pod "kube-proxy-td6jk" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.802494   70687 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
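The pod_ready.go lines above wait, pod by pod, for the Ready condition while tolerating a node that is not yet Ready. A minimal client-go sketch of that per-pod readiness check; the kubeconfig path, pod name, and poll interval are assumptions rather than minikube's actual values:

// podready.go: poll a kube-system pod until its PodReady condition is True.
// Sketch only; kubeconfig path, pod name, and timings are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-embed-certs-882095", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second): // assumed poll interval
		}
	}
}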
	I0401 19:31:15.142653   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:15.143000   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:15.143062   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:15.142972   71724 retry.go:31] will retry after 3.764823961s: waiting for machine to come up
	I0401 19:31:20.350903   71168 start.go:364] duration metric: took 3m27.278785625s to acquireMachinesLock for "old-k8s-version-163608"
	I0401 19:31:20.350993   71168 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:31:20.351010   71168 fix.go:54] fixHost starting: 
	I0401 19:31:20.351490   71168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:31:20.351571   71168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:31:20.368575   71168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38247
	I0401 19:31:20.368936   71168 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:31:20.369448   71168 main.go:141] libmachine: Using API Version  1
	I0401 19:31:20.369469   71168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:31:20.369822   71168 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:31:20.370033   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:20.370195   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetState
	I0401 19:31:20.371625   71168 fix.go:112] recreateIfNeeded on old-k8s-version-163608: state=Stopped err=<nil>
	I0401 19:31:20.371681   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	W0401 19:31:20.371842   71168 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:31:20.374328   71168 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-163608" ...
	I0401 19:31:17.809256   70687 pod_ready.go:102] pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:19.809947   70687 pod_ready.go:102] pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:20.818455   70687 pod_ready.go:92] pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:31:20.818481   70687 pod_ready.go:81] duration metric: took 5.015979611s for pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:20.818493   70687 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:18.910798   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:18.911231   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Found IP for machine: 192.168.61.145
	I0401 19:31:18.911266   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has current primary IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:18.911277   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Reserving static IP address...
	I0401 19:31:18.911761   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-734648", mac: "52:54:00:49:dc:50", ip: "192.168.61.145"} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:18.911795   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | skip adding static IP to network mk-default-k8s-diff-port-734648 - found existing host DHCP lease matching {name: "default-k8s-diff-port-734648", mac: "52:54:00:49:dc:50", ip: "192.168.61.145"}
	I0401 19:31:18.911819   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Reserved static IP address: 192.168.61.145
	I0401 19:31:18.911835   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for SSH to be available...
	I0401 19:31:18.911869   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Getting to WaitForSSH function...
	I0401 19:31:18.913767   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:18.914054   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:18.914082   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:18.914207   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Using SSH client type: external
	I0401 19:31:18.914236   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa (-rw-------)
	I0401 19:31:18.914278   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.145 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:31:18.914300   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | About to run SSH command:
	I0401 19:31:18.914313   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | exit 0
	I0401 19:31:19.037713   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | SSH cmd err, output: <nil>: 
	I0401 19:31:19.038080   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetConfigRaw
	I0401 19:31:19.038767   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetIP
	I0401 19:31:19.042390   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.043249   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.043311   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.043949   70962 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/config.json ...
	I0401 19:31:19.044504   70962 machine.go:94] provisionDockerMachine start ...
	I0401 19:31:19.044554   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:19.044916   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.047637   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.047908   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.047941   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.048088   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.048265   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.048408   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.048522   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.048636   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:19.048790   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:19.048800   70962 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:31:19.154415   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:31:19.154444   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetMachineName
	I0401 19:31:19.154683   70962 buildroot.go:166] provisioning hostname "default-k8s-diff-port-734648"
	I0401 19:31:19.154713   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetMachineName
	I0401 19:31:19.154887   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.157442   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.157867   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.157896   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.158041   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.158237   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.158402   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.158540   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.158713   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:19.158905   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:19.158920   70962 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-734648 && echo "default-k8s-diff-port-734648" | sudo tee /etc/hostname
	I0401 19:31:19.276129   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-734648
	
	I0401 19:31:19.276160   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.278657   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.278918   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.278940   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.279158   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.279353   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.279523   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.279671   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.279831   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:19.280057   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:19.280082   70962 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-734648' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-734648/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-734648' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:31:19.395730   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:31:19.395755   70962 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:31:19.395779   70962 buildroot.go:174] setting up certificates
	I0401 19:31:19.395788   70962 provision.go:84] configureAuth start
	I0401 19:31:19.395798   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetMachineName
	I0401 19:31:19.396046   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetIP
	I0401 19:31:19.398668   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.399036   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.399065   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.399219   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.401309   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.401611   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.401656   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.401750   70962 provision.go:143] copyHostCerts
	I0401 19:31:19.401812   70962 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:31:19.401822   70962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:31:19.401876   70962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:31:19.401978   70962 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:31:19.401988   70962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:31:19.402015   70962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:31:19.402121   70962 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:31:19.402129   70962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:31:19.402147   70962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:31:19.402205   70962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-734648 san=[127.0.0.1 192.168.61.145 default-k8s-diff-port-734648 localhost minikube]
	I0401 19:31:19.655203   70962 provision.go:177] copyRemoteCerts
	I0401 19:31:19.655256   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:31:19.655281   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.658194   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.658512   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.658540   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.658693   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.658896   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.659039   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.659187   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:31:19.743131   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:31:19.771327   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0401 19:31:19.797350   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 19:31:19.824244   70962 provision.go:87] duration metric: took 428.444366ms to configureAuth
	I0401 19:31:19.824274   70962 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:31:19.824473   70962 config.go:182] Loaded profile config "default-k8s-diff-port-734648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:31:19.824563   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.827376   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.827798   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.827838   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.827984   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.828184   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.828352   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.828496   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.828653   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:19.828827   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:19.828865   70962 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:31:20.107291   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:31:20.107320   70962 machine.go:97] duration metric: took 1.062788118s to provisionDockerMachine
	I0401 19:31:20.107333   70962 start.go:293] postStartSetup for "default-k8s-diff-port-734648" (driver="kvm2")
	I0401 19:31:20.107347   70962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:31:20.107369   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.107671   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:31:20.107693   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:20.110380   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.110739   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.110780   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.110895   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:20.111075   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.111218   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:20.111353   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:31:20.193908   70962 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:31:20.198544   70962 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:31:20.198572   70962 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:31:20.198639   70962 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:31:20.198704   70962 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:31:20.198788   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:31:20.209866   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:20.240362   70962 start.go:296] duration metric: took 133.016405ms for postStartSetup
	I0401 19:31:20.240399   70962 fix.go:56] duration metric: took 19.789546756s for fixHost
	I0401 19:31:20.240418   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:20.243069   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.243448   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.243479   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.243657   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:20.243865   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.244061   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.244209   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:20.244399   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:20.244600   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:20.244616   70962 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 19:31:20.350752   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999880.326440079
	
	I0401 19:31:20.350779   70962 fix.go:216] guest clock: 1711999880.326440079
	I0401 19:31:20.350789   70962 fix.go:229] Guest: 2024-04-01 19:31:20.326440079 +0000 UTC Remote: 2024-04-01 19:31:20.240403038 +0000 UTC m=+222.858311555 (delta=86.037041ms)
	I0401 19:31:20.350808   70962 fix.go:200] guest clock delta is within tolerance: 86.037041ms
	I0401 19:31:20.350812   70962 start.go:83] releasing machines lock for "default-k8s-diff-port-734648", held for 19.899997669s
	I0401 19:31:20.350838   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.351118   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetIP
	I0401 19:31:20.354040   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.354395   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.354413   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.354595   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.355068   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.355238   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.355317   70962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:31:20.355356   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:20.355530   70962 ssh_runner.go:195] Run: cat /version.json
	I0401 19:31:20.355557   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:20.357970   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.358372   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.358405   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.358430   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.358585   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:20.358766   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.358807   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.358834   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.358957   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:20.359013   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:20.359150   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.359203   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:31:20.359292   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:20.359439   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:31:20.466422   70962 ssh_runner.go:195] Run: systemctl --version
	I0401 19:31:20.472949   70962 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:31:20.626069   70962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:31:20.633425   70962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:31:20.633497   70962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:31:20.658883   70962 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:31:20.658910   70962 start.go:494] detecting cgroup driver to use...
	I0401 19:31:20.658979   70962 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:31:20.686302   70962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:31:20.704507   70962 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:31:20.704583   70962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:31:20.725216   70962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:31:20.740635   70962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:31:20.864184   70962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:31:21.010752   70962 docker.go:233] disabling docker service ...
	I0401 19:31:21.010821   70962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:31:21.030718   70962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:31:21.047787   70962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:31:21.194455   70962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:31:21.337547   70962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:31:21.357144   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:31:21.381709   70962 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:31:21.381782   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.393160   70962 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:31:21.393229   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.405047   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.416810   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.428947   70962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:31:21.440886   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.452872   70962 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.473096   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.484427   70962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:31:21.494121   70962 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:31:21.494190   70962 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:31:21.509859   70962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:31:21.520329   70962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:21.671075   70962 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:31:21.818822   70962 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:31:21.818892   70962 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:31:21.825189   70962 start.go:562] Will wait 60s for crictl version
	I0401 19:31:21.825260   70962 ssh_runner.go:195] Run: which crictl
	I0401 19:31:21.830058   70962 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:31:21.869617   70962 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:31:21.869721   70962 ssh_runner.go:195] Run: crio --version
	I0401 19:31:21.906091   70962 ssh_runner.go:195] Run: crio --version
	I0401 19:31:21.946240   70962 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
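The step above waits up to 60s for /var/run/crio/crio.sock to appear and then queries crictl/crio for the runtime version before declaring the runtime ready. A minimal stdlib-only Go sketch of that wait-for-socket pattern (the helper name and retry interval are illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket polls a unix socket until it accepts a connection or the
// deadline passes, mirroring the "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("socket %s not ready after %v: %w", path, timeout, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is accepting connections")
}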
	I0401 19:31:21.947653   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetIP
	I0401 19:31:21.950691   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:21.951156   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:21.951201   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:21.951445   70962 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0401 19:31:21.959376   70962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
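The host.minikube.internal entry is refreshed by filtering any old line out of /etc/hosts and appending the new mapping, so the update is idempotent. A rough Go sketch of the same rewrite, assuming it writes to a scratch path rather than /etc/hosts directly (the real command uses a temp file plus sudo cp):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any existing line for host and appends "ip\thost",
// mirroring the grep -v / echo / cp pipeline in the log above.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	err := upsertHostsEntry("/tmp/hosts.new", "192.168.61.1", "host.minikube.internal")
	fmt.Println(err)
}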
	I0401 19:31:21.974226   70962 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-734648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.29.3 ClusterName:default-k8s-diff-port-734648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.145 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:31:21.974348   70962 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 19:31:21.974426   70962 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:22.011856   70962 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0401 19:31:22.011930   70962 ssh_runner.go:195] Run: which lz4
	I0401 19:31:22.016672   70962 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 19:31:22.021864   70962 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:31:22.021893   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
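Before shipping the ~403 MB preload tarball, the run first stats /preloaded.tar.lz4 on the guest and only transfers it when that check fails. A small local sketch of the same check-then-copy pattern (paths are placeholders; the real flow runs stat over SSH and copies with scp):

package main

import (
	"fmt"
	"io"
	"os"
)

// copyIfMissing copies src to dst only when dst does not already exist,
// mirroring the stat -> scp sequence in the log above.
func copyIfMissing(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, skip the expensive transfer
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	if err := copyIfMissing("preloaded-images.tar.lz4", "/tmp/preloaded.tar.lz4"); err != nil {
		fmt.Println("copy failed:", err)
	}
}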
	I0401 19:31:20.375755   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .Start
	I0401 19:31:20.375932   71168 main.go:141] libmachine: (old-k8s-version-163608) Ensuring networks are active...
	I0401 19:31:20.376713   71168 main.go:141] libmachine: (old-k8s-version-163608) Ensuring network default is active
	I0401 19:31:20.377858   71168 main.go:141] libmachine: (old-k8s-version-163608) Ensuring network mk-old-k8s-version-163608 is active
	I0401 19:31:20.378278   71168 main.go:141] libmachine: (old-k8s-version-163608) Getting domain xml...
	I0401 19:31:20.378972   71168 main.go:141] libmachine: (old-k8s-version-163608) Creating domain...
	I0401 19:31:21.643237   71168 main.go:141] libmachine: (old-k8s-version-163608) Waiting to get IP...
	I0401 19:31:21.644082   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:21.644468   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:21.644535   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:21.644446   71902 retry.go:31] will retry after 208.251344ms: waiting for machine to come up
	I0401 19:31:21.854070   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:21.854545   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:21.854593   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:21.854527   71902 retry.go:31] will retry after 240.466964ms: waiting for machine to come up
	I0401 19:31:22.096940   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:22.097447   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:22.097470   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:22.097405   71902 retry.go:31] will retry after 480.217755ms: waiting for machine to come up
	I0401 19:31:22.579111   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:22.579596   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:22.579628   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:22.579518   71902 retry.go:31] will retry after 581.713487ms: waiting for machine to come up
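The old-k8s-version-163608 machine is polled for its DHCP lease with steadily growing retry delays (208ms, 240ms, 480ms, ...). A small sketch of that retry-with-backoff loop, assuming a stand-in lookup function that reports whether an IP has been assigned yet (the function name and growth factor are illustrative):

package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoIP = errors.New("no IP assigned yet")

// lookupIP is a stand-in for querying the DHCP leases of the libvirt network.
func lookupIP() (string, error) { return "", errNoIP }

// waitForIP retries lookupIP with a growing delay, like the
// "will retry after ..." messages in the log above.
func waitForIP(maxWait time.Duration) (string, error) {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		time.Sleep(delay)
		delay = delay * 3 / 2 // back off, roughly matching the log's increasing waits
	}
	return "", fmt.Errorf("machine did not get an IP within %v", maxWait)
}

func main() {
	ip, err := waitForIP(2 * time.Second)
	fmt.Println(ip, err)
}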
	I0401 19:31:22.826723   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:25.326165   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:23.813558   70962 crio.go:462] duration metric: took 1.796902191s to copy over tarball
	I0401 19:31:23.813619   70962 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 19:31:26.447802   70962 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.634145928s)
	I0401 19:31:26.447840   70962 crio.go:469] duration metric: took 2.634257029s to extract the tarball
	I0401 19:31:26.447849   70962 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 19:31:26.488228   70962 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:26.535741   70962 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:31:26.535770   70962 cache_images.go:84] Images are preloaded, skipping loading
	I0401 19:31:26.535780   70962 kubeadm.go:928] updating node { 192.168.61.145 8444 v1.29.3 crio true true} ...
	I0401 19:31:26.535931   70962 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-734648 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-734648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:31:26.536019   70962 ssh_runner.go:195] Run: crio config
	I0401 19:31:26.590211   70962 cni.go:84] Creating CNI manager for ""
	I0401 19:31:26.590239   70962 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:26.590254   70962 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:31:26.590282   70962 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.145 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-734648 NodeName:default-k8s-diff-port-734648 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:31:26.590459   70962 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.145
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-734648"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:31:26.590533   70962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 19:31:26.602186   70962 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:31:26.602264   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:31:26.616193   70962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0401 19:31:26.636634   70962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:31:26.660339   70962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
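The rendered kubeadm config shown above is written to /var/tmp/minikube/kubeadm.yaml.new and later diffed against the config already on the node. A quick stdlib-only sketch of sanity-checking such a rendered file for the values that matter in this run (the checked fields and helper are illustrative, not how minikube validates it):

package main

import (
	"fmt"
	"os"
	"strings"
)

// checkRenderedConfig verifies that a rendered kubeadm YAML mentions the
// expected advertise address, bind port and CRI socket before it is used.
func checkRenderedConfig(path string, wantFields ...string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	text := string(data)
	for _, f := range wantFields {
		if !strings.Contains(text, f) {
			return fmt.Errorf("rendered config %s is missing %q", path, f)
		}
	}
	return nil
}

func main() {
	err := checkRenderedConfig("/var/tmp/minikube/kubeadm.yaml.new",
		"advertiseAddress: 192.168.61.145",
		"bindPort: 8444",
		"criSocket: unix:///var/run/crio/crio.sock",
	)
	fmt.Println(err)
}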
	I0401 19:31:26.687935   70962 ssh_runner.go:195] Run: grep 192.168.61.145	control-plane.minikube.internal$ /etc/hosts
	I0401 19:31:26.693966   70962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.145	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:26.709876   70962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:26.854990   70962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:31:26.877303   70962 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648 for IP: 192.168.61.145
	I0401 19:31:26.877327   70962 certs.go:194] generating shared ca certs ...
	I0401 19:31:26.877350   70962 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:26.877578   70962 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:31:26.877621   70962 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:31:26.877637   70962 certs.go:256] generating profile certs ...
	I0401 19:31:26.877777   70962 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/client.key
	I0401 19:31:26.877864   70962 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/apiserver.key.e4671486
	I0401 19:31:26.877909   70962 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/proxy-client.key
	I0401 19:31:26.878007   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:31:26.878049   70962 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:31:26.878062   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:31:26.878094   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:31:26.878128   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:31:26.878153   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:31:26.878203   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:26.879101   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:31:26.917600   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:31:26.968606   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:31:27.012527   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:31:27.078525   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 19:31:27.125195   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:31:27.157190   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:31:27.185434   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:31:27.215215   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:31:27.246938   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:31:27.277210   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:31:27.307099   70962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:31:27.326664   70962 ssh_runner.go:195] Run: openssl version
	I0401 19:31:27.333292   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:31:27.344724   70962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:27.350096   70962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:27.350146   70962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:27.356421   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:31:27.368124   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:31:27.379331   70962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:31:27.384465   70962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:31:27.384518   70962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:31:27.391192   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:31:27.403898   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:31:27.418676   70962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:31:27.424254   70962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:31:27.424308   70962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:31:23.163331   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:23.163803   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:23.163838   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:23.163770   71902 retry.go:31] will retry after 737.12898ms: waiting for machine to come up
	I0401 19:31:23.902739   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:23.903192   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:23.903222   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:23.903139   71902 retry.go:31] will retry after 718.826495ms: waiting for machine to come up
	I0401 19:31:24.624169   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:24.624620   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:24.624648   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:24.624574   71902 retry.go:31] will retry after 1.020701715s: waiting for machine to come up
	I0401 19:31:25.647470   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:25.647957   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:25.647988   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:25.647921   71902 retry.go:31] will retry after 1.318891306s: waiting for machine to come up
	I0401 19:31:26.968134   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:26.968588   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:26.968613   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:26.968535   71902 retry.go:31] will retry after 1.465864517s: waiting for machine to come up
	I0401 19:31:27.752110   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:29.827324   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:27.431798   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:31:27.749367   70962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:31:27.757123   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:31:27.768626   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:31:27.778119   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:31:27.786893   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:31:27.797129   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:31:27.804804   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
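Each control-plane certificate above is checked with openssl x509 -checkend 86400, i.e. "does it expire within the next 24 hours". The equivalent check in Go with crypto/x509, as a hedged sketch (the path is one of the certs from the log; error handling is kept minimal):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question openssl x509 -checkend answers in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}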
	I0401 19:31:27.813194   70962 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-734648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.3 ClusterName:default-k8s-diff-port-734648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.145 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:31:27.813274   70962 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:31:27.813325   70962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:27.864565   70962 cri.go:89] found id: ""
	I0401 19:31:27.864637   70962 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:31:27.876745   70962 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:31:27.876789   70962 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:31:27.876797   70962 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:31:27.876862   70962 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:31:27.887494   70962 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:31:27.888632   70962 kubeconfig.go:125] found "default-k8s-diff-port-734648" server: "https://192.168.61.145:8444"
	I0401 19:31:27.890729   70962 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:31:27.900847   70962 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.145
	I0401 19:31:27.900877   70962 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:31:27.900889   70962 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:31:27.900936   70962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:27.952874   70962 cri.go:89] found id: ""
	I0401 19:31:27.952954   70962 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:31:27.971647   70962 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:31:27.982541   70962 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:31:27.982576   70962 kubeadm.go:156] found existing configuration files:
	
	I0401 19:31:27.982612   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0401 19:31:27.992341   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:31:27.992414   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:31:28.002685   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0401 19:31:28.012599   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:31:28.012658   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:31:28.022731   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0401 19:31:28.033584   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:31:28.033661   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:31:28.044940   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0401 19:31:28.055832   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:31:28.055886   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:31:28.066919   70962 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:31:28.078715   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:28.212251   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:29.214190   70962 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.001904972s)
	I0401 19:31:29.214224   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:29.444484   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:29.536112   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:29.664087   70962 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:31:29.664201   70962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:30.165117   70962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:30.664872   70962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:30.707251   70962 api_server.go:72] duration metric: took 1.04316448s to wait for apiserver process to appear ...
	I0401 19:31:30.707280   70962 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:31:30.707297   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:30.707881   70962 api_server.go:269] stopped: https://192.168.61.145:8444/healthz: Get "https://192.168.61.145:8444/healthz": dial tcp 192.168.61.145:8444: connect: connection refused
	I0401 19:31:31.207434   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:28.435890   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:28.436304   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:28.436334   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:28.436255   71902 retry.go:31] will retry after 2.062597688s: waiting for machine to come up
	I0401 19:31:30.500523   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:30.500999   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:30.501027   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:30.500954   71902 retry.go:31] will retry after 2.068480339s: waiting for machine to come up
	I0401 19:31:32.571229   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:32.571603   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:32.571635   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:32.571550   71902 retry.go:31] will retry after 3.355965883s: waiting for machine to come up
	I0401 19:31:33.707613   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:31:33.707647   70962 api_server.go:103] status: https://192.168.61.145:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:31:33.707663   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:33.728509   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:31:33.728582   70962 api_server.go:103] status: https://192.168.61.145:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:31:34.208163   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:34.212754   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:34.212784   70962 api_server.go:103] status: https://192.168.61.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:34.708282   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:34.715268   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:34.715294   70962 api_server.go:103] status: https://192.168.61.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:35.207460   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:35.212542   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 200:
	ok
	I0401 19:31:35.219264   70962 api_server.go:141] control plane version: v1.29.3
	I0401 19:31:35.219287   70962 api_server.go:131] duration metric: took 4.512000334s to wait for apiserver health ...
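	Note: the block above shows api_server.go polling https://192.168.61.145:8444/healthz until the aggregated check stops returning 500 (the rbac/bootstrap-roles post-start hook is the last one to clear). The following is a minimal, illustrative sketch of that kind of polling loop; the function name, timeout, and retry interval are assumptions for the sketch, not minikube's actual implementation.

// healthzpoll: illustrative sketch of waiting for a kube-apiserver /healthz
// endpoint to return 200 OK, in the spirit of the api_server.go wait above.
// Timeout, interval, and names are assumptions.
package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed certificate here, so verification
		// is skipped for this illustration only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("apiserver did not become healthy before the deadline")
}

func main() {
	if err := waitForHealthz("https://192.168.61.145:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}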
	I0401 19:31:35.219294   70962 cni.go:84] Creating CNI manager for ""
	I0401 19:31:35.219309   70962 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:35.221080   70962 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:31:31.828694   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:34.325740   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:35.222800   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:31:35.238787   70962 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
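	Note: the two lines above correspond to the "Configuring bridge CNI" step, which writes a conflist into /etc/cni/net.d on the guest. Below is an illustrative sketch of writing a generic bridge + host-local conflist; the JSON content, subnet, cniVersion, and output path are assumptions, not the exact 457-byte file minikube generated.

// cniconflist: illustrative sketch of writing a bridge CNI conflist,
// mirroring the scp to /etc/cni/net.d/1-k8s.conflist seen above.
package main

import "os"

const bridgeConflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`

func main() {
	// The log writes to /etc/cni/net.d/1-k8s.conflist on the guest; the sketch
	// writes to the current directory so it needs no root privileges.
	if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}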
	I0401 19:31:35.286002   70962 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:31:35.302379   70962 system_pods.go:59] 8 kube-system pods found
	I0401 19:31:35.302420   70962 system_pods.go:61] "coredns-76f75df574-tdwrh" [c1d3b591-fa81-46dd-847c-ffdfc22937fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:31:35.302437   70962 system_pods.go:61] "etcd-default-k8s-diff-port-734648" [e977793d-ec92-40b8-a0fe-1b2400fb1af6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0401 19:31:35.302447   70962 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-734648" [2d0eae31-35c3-40aa-9d28-a2f51849c15d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0401 19:31:35.302469   70962 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-734648" [cded1171-2e1b-4d70-9f26-d1d3a6558da1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0401 19:31:35.302483   70962 system_pods.go:61] "kube-proxy-mn546" [f9b6366f-7095-418c-ba24-529c0555f438] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:31:35.302493   70962 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-734648" [c1518ece-8cbf-49fe-9091-15b38dc1bd62] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0401 19:31:35.302504   70962 system_pods.go:61] "metrics-server-57f55c9bc5-g7mg2" [d1ede79a-a7e6-42bd-a799-197ffc7c7939] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:31:35.302519   70962 system_pods.go:61] "storage-provisioner" [bd55f9c8-580c-4eb1-adbc-020d5bbedce9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:31:35.302532   70962 system_pods.go:74] duration metric: took 16.508651ms to wait for pod list to return data ...
	I0401 19:31:35.302545   70962 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:31:35.305826   70962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:31:35.305862   70962 node_conditions.go:123] node cpu capacity is 2
	I0401 19:31:35.305876   70962 node_conditions.go:105] duration metric: took 3.322577ms to run NodePressure ...
	I0401 19:31:35.305895   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:35.603225   70962 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0401 19:31:35.608584   70962 kubeadm.go:733] kubelet initialised
	I0401 19:31:35.608611   70962 kubeadm.go:734] duration metric: took 5.361549ms waiting for restarted kubelet to initialise ...
	I0401 19:31:35.608620   70962 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:31:35.615252   70962 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-tdwrh" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.620605   70962 pod_ready.go:97] node "default-k8s-diff-port-734648" hosting pod "coredns-76f75df574-tdwrh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.620627   70962 pod_ready.go:81] duration metric: took 5.353257ms for pod "coredns-76f75df574-tdwrh" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:35.620634   70962 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-734648" hosting pod "coredns-76f75df574-tdwrh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.620641   70962 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.625280   70962 pod_ready.go:97] node "default-k8s-diff-port-734648" hosting pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.625297   70962 pod_ready.go:81] duration metric: took 4.646748ms for pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:35.625311   70962 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-734648" hosting pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.625325   70962 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.630150   70962 pod_ready.go:97] node "default-k8s-diff-port-734648" hosting pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.630170   70962 pod_ready.go:81] duration metric: took 4.83409ms for pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:35.630178   70962 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-734648" hosting pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.630184   70962 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.693865   70962 pod_ready.go:97] node "default-k8s-diff-port-734648" hosting pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.693890   70962 pod_ready.go:81] duration metric: took 63.697397ms for pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:35.693901   70962 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-734648" hosting pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.693908   70962 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mn546" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:36.090904   70962 pod_ready.go:92] pod "kube-proxy-mn546" in "kube-system" namespace has status "Ready":"True"
	I0401 19:31:36.090928   70962 pod_ready.go:81] duration metric: took 397.013717ms for pod "kube-proxy-mn546" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:36.090938   70962 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
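	Note: pod_ready.go above treats a pod as "Ready" only when its standard PodReady condition is True, and skips pods hosted on a node whose own Ready condition is False. The sketch below shows that condition check using the Kubernetes API types; it is an illustration of the check, not minikube's code.

// podready: illustrative sketch of the Ready-condition check performed by
// pod_ready.go in the log above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{}
	pod.Status.Conditions = []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	}
	fmt.Println(isPodReady(pod)) // false until the kubelet marks the pod Ready
}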
	I0401 19:31:35.929498   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:35.930010   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:35.930042   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:35.929963   71902 retry.go:31] will retry after 3.806123644s: waiting for machine to come up
	I0401 19:31:41.203538   70284 start.go:364] duration metric: took 56.718693538s to acquireMachinesLock for "no-preload-472858"
	I0401 19:31:41.203592   70284 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:31:41.203607   70284 fix.go:54] fixHost starting: 
	I0401 19:31:41.204096   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:31:41.204143   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:31:41.221574   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42471
	I0401 19:31:41.222045   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:31:41.222527   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:31:41.222547   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:31:41.222856   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:31:41.223051   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:31:41.223209   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:31:41.224801   70284 fix.go:112] recreateIfNeeded on no-preload-472858: state=Stopped err=<nil>
	I0401 19:31:41.224827   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	W0401 19:31:41.224979   70284 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:31:41.226937   70284 out.go:177] * Restarting existing kvm2 VM for "no-preload-472858" ...
	I0401 19:31:36.824790   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:38.824976   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:40.827269   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:41.228315   70284 main.go:141] libmachine: (no-preload-472858) Calling .Start
	I0401 19:31:41.228509   70284 main.go:141] libmachine: (no-preload-472858) Ensuring networks are active...
	I0401 19:31:41.229206   70284 main.go:141] libmachine: (no-preload-472858) Ensuring network default is active
	I0401 19:31:41.229603   70284 main.go:141] libmachine: (no-preload-472858) Ensuring network mk-no-preload-472858 is active
	I0401 19:31:41.229999   70284 main.go:141] libmachine: (no-preload-472858) Getting domain xml...
	I0401 19:31:41.230682   70284 main.go:141] libmachine: (no-preload-472858) Creating domain...
	I0401 19:31:38.097417   70962 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:40.098187   70962 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:42.099891   70962 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:39.739700   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.740313   71168 main.go:141] libmachine: (old-k8s-version-163608) Found IP for machine: 192.168.50.106
	I0401 19:31:39.740369   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has current primary IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.740386   71168 main.go:141] libmachine: (old-k8s-version-163608) Reserving static IP address...
	I0401 19:31:39.740767   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "old-k8s-version-163608", mac: "52:54:00:fe:1b:e7", ip: "192.168.50.106"} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.740798   71168 main.go:141] libmachine: (old-k8s-version-163608) Reserved static IP address: 192.168.50.106
	I0401 19:31:39.740818   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | skip adding static IP to network mk-old-k8s-version-163608 - found existing host DHCP lease matching {name: "old-k8s-version-163608", mac: "52:54:00:fe:1b:e7", ip: "192.168.50.106"}
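	Note: the "found host DHCP lease matching" lines above show the kvm2 driver resolving the VM's IP by matching its MAC address against the libvirt network's DHCP leases. The sketch below is a rough, illustrative equivalent that shells out to virsh; the real driver talks to libvirt directly, and the shell-out, parsing, and function name are assumptions made only for the sketch.

// leaselookup: illustrative sketch of mapping a MAC address to its DHCP lease
// IP in a libvirt network, as the driver does in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ipForMAC scans `virsh net-dhcp-leases <network>` output for the given MAC
// and returns the leased IP with its /prefix suffix stripped.
func ipForMAC(network, mac string) (string, error) {
	out, err := exec.Command("virsh", "--connect", "qemu:///system",
		"net-dhcp-leases", network).Output()
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(out), "\n") {
		if !strings.Contains(strings.ToLower(line), strings.ToLower(mac)) {
			continue
		}
		for _, field := range strings.Fields(line) {
			// The IP column looks like 192.168.50.106/24.
			if strings.Contains(field, "/") && strings.Count(field, ".") == 3 {
				return strings.SplitN(field, "/", 2)[0], nil
			}
		}
	}
	return "", fmt.Errorf("no DHCP lease for %s in network %s", mac, network)
}

func main() {
	ip, err := ipForMAC("mk-old-k8s-version-163608", "52:54:00:fe:1b:e7")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(ip) // e.g. 192.168.50.106
}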
	I0401 19:31:39.740839   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | Getting to WaitForSSH function...
	I0401 19:31:39.740857   71168 main.go:141] libmachine: (old-k8s-version-163608) Waiting for SSH to be available...
	I0401 19:31:39.743023   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.743417   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.743447   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.743589   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | Using SSH client type: external
	I0401 19:31:39.743614   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa (-rw-------)
	I0401 19:31:39.743648   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:31:39.743662   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | About to run SSH command:
	I0401 19:31:39.743676   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | exit 0
	I0401 19:31:39.877699   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | SSH cmd err, output: <nil>: 
	I0401 19:31:39.878044   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetConfigRaw
	I0401 19:31:39.878611   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:39.880733   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.881074   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.881107   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.881352   71168 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/config.json ...
	I0401 19:31:39.881510   71168 machine.go:94] provisionDockerMachine start ...
	I0401 19:31:39.881529   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:39.881766   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:39.883980   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.884318   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.884360   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.884483   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:39.884675   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.884877   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.885029   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:39.885175   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:39.885339   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:39.885349   71168 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:31:39.994935   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:31:39.994971   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:31:39.995213   71168 buildroot.go:166] provisioning hostname "old-k8s-version-163608"
	I0401 19:31:39.995241   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:31:39.995472   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:39.998179   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.998490   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.998525   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.998656   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:39.998805   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.998949   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.999054   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:39.999183   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:39.999372   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:39.999390   71168 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-163608 && echo "old-k8s-version-163608" | sudo tee /etc/hostname
	I0401 19:31:40.128852   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-163608
	
	I0401 19:31:40.128880   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.131508   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.131817   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.131874   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.131987   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.132188   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.132365   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.132503   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.132693   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:40.132890   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:40.132908   71168 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-163608' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-163608/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-163608' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:31:40.252693   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:31:40.252727   71168 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:31:40.252749   71168 buildroot.go:174] setting up certificates
	I0401 19:31:40.252759   71168 provision.go:84] configureAuth start
	I0401 19:31:40.252767   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:31:40.253030   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:40.255827   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.256183   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.256210   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.256418   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.259041   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.259388   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.259418   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.259540   71168 provision.go:143] copyHostCerts
	I0401 19:31:40.259592   71168 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:31:40.259602   71168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:31:40.259654   71168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:31:40.259745   71168 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:31:40.259754   71168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:31:40.259773   71168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:31:40.259822   71168 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:31:40.259830   71168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:31:40.259846   71168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:31:40.259891   71168 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-163608 san=[127.0.0.1 192.168.50.106 localhost minikube old-k8s-version-163608]
	I0401 19:31:40.465177   71168 provision.go:177] copyRemoteCerts
	I0401 19:31:40.465241   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:31:40.465265   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.467676   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.468040   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.468070   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.468272   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.468456   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.468622   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.468767   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:40.557764   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:31:40.585326   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0401 19:31:40.611671   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 19:31:40.639265   71168 provision.go:87] duration metric: took 386.497023ms to configureAuth
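	Note: configureAuth above copies the host CA material and generates a server certificate whose SANs include 127.0.0.1, 192.168.50.106, localhost, minikube, and old-k8s-version-163608, then copies the resulting PEM files to /etc/docker on the guest. The sketch below shows one way to produce such a server certificate with Go's crypto/x509; the self-signed stand-in CA, key sizes, validity period, and file names are assumptions, and error handling is omitted for brevity.

// servercert: illustrative sketch of generating a server certificate with the
// SAN list printed by provision.go above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// A freshly generated CA stands in for minikube's ca.pem/ca-key.pem here.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "old-k8s-version-163608"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-163608"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.106")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// PEM files like these are what the log later copies to /etc/docker.
	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0o644)
	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)}), 0o600)
}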
	I0401 19:31:40.639296   71168 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:31:40.639521   71168 config.go:182] Loaded profile config "old-k8s-version-163608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 19:31:40.639590   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.642321   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.642733   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.642762   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.642921   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.643122   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.643294   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.643442   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.643647   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:40.643802   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:40.643819   71168 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:31:40.940619   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:31:40.940647   71168 machine.go:97] duration metric: took 1.059122816s to provisionDockerMachine
	I0401 19:31:40.940661   71168 start.go:293] postStartSetup for "old-k8s-version-163608" (driver="kvm2")
	I0401 19:31:40.940672   71168 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:31:40.940687   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:40.940955   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:31:40.940981   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.943787   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.944159   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.944197   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.944347   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.944556   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.944700   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.944834   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:41.035824   71168 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:31:41.040975   71168 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:31:41.041007   71168 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:31:41.041085   71168 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:31:41.041165   71168 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:31:41.041255   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:31:41.052356   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:41.080699   71168 start.go:296] duration metric: took 140.024653ms for postStartSetup
	I0401 19:31:41.080737   71168 fix.go:56] duration metric: took 20.729726297s for fixHost
	I0401 19:31:41.080759   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:41.083664   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.084045   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.084075   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.084202   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:41.084405   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.084599   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.084796   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:41.084971   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:41.085169   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:41.085180   71168 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 19:31:41.203392   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999901.182365994
	
	I0401 19:31:41.203412   71168 fix.go:216] guest clock: 1711999901.182365994
	I0401 19:31:41.203419   71168 fix.go:229] Guest: 2024-04-01 19:31:41.182365994 +0000 UTC Remote: 2024-04-01 19:31:41.080741553 +0000 UTC m=+228.159955492 (delta=101.624441ms)
	I0401 19:31:41.203437   71168 fix.go:200] guest clock delta is within tolerance: 101.624441ms
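	Note: fix.go above reads the guest clock with `date +%s.%N`, compares it with the host-side timestamp, and accepts the ~101 ms delta as within tolerance. The sketch below shows the parse-and-compare step; the helper name and the 2-second tolerance are assumptions, not minikube's exact values.

// clockdelta: illustrative sketch of the guest-clock comparison logged above.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1711999901.182365994" (seconds.nanoseconds) into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1711999901.182365994")
	if err != nil {
		fmt.Println(err)
		return
	}
	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance, not minikube's exact value
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}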
	I0401 19:31:41.203442   71168 start.go:83] releasing machines lock for "old-k8s-version-163608", held for 20.852486097s
	I0401 19:31:41.203462   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.203744   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:41.206582   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.206952   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.206973   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.207151   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.207701   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.207891   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.207954   71168 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:31:41.207996   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:41.208096   71168 ssh_runner.go:195] Run: cat /version.json
	I0401 19:31:41.208127   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:41.210731   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.210928   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.211107   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.211132   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.211317   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:41.211446   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.211488   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.211491   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.211636   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:41.211692   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:41.211783   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:41.211891   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.212031   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:41.212187   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:41.296330   71168 ssh_runner.go:195] Run: systemctl --version
	I0401 19:31:41.326247   71168 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:31:41.479411   71168 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:31:41.486996   71168 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:31:41.487063   71168 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:31:41.507840   71168 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:31:41.507870   71168 start.go:494] detecting cgroup driver to use...
	I0401 19:31:41.507942   71168 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:31:41.533063   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:31:41.551699   71168 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:31:41.551754   71168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:31:41.568078   71168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:31:41.584278   71168 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:31:41.726884   71168 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:31:41.882514   71168 docker.go:233] disabling docker service ...
	I0401 19:31:41.882587   71168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:31:41.901235   71168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:31:41.919787   71168 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:31:42.082420   71168 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:31:42.248527   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:31:42.266610   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:31:42.295677   71168 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0401 19:31:42.295740   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.313855   71168 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:31:42.313920   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.327176   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.339527   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.351220   71168 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:31:42.363716   71168 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:31:42.379911   71168 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:31:42.379971   71168 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:31:42.395282   71168 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:31:42.407713   71168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:42.579648   71168 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:31:42.764748   71168 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:31:42.764858   71168 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:31:42.771038   71168 start.go:562] Will wait 60s for crictl version
	I0401 19:31:42.771125   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:42.775871   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:31:42.823135   71168 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:31:42.823218   71168 ssh_runner.go:195] Run: crio --version
	I0401 19:31:42.863748   71168 ssh_runner.go:195] Run: crio --version
	I0401 19:31:42.900263   71168 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0401 19:31:42.901631   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:42.904464   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:42.904773   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:42.904812   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:42.905048   71168 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0401 19:31:42.910117   71168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:42.925313   71168 kubeadm.go:877] updating cluster {Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:31:42.925475   71168 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 19:31:42.925542   71168 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:42.828772   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:44.829527   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:42.553437   70284 main.go:141] libmachine: (no-preload-472858) Waiting to get IP...
	I0401 19:31:42.554422   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:42.554810   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:42.554907   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:42.554806   72041 retry.go:31] will retry after 237.823736ms: waiting for machine to come up
	I0401 19:31:42.794546   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:42.795159   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:42.795205   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:42.795117   72041 retry.go:31] will retry after 326.387674ms: waiting for machine to come up
	I0401 19:31:43.123632   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:43.124306   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:43.124342   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:43.124244   72041 retry.go:31] will retry after 455.262949ms: waiting for machine to come up
	I0401 19:31:43.580752   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:43.581420   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:43.581440   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:43.581375   72041 retry.go:31] will retry after 520.307316ms: waiting for machine to come up
	I0401 19:31:44.103924   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:44.104407   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:44.104431   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:44.104361   72041 retry.go:31] will retry after 491.638031ms: waiting for machine to come up
	I0401 19:31:44.598440   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:44.598990   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:44.599015   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:44.598901   72041 retry.go:31] will retry after 652.234963ms: waiting for machine to come up
	I0401 19:31:45.252362   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:45.252901   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:45.252933   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:45.252853   72041 retry.go:31] will retry after 1.047335678s: waiting for machine to come up
	I0401 19:31:46.301894   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:46.302324   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:46.302349   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:46.302281   72041 retry.go:31] will retry after 1.303326069s: waiting for machine to come up
	I0401 19:31:44.101042   70962 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:46.099803   70962 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:31:46.099828   70962 pod_ready.go:81] duration metric: took 10.008882274s for pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:46.099843   70962 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:42.974220   71168 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 19:31:42.974307   71168 ssh_runner.go:195] Run: which lz4
	I0401 19:31:42.979179   71168 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0401 19:31:42.984204   71168 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:31:42.984236   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0401 19:31:45.108131   71168 crio.go:462] duration metric: took 2.128988098s to copy over tarball
	I0401 19:31:45.108232   71168 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 19:31:47.328534   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:49.827306   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:47.606907   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:47.607392   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:47.607419   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:47.607356   72041 retry.go:31] will retry after 1.729010443s: waiting for machine to come up
	I0401 19:31:49.338200   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:49.338722   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:49.338751   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:49.338667   72041 retry.go:31] will retry after 2.069036941s: waiting for machine to come up
	I0401 19:31:51.409458   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:51.409945   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:51.409976   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:51.409894   72041 retry.go:31] will retry after 2.405834741s: waiting for machine to come up
	I0401 19:31:48.108234   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:50.607720   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:48.581824   71168 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.473552916s)
	I0401 19:31:48.581871   71168 crio.go:469] duration metric: took 3.473700991s to extract the tarball
	I0401 19:31:48.581881   71168 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 19:31:48.630609   71168 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:48.673027   71168 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 19:31:48.673048   71168 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 19:31:48.673085   71168 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:31:48.673129   71168 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:48.673155   71168 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:48.673190   71168 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:48.673133   71168 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.673273   71168 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0401 19:31:48.673143   71168 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0401 19:31:48.673336   71168 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:48.675068   71168 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:31:48.675073   71168 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:48.675068   71168 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:48.675093   71168 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0401 19:31:48.675072   71168 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0401 19:31:48.675073   71168 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:48.675115   71168 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:48.675096   71168 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.827947   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.846025   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:48.848769   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:48.858366   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0401 19:31:48.858613   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0401 19:31:48.859241   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:48.862047   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:48.912299   71168 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0401 19:31:48.912346   71168 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.912399   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.030117   71168 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0401 19:31:49.030357   71168 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:49.030122   71168 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0401 19:31:49.030433   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.030460   71168 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:49.030526   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.062211   71168 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0401 19:31:49.062327   71168 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0401 19:31:49.062234   71168 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0401 19:31:49.062415   71168 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0401 19:31:49.062396   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.062461   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.078249   71168 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0401 19:31:49.078308   71168 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:49.078323   71168 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0401 19:31:49.078358   71168 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:49.078379   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:49.078398   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.078426   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:49.078440   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:49.078362   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.078466   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 19:31:49.078494   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 19:31:49.225060   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:49.225137   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0401 19:31:49.225160   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0401 19:31:49.225199   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0401 19:31:49.225250   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0401 19:31:49.225252   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:49.225326   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0401 19:31:49.280782   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0401 19:31:49.281709   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0401 19:31:49.299218   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:31:49.465497   71168 cache_images.go:92] duration metric: took 792.432136ms to LoadCachedImages
	W0401 19:31:49.465595   71168 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0401 19:31:49.465613   71168 kubeadm.go:928] updating node { 192.168.50.106 8443 v1.20.0 crio true true} ...
	I0401 19:31:49.465768   71168 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-163608 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:31:49.465862   71168 ssh_runner.go:195] Run: crio config
	I0401 19:31:49.529730   71168 cni.go:84] Creating CNI manager for ""
	I0401 19:31:49.529757   71168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:49.529771   71168 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:31:49.529799   71168 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.106 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-163608 NodeName:old-k8s-version-163608 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0401 19:31:49.529969   71168 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-163608"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:31:49.530037   71168 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0401 19:31:49.542642   71168 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:31:49.542724   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:31:49.557001   71168 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0401 19:31:49.579568   71168 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:31:49.599692   71168 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0401 19:31:49.619780   71168 ssh_runner.go:195] Run: grep 192.168.50.106	control-plane.minikube.internal$ /etc/hosts
	I0401 19:31:49.625597   71168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:49.643862   71168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:49.791391   71168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:31:49.814470   71168 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608 for IP: 192.168.50.106
	I0401 19:31:49.814497   71168 certs.go:194] generating shared ca certs ...
	I0401 19:31:49.814516   71168 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:49.814680   71168 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:31:49.814736   71168 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:31:49.814745   71168 certs.go:256] generating profile certs ...
	I0401 19:31:49.814852   71168 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/client.key
	I0401 19:31:49.814916   71168 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.key.f2de0982
	I0401 19:31:49.814964   71168 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.key
	I0401 19:31:49.815119   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:31:49.815178   71168 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:31:49.815195   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:31:49.815224   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:31:49.815266   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:31:49.815299   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:31:49.815362   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:49.816196   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:31:49.866842   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:31:49.913788   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:31:49.953223   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:31:50.004313   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 19:31:50.046972   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:31:50.086990   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:31:50.134907   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 19:31:50.163395   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:31:50.191901   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:31:50.221196   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:31:50.253024   71168 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:31:50.275781   71168 ssh_runner.go:195] Run: openssl version
	I0401 19:31:50.282795   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:31:50.296952   71168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:31:50.303868   71168 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:31:50.303950   71168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:31:50.312249   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:31:50.328985   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:31:50.345917   71168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:50.352041   71168 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:50.352103   71168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:50.358752   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:31:50.371702   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:31:50.384633   71168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:31:50.391229   71168 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:31:50.391277   71168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:31:50.397980   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:31:50.412674   71168 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:31:50.418084   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:31:50.425102   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:31:50.431949   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:31:50.438665   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:31:50.446633   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:31:50.454688   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 19:31:50.462805   71168 kubeadm.go:391] StartCluster: {Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:31:50.462922   71168 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:31:50.462956   71168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:50.505702   71168 cri.go:89] found id: ""
	I0401 19:31:50.505788   71168 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:31:50.517916   71168 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:31:50.517934   71168 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:31:50.517940   71168 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:31:50.517995   71168 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:31:50.529459   71168 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:31:50.530408   71168 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-163608" does not appear in /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:31:50.531055   71168 kubeconfig.go:62] /home/jenkins/minikube-integration/18233-10493/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-163608" cluster setting kubeconfig missing "old-k8s-version-163608" context setting]
	I0401 19:31:50.532369   71168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:50.534578   71168 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:31:50.546275   71168 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.106
	I0401 19:31:50.546309   71168 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:31:50.546328   71168 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:31:50.546371   71168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:50.588826   71168 cri.go:89] found id: ""
	I0401 19:31:50.588881   71168 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:31:50.610933   71168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:31:50.622201   71168 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:31:50.622221   71168 kubeadm.go:156] found existing configuration files:
	
	I0401 19:31:50.622266   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:31:50.634006   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:31:50.634071   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:31:50.647891   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:31:50.662548   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:31:50.662596   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:31:50.674627   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:31:50.686739   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:31:50.686825   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:31:50.700400   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:31:50.712952   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:31:50.713014   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:31:50.725616   71168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:31:50.739130   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:50.874552   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:51.568640   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:51.850288   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:52.009607   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:52.122887   71168 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:31:52.122962   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:52.623084   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:51.827968   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:54.325686   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:56.325892   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:53.817748   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:53.818158   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:53.818184   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:53.818122   72041 retry.go:31] will retry after 2.747390243s: waiting for machine to come up
	I0401 19:31:56.567288   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:56.567711   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:56.567742   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:56.567657   72041 retry.go:31] will retry after 3.904473051s: waiting for machine to come up
	I0401 19:31:53.107786   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:55.108974   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:53.123783   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:53.623248   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:54.124004   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:54.623873   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:55.123458   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:55.623923   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:56.123441   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:56.623192   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:57.123012   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:57.624010   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:58.325934   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:00.825343   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:00.476692   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.477192   70284 main.go:141] libmachine: (no-preload-472858) Found IP for machine: 192.168.72.119
	I0401 19:32:00.477217   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has current primary IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.477223   70284 main.go:141] libmachine: (no-preload-472858) Reserving static IP address...
	I0401 19:32:00.477672   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "no-preload-472858", mac: "52:54:00:0a:2e:03", ip: "192.168.72.119"} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.477708   70284 main.go:141] libmachine: (no-preload-472858) DBG | skip adding static IP to network mk-no-preload-472858 - found existing host DHCP lease matching {name: "no-preload-472858", mac: "52:54:00:0a:2e:03", ip: "192.168.72.119"}
	I0401 19:32:00.477726   70284 main.go:141] libmachine: (no-preload-472858) Reserved static IP address: 192.168.72.119
	I0401 19:32:00.477742   70284 main.go:141] libmachine: (no-preload-472858) Waiting for SSH to be available...
	I0401 19:32:00.477770   70284 main.go:141] libmachine: (no-preload-472858) DBG | Getting to WaitForSSH function...
	I0401 19:32:00.479949   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.480306   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.480334   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.480475   70284 main.go:141] libmachine: (no-preload-472858) DBG | Using SSH client type: external
	I0401 19:32:00.480508   70284 main.go:141] libmachine: (no-preload-472858) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa (-rw-------)
	I0401 19:32:00.480538   70284 main.go:141] libmachine: (no-preload-472858) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:32:00.480554   70284 main.go:141] libmachine: (no-preload-472858) DBG | About to run SSH command:
	I0401 19:32:00.480566   70284 main.go:141] libmachine: (no-preload-472858) DBG | exit 0
	I0401 19:32:00.610108   70284 main.go:141] libmachine: (no-preload-472858) DBG | SSH cmd err, output: <nil>: 
	I0401 19:32:00.610458   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetConfigRaw
	I0401 19:32:00.611059   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetIP
	I0401 19:32:00.613496   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.613872   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.613906   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.614179   70284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/config.json ...
	I0401 19:32:00.614363   70284 machine.go:94] provisionDockerMachine start ...
	I0401 19:32:00.614382   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:00.614593   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:00.617019   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.617404   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.617430   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.617585   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:00.617780   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.617953   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.618098   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:00.618260   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:00.618451   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:00.618462   70284 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:32:00.730438   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:32:00.730473   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:32:00.730725   70284 buildroot.go:166] provisioning hostname "no-preload-472858"
	I0401 19:32:00.730754   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:32:00.730994   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:00.733932   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.734274   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.734308   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.734419   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:00.734591   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.734752   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.734918   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:00.735092   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:00.735296   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:00.735313   70284 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-472858 && echo "no-preload-472858" | sudo tee /etc/hostname
	I0401 19:32:00.865664   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-472858
	
	I0401 19:32:00.865702   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:00.868247   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.868619   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.868649   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.868845   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:00.869037   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.869244   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.869420   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:00.869671   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:00.869840   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:00.869859   70284 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-472858' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-472858/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-472858' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:32:00.991430   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:32:00.991460   70284 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:32:00.991484   70284 buildroot.go:174] setting up certificates
	I0401 19:32:00.991493   70284 provision.go:84] configureAuth start
	I0401 19:32:00.991504   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:32:00.991748   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetIP
	I0401 19:32:00.994239   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.994566   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.994596   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.994722   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:00.996735   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.997064   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.997090   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.997212   70284 provision.go:143] copyHostCerts
	I0401 19:32:00.997265   70284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:32:00.997281   70284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:32:00.997346   70284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:32:00.997493   70284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:32:00.997507   70284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:32:00.997533   70284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:32:00.997619   70284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:32:00.997629   70284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:32:00.997667   70284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:32:00.997733   70284 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.no-preload-472858 san=[127.0.0.1 192.168.72.119 localhost minikube no-preload-472858]
	I0401 19:32:01.212397   70284 provision.go:177] copyRemoteCerts
	I0401 19:32:01.212453   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:32:01.212473   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.214810   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.215170   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.215198   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.215398   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.215603   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.215761   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.215903   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:32:01.303113   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 19:32:01.331807   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 19:32:01.358429   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:32:01.384521   70284 provision.go:87] duration metric: took 393.005717ms to configureAuth
	I0401 19:32:01.384559   70284 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:32:01.384748   70284 config.go:182] Loaded profile config "no-preload-472858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0401 19:32:01.384862   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.387446   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.387828   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.387866   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.387966   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.388168   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.388356   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.388509   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.388663   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:01.388847   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:01.388867   70284 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:32:01.692586   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:32:01.692615   70284 machine.go:97] duration metric: took 1.078237975s to provisionDockerMachine
	I0401 19:32:01.692628   70284 start.go:293] postStartSetup for "no-preload-472858" (driver="kvm2")
	I0401 19:32:01.692644   70284 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:32:01.692668   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.692988   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:32:01.693012   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.696033   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.696405   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.696450   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.696603   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.696763   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.696901   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.697089   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:32:01.786626   70284 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:32:01.791703   70284 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:32:01.791726   70284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:32:01.791802   70284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:32:01.791901   70284 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:32:01.791991   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:32:01.803733   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:32:01.831768   70284 start.go:296] duration metric: took 139.126077ms for postStartSetup
	I0401 19:32:01.831804   70284 fix.go:56] duration metric: took 20.628199635s for fixHost
	I0401 19:32:01.831823   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.834218   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.834548   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.834574   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.834725   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.834901   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.835066   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.835188   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.835327   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:01.835544   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:01.835558   70284 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
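Note: the %!s(MISSING).%!N(MISSING) above is the same logger artifact; the command actually sent is presumably date +%s.%N (epoch seconds plus nanoseconds), whose result is the 1711999921.892647753 echoed back at 19:32:01.947 below:

    # reconstructed guest clock probe
    date +%s.%N
    # -> 1711999921.892647753 on the guest at this point in the log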
	I0401 19:31:57.607923   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:59.608857   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:02.106942   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:58.123200   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:58.624028   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:59.123026   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:59.623993   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:00.123039   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:00.623632   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:01.123204   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:01.623162   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:02.123264   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:02.623788   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:01.947198   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999921.892647753
	
	I0401 19:32:01.947267   70284 fix.go:216] guest clock: 1711999921.892647753
	I0401 19:32:01.947279   70284 fix.go:229] Guest: 2024-04-01 19:32:01.892647753 +0000 UTC Remote: 2024-04-01 19:32:01.831808507 +0000 UTC m=+359.938807685 (delta=60.839246ms)
	I0401 19:32:01.947305   70284 fix.go:200] guest clock delta is within tolerance: 60.839246ms
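Note: the 60.839246ms delta is simply the guest clock reading minus the host-side timestamp for the same instant; checking the logged numbers (bc used purely for illustration):

    echo "1711999921.892647753 - 1711999921.831808507" | bc
    # .060839246 s ~= 60.839246 ms, matching the delta above and well inside minikube's tolerance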
	I0401 19:32:01.947317   70284 start.go:83] releasing machines lock for "no-preload-472858", held for 20.743748352s
	I0401 19:32:01.947347   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.947621   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetIP
	I0401 19:32:01.950387   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.950719   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.950750   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.950940   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.951438   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.951631   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.951681   70284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:32:01.951737   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.951854   70284 ssh_runner.go:195] Run: cat /version.json
	I0401 19:32:01.951881   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.954468   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.954603   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.954780   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.954815   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.954932   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.954960   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.954984   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.955193   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.955230   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.955341   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.955388   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.955510   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.955501   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:32:01.955670   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:32:02.035332   70284 ssh_runner.go:195] Run: systemctl --version
	I0401 19:32:02.061178   70284 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:32:02.220309   70284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:32:02.227811   70284 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:32:02.227885   70284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:32:02.247605   70284 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
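Note: the find/mv at 19:32:02.227 (again with a %!p(MISSING) logger artifact standing in for the -printf format "%p, ") renames any pre-existing bridge/podman CNI configs out of the way so that only minikube's own CNI config will be active; unwrapped, it is roughly:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;
    # here it matched and disabled /etc/cni/net.d/87-podman-bridge.conflist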
	I0401 19:32:02.247634   70284 start.go:494] detecting cgroup driver to use...
	I0401 19:32:02.247690   70284 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:32:02.265463   70284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:32:02.280175   70284 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:32:02.280246   70284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:32:02.295003   70284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:32:02.315072   70284 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:32:02.449108   70284 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:32:02.627772   70284 docker.go:233] disabling docker service ...
	I0401 19:32:02.627850   70284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:32:02.642924   70284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:32:02.657038   70284 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:32:02.787085   70284 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:32:02.918355   70284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:32:02.934828   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:32:02.955495   70284 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:32:02.955548   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:02.966690   70284 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:32:02.966754   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:02.977812   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:02.989329   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:03.000727   70284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:32:03.012341   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:03.023305   70284 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:03.044213   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
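Note: taken together, the sed edits between 19:32:02.955 and 19:32:03.044 configure CRI-O for this profile. The expected end state of /etc/crio/crio.conf.d/02-crio.conf (reconstructed from the sed expressions above; the file itself is not dumped in the log) can be checked with:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside the default_sysctls list)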
	I0401 19:32:03.055614   70284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:32:03.065880   70284 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:32:03.065927   70284 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:32:03.080514   70284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
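Note: the sysctl failure at 19:32:03.065 is expected rather than fatal: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, which is exactly what the following modprobe does. The same sequence in isolation:

    sudo sysctl net.bridge.bridge-nf-call-iptables      # fails while br_netfilter is unloaded
    sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward" # IP forwarding needed by kube-proxy/CNI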
	I0401 19:32:03.090798   70284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:32:03.224199   70284 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:32:03.389414   70284 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:32:03.389482   70284 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:32:03.395493   70284 start.go:562] Will wait 60s for crictl version
	I0401 19:32:03.395539   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.399739   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:32:03.441020   70284 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:32:03.441114   70284 ssh_runner.go:195] Run: crio --version
	I0401 19:32:03.474572   70284 ssh_runner.go:195] Run: crio --version
	I0401 19:32:03.511681   70284 out.go:177] * Preparing Kubernetes v1.30.0-rc.0 on CRI-O 1.29.1 ...
	I0401 19:32:02.825628   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:04.825973   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:03.513067   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetIP
	I0401 19:32:03.515901   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:03.516281   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:03.516315   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:03.516523   70284 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0401 19:32:03.521197   70284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
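Note: the /etc/hosts update at 19:32:03.521 deliberately avoids editing in place: it rebuilds the file without any old host.minikube.internal entry, appends the current one, and then copies the result back over /etc/hosts (cp rewrites the existing file rather than replacing it, so the edit stays idempotent and safe even when /etc/hosts is a bind mount). Unwrapped:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo "192.168.72.1	host.minikube.internal"
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts

The same pattern is used again at 19:32:20.383 for control-plane.minikube.internal.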
	I0401 19:32:03.536333   70284 kubeadm.go:877] updating cluster {Name:no-preload-472858 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0-rc.0 ClusterName:no-preload-472858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:32:03.536459   70284 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0401 19:32:03.536507   70284 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:32:03.582858   70284 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.0". assuming images are not preloaded.
	I0401 19:32:03.582887   70284 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.0 registry.k8s.io/kube-controller-manager:v1.30.0-rc.0 registry.k8s.io/kube-scheduler:v1.30.0-rc.0 registry.k8s.io/kube-proxy:v1.30.0-rc.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 19:32:03.582970   70284 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:03.583026   70284 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:03.583032   70284 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.583071   70284 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.583161   70284 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.582997   70284 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:03.583238   70284 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0401 19:32:03.583388   70284 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:03.584618   70284 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.584626   70284 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:03.584630   70284 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:03.584619   70284 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:03.584640   70284 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0401 19:32:03.584626   70284 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:03.584701   70284 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.584856   70284 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.730086   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.752217   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.765621   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.766526   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:03.770748   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0401 19:32:03.777614   70284 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0401 19:32:03.777672   70284 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.777699   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.840814   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:03.852416   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:03.869889   70284 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" does not exist at hash "e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3" in container runtime
	I0401 19:32:03.869929   70284 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.869979   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.874654   70284 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" does not exist at hash "ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a" in container runtime
	I0401 19:32:03.874693   70284 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.874737   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.899207   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:03.906139   70284 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" does not exist at hash "fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5" in container runtime
	I0401 19:32:03.906182   70284 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:03.906227   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.996916   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.996987   70284 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.0" does not exist at hash "33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652" in container runtime
	I0401 19:32:03.997022   70284 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0401 19:32:03.997045   70284 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:03.997053   70284 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:03.997054   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.997089   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.997128   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.997142   70284 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0401 19:32:03.997090   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.997164   70284 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:03.997194   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.997211   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:04.090272   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:04.090548   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0401 19:32:04.090639   70284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0401 19:32:04.102041   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:04.102130   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0
	I0401 19:32:04.102168   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0
	I0401 19:32:04.102226   70284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0401 19:32:04.102241   70284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0401 19:32:04.102278   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:04.108100   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0
	I0401 19:32:04.108192   70284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0401 19:32:04.182707   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0401 19:32:04.182747   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0401 19:32:04.182759   70284 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0401 19:32:04.182815   70284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0401 19:32:04.182820   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0401 19:32:04.182883   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0
	I0401 19:32:04.182988   70284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0401 19:32:04.186135   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0 (exists)
	I0401 19:32:04.186175   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0 (exists)
	I0401 19:32:04.186221   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0 (exists)
	I0401 19:32:04.186242   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0401 19:32:04.186324   70284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
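Note: the stat probes at 19:32:04.090-04.186 carry the same printf artifact; the format string is presumably "%s %y" (size and modification time). A successful stat of the image tarball on the VM is what produces the "copy: skipping ... (exists)" lines that follow, so only images missing from /var/lib/minikube/images would actually be re-copied:

    # reconstructed probe; a non-error result means the cached tarball is already on the guest
    stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1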
	I0401 19:32:06.352362   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.169442796s)
	I0401 19:32:06.352398   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0401 19:32:06.352419   70284 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0401 19:32:06.352416   70284 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: (2.16957379s)
	I0401 19:32:06.352443   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0401 19:32:06.352465   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0401 19:32:06.352465   70284 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0: (2.16945688s)
	I0401 19:32:06.352479   70284 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.166139431s)
	I0401 19:32:06.352490   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0401 19:32:06.352491   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0 (exists)
	I0401 19:32:04.109989   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:06.294038   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:03.123452   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:03.623784   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:04.123649   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:04.623076   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:05.123822   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:05.623487   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:06.123635   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:06.623689   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:07.123919   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:07.623237   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:06.826244   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:09.326937   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:09.261547   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0: (2.909056315s)
	I0401 19:32:09.261572   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0 from cache
	I0401 19:32:09.261600   70284 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0401 19:32:09.261668   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0401 19:32:11.739636   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0: (2.477945807s)
	I0401 19:32:11.739667   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0 from cache
	I0401 19:32:11.739702   70284 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0401 19:32:11.739761   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0401 19:32:08.609901   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:11.114752   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:08.123689   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:08.623160   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:09.124002   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:09.623090   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:10.123049   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:10.623111   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:11.123042   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:11.623980   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:12.123074   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:12.623530   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:11.826409   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:13.828437   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:16.326097   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:13.195232   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0: (1.455440816s)
	I0401 19:32:13.195267   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0 from cache
	I0401 19:32:13.195299   70284 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0401 19:32:13.195350   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0401 19:32:13.607042   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:16.107993   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:13.123428   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:13.623899   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:14.123324   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:14.623889   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:15.123496   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:15.623779   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:16.124012   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:16.623620   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:17.123867   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:17.623014   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:18.326127   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:20.326575   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:17.202247   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.006869591s)
	I0401 19:32:17.202284   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0401 19:32:17.202315   70284 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0401 19:32:17.202364   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0401 19:32:17.962735   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0401 19:32:17.962785   70284 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0401 19:32:17.962850   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0401 19:32:20.235136   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0: (2.272262595s)
	I0401 19:32:20.235161   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0 from cache
	I0401 19:32:20.235193   70284 cache_images.go:123] Successfully loaded all cached images
	I0401 19:32:20.235197   70284 cache_images.go:92] duration metric: took 16.652290938s to LoadCachedImages
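Note: as a rough cross-check, the 16.652s LoadCachedImages total is essentially the sum of the seven sequential podman load times plus the initial inspect/rmi round at 19:32:03.7-04.2 (the storage-provisioner load is not timed explicitly; ~0.76s is inferred from the 19:32:17.202 -> 17.962 gap):

    echo "2.169 + 2.909 + 2.478 + 1.455 + 4.007 + 0.760 + 2.272" | bc
    # 16.050, leaving roughly 0.6 s for the image inspection and removal steps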
	I0401 19:32:20.235205   70284 kubeadm.go:928] updating node { 192.168.72.119 8443 v1.30.0-rc.0 crio true true} ...
	I0401 19:32:20.235332   70284 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-472858 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-472858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
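Note: the doubled ExecStart in the drop-in above is intentional: in a systemd drop-in, an empty ExecStart= first clears the value inherited from the base kubelet.service before the minikube-specific command line is set. After the unit and the 10-kubeadm.conf drop-in are scp'd and systemd is reloaded a few lines below, the effective unit can be inspected with:

    systemctl cat kubelet.service
    # shows /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in,
    # whose empty ExecStart= resets the inherited command before the real one is applied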
	I0401 19:32:20.235402   70284 ssh_runner.go:195] Run: crio config
	I0401 19:32:20.296015   70284 cni.go:84] Creating CNI manager for ""
	I0401 19:32:20.296039   70284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:32:20.296050   70284 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:32:20.296074   70284 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.119 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-472858 NodeName:no-preload-472858 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:32:20.296217   70284 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-472858"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:32:20.296275   70284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.0
	I0401 19:32:20.307937   70284 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:32:20.308009   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:32:20.318571   70284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0401 19:32:20.339284   70284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0401 19:32:20.358601   70284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
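Note: in the kubeadm config dump above, the evictionHard values rendered as "0%!"(MISSING) are once more printf artifacts of the logger; the generated YAML presumably contains plain "0%" thresholds. The file just written (2166 bytes) can be inspected on the guest to confirm:

    cat /var/tmp/minikube/kubeadm.yaml.new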
	I0401 19:32:20.379394   70284 ssh_runner.go:195] Run: grep 192.168.72.119	control-plane.minikube.internal$ /etc/hosts
	I0401 19:32:20.383948   70284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:32:20.397559   70284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:32:20.549147   70284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:32:20.568027   70284 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858 for IP: 192.168.72.119
	I0401 19:32:20.568051   70284 certs.go:194] generating shared ca certs ...
	I0401 19:32:20.568070   70284 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:32:20.568273   70284 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:32:20.568337   70284 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:32:20.568352   70284 certs.go:256] generating profile certs ...
	I0401 19:32:20.568453   70284 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/client.key
	I0401 19:32:20.568534   70284 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/apiserver.key.bfc8ff8f
	I0401 19:32:20.568586   70284 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/proxy-client.key
	I0401 19:32:20.568691   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:32:20.568718   70284 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:32:20.568728   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:32:20.568747   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:32:20.568773   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:32:20.568795   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:32:20.568830   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:32:20.569519   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:32:20.605218   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:32:20.650321   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:32:20.676884   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:32:20.705378   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 19:32:20.733068   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:32:20.767387   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:32:20.793543   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:32:20.820843   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:32:20.848364   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:32:20.877551   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:32:20.904650   70284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:32:20.922876   70284 ssh_runner.go:195] Run: openssl version
	I0401 19:32:20.929441   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:32:20.942496   70284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:32:20.948011   70284 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:32:20.948080   70284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:32:20.954320   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:32:20.968060   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:32:20.981591   70284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:32:20.986660   70284 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:32:20.986706   70284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:32:20.993394   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:32:21.006530   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:32:21.020014   70284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:32:21.025507   70284 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:32:21.025560   70284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:32:21.032433   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
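Note: the three-step pattern repeated above (link the cert from /usr/share/ca-certificates into /etc/ssl/certs, compute its hash, then add an 8-hex-digit symlink) follows OpenSSL's CA directory convention: openssl x509 -hash prints the subject hash, and a <hash>.0 symlink in /etc/ssl/certs is what lets the library find the CA at verification time. For the minikube CA, for example:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # -> b5213941, hence the /etc/ssl/certs/b5213941.0 symlink created at 19:32:20.954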
	I0401 19:32:21.047002   70284 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:32:21.052551   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:32:21.059875   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:32:21.067243   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:32:21.074304   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:32:21.080978   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:32:21.088051   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
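	
	The block above runs `openssl x509 -checkend 86400` against each control-plane certificate before reuse; a non-zero exit means the cert expires within the next 24 hours and must be regenerated. A minimal local Go sketch of the same check follows (the checkCertValid helper is ours, not minikube's, and it runs locally rather than over the test's SSH runner):
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// checkCertValid mirrors the check in the log: `openssl x509 -checkend 86400`
	// exits non-zero if the certificate expires within the next 86400 seconds.
	func checkCertValid(path string) error {
		cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400")
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("certificate %s expires within 24h (or is unreadable): %w", path, err)
		}
		return nil
	}
	
	func main() {
		// Paths taken from the log above; adjust for a local run.
		certs := []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, c := range certs {
			if err := checkCertValid(c); err != nil {
				fmt.Println(err)
			}
		}
	}
	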
	I0401 19:32:21.095219   70284 kubeadm.go:391] StartCluster: {Name:no-preload-472858 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-472858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:32:21.095325   70284 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:32:21.095403   70284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:32:21.144103   70284 cri.go:89] found id: ""
	I0401 19:32:21.144187   70284 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:32:21.157222   70284 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:32:21.157241   70284 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:32:21.157246   70284 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:32:21.157290   70284 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:32:21.169027   70284 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:32:21.170123   70284 kubeconfig.go:125] found "no-preload-472858" server: "https://192.168.72.119:8443"
	I0401 19:32:21.172523   70284 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:32:21.183801   70284 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.119
	I0401 19:32:21.183838   70284 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:32:21.183847   70284 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:32:21.183892   70284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:32:21.229279   70284 cri.go:89] found id: ""
	I0401 19:32:21.229357   70284 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:32:21.249719   70284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:32:21.261894   70284 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:32:21.261929   70284 kubeadm.go:156] found existing configuration files:
	
	I0401 19:32:21.261984   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:32:21.273961   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:32:21.274026   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:32:21.286746   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:32:21.297920   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:32:21.297986   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:32:21.308793   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:32:21.319612   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:32:21.319658   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:32:21.332730   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:32:21.344752   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:32:21.344810   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:32:21.355821   70284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:32:21.366649   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:21.482208   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:18.607685   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:20.607824   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:18.123795   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:18.623529   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:19.123446   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:19.623223   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:20.123133   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:20.623058   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:21.123302   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:21.623115   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:22.123810   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:22.623878   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:22.826056   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:24.826357   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:22.312148   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:22.533156   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:22.620390   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
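	
	At this point the restart path has removed the stale /etc/kubernetes/*.conf files and re-run the kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) from the same kubeadm.yaml. A hedged Go sketch of that sequence, run locally instead of through minikube's SSH runner (the runPhase helper is ours; paths and the binaries PATH come from the log):
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// runPhase invokes one `kubeadm init phase` step against the generated config,
	// matching the commands visible in the log above.
	func runPhase(phase string) error {
		cmd := exec.Command("/bin/bash", "-c",
			`sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase `+
				phase+` --config /var/tmp/minikube/kubeadm.yaml`)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("phase %q failed: %v\n%s", phase, err, out)
		}
		return nil
	}
	
	func main() {
		// Same order as the log: certs, kubeconfigs, kubelet, static pods, etcd.
		for _, phase := range []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"} {
			if err := runPhase(phase); err != nil {
				fmt.Println(err)
				return
			}
		}
	}
	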
	I0401 19:32:22.704948   70284 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:32:22.705039   70284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.205114   70284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.706000   70284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.725209   70284 api_server.go:72] duration metric: took 1.020261742s to wait for apiserver process to appear ...
	I0401 19:32:23.725243   70284 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:32:23.725264   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:23.725749   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": dial tcp 192.168.72.119:8443: connect: connection refused
	I0401 19:32:24.226383   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:23.107450   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:25.109899   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:23.123507   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.623244   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:24.123444   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:24.623346   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:25.123834   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:25.623814   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:26.124028   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:26.623428   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:27.123592   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:27.623451   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:27.327961   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:29.826272   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:29.226831   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:29.226876   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:27.607575   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:29.608427   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:32.106668   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:28.123454   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:28.623502   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:29.123265   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:29.623449   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:30.123525   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:30.623634   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:31.123972   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:31.623023   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:32.123346   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:32.623839   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:32.325638   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:34.325777   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:36.326510   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:34.227668   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:34.227723   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:34.606929   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:36.607515   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:33.123673   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:33.623088   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:34.123230   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:34.623967   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:35.123420   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:35.623499   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:36.123152   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:36.623963   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:37.123682   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:37.623536   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:38.829585   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:41.325607   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:39.228117   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:39.228164   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:39.107473   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:41.607043   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:38.123238   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:38.623831   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:39.123180   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:39.623801   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:40.123478   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:40.623651   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:41.123687   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:41.624016   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:42.123891   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:42.623493   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:43.326457   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:45.827310   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:44.228934   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:44.228982   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:44.259601   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": read tcp 192.168.72.1:37026->192.168.72.119:8443: read: connection reset by peer
	I0401 19:32:44.726186   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:44.726759   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": dial tcp 192.168.72.119:8443: connect: connection refused
	I0401 19:32:45.226347   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:43.607936   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:46.106775   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:43.123504   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:43.623527   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:44.124016   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:44.623931   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:45.123188   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:45.623649   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:46.123570   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:46.623179   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:47.123273   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:47.623842   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:48.325252   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:50.327365   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:50.226859   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:50.226907   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:48.109152   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:50.607327   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:48.123759   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:48.623092   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:49.123174   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:49.623986   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:50.123301   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:50.623694   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:51.123466   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:51.623618   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:52.123073   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:32:52.123172   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:32:52.164635   71168 cri.go:89] found id: ""
	I0401 19:32:52.164656   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.164663   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:32:52.164669   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:32:52.164738   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:32:52.202531   71168 cri.go:89] found id: ""
	I0401 19:32:52.202560   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.202572   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:32:52.202580   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:32:52.202653   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:32:52.247667   71168 cri.go:89] found id: ""
	I0401 19:32:52.247693   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.247703   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:32:52.247714   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:32:52.247774   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:32:52.289029   71168 cri.go:89] found id: ""
	I0401 19:32:52.289054   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.289062   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:32:52.289068   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:32:52.289114   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:32:52.326820   71168 cri.go:89] found id: ""
	I0401 19:32:52.326864   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.326875   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:32:52.326882   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:32:52.326944   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:32:52.362793   71168 cri.go:89] found id: ""
	I0401 19:32:52.362827   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.362838   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:32:52.362845   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:32:52.362950   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:32:52.400174   71168 cri.go:89] found id: ""
	I0401 19:32:52.400204   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.400215   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:32:52.400222   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:32:52.400282   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:32:52.436027   71168 cri.go:89] found id: ""
	I0401 19:32:52.436056   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.436066   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:32:52.436085   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:32:52.436099   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:32:52.477246   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:32:52.477272   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:32:52.529215   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:32:52.529247   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:32:52.544695   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:32:52.544724   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:32:52.677816   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:32:52.677849   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:32:52.677877   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
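	
	The cycle above repeats for every control-plane component: list containers by name with `crictl ps -a --quiet --name=...`, and when the result is empty fall back to gathering kubelet, dmesg, describe-nodes and CRI-O logs. A minimal sketch of that lookup, assuming a host with crictl on PATH (the listContainersByName helper is ours, not minikube's cri.go API):
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// listContainersByName returns the container IDs crictl reports for a given
	// name filter; an empty slice corresponds to the "No container was found
	// matching ..." warnings in the log.
	func listContainersByName(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(strings.TrimSpace(string(out))), nil
	}
	
	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainersByName(component)
			if err != nil {
				fmt.Printf("crictl failed for %q: %v\n", component, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", component)
				continue
			}
			fmt.Printf("%s: %v\n", component, ids)
		}
	}
	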
	I0401 19:32:52.825288   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:54.826043   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:55.228105   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:55.228139   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:53.106774   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:55.107668   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:55.241224   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:55.256975   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:32:55.257045   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:32:55.298280   71168 cri.go:89] found id: ""
	I0401 19:32:55.298307   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.298319   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:32:55.298326   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:32:55.298397   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:32:55.337707   71168 cri.go:89] found id: ""
	I0401 19:32:55.337732   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.337739   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:32:55.337745   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:32:55.337791   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:32:55.381455   71168 cri.go:89] found id: ""
	I0401 19:32:55.381479   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.381490   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:32:55.381496   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:32:55.381557   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:32:55.420715   71168 cri.go:89] found id: ""
	I0401 19:32:55.420739   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.420749   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:32:55.420756   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:32:55.420820   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:32:55.459546   71168 cri.go:89] found id: ""
	I0401 19:32:55.459575   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.459583   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:32:55.459588   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:32:55.459634   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:32:55.504240   71168 cri.go:89] found id: ""
	I0401 19:32:55.504267   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.504277   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:32:55.504285   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:32:55.504368   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:32:55.539399   71168 cri.go:89] found id: ""
	I0401 19:32:55.539426   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.539437   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:32:55.539443   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:32:55.539509   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:32:55.583823   71168 cri.go:89] found id: ""
	I0401 19:32:55.583861   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.583872   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:32:55.583881   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:32:55.583895   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:32:55.645489   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:32:55.645523   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:32:55.712883   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:32:55.712920   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:32:55.734890   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:32:55.734923   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:32:55.853068   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:32:55.853089   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:32:55.853102   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:32:57.325965   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:59.827753   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:00.228533   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:33:00.228582   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:57.607203   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:59.610732   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:02.108676   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:58.435925   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:58.450910   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:32:58.450980   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:32:58.487470   71168 cri.go:89] found id: ""
	I0401 19:32:58.487495   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.487506   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:32:58.487514   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:32:58.487562   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:32:58.529513   71168 cri.go:89] found id: ""
	I0401 19:32:58.529534   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.529543   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:32:58.529547   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:32:58.529592   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:32:58.574170   71168 cri.go:89] found id: ""
	I0401 19:32:58.574197   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.574205   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:32:58.574211   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:32:58.574258   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:32:58.615379   71168 cri.go:89] found id: ""
	I0401 19:32:58.615405   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.615414   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:32:58.615419   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:32:58.615468   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:32:58.655496   71168 cri.go:89] found id: ""
	I0401 19:32:58.655523   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.655534   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:32:58.655542   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:32:58.655593   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:32:58.697199   71168 cri.go:89] found id: ""
	I0401 19:32:58.697229   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.697238   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:32:58.697246   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:32:58.697312   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:32:58.735618   71168 cri.go:89] found id: ""
	I0401 19:32:58.735643   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.735651   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:32:58.735656   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:32:58.735701   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:32:58.780583   71168 cri.go:89] found id: ""
	I0401 19:32:58.780613   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.780624   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:32:58.780635   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:32:58.780649   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:32:58.829717   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:32:58.829743   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:32:58.844836   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:32:58.844866   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:32:58.923138   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:32:58.923157   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:32:58.923172   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:32:58.993680   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:32:58.993713   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:01.538920   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:01.556943   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:01.557017   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:01.608397   71168 cri.go:89] found id: ""
	I0401 19:33:01.608417   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.608425   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:01.608430   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:01.608490   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:01.666573   71168 cri.go:89] found id: ""
	I0401 19:33:01.666599   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.666609   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:01.666615   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:01.666674   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:01.726308   71168 cri.go:89] found id: ""
	I0401 19:33:01.726331   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.726341   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:01.726347   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:01.726412   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:01.773095   71168 cri.go:89] found id: ""
	I0401 19:33:01.773118   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.773125   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:01.773131   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:01.773189   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:01.813011   71168 cri.go:89] found id: ""
	I0401 19:33:01.813034   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.813042   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:01.813048   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:01.813096   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:01.859124   71168 cri.go:89] found id: ""
	I0401 19:33:01.859151   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.859161   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:01.859169   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:01.859228   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:01.904491   71168 cri.go:89] found id: ""
	I0401 19:33:01.904519   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.904530   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:01.904537   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:01.904596   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:01.946768   71168 cri.go:89] found id: ""
	I0401 19:33:01.946794   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.946804   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:01.946815   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:01.946829   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:02.026315   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:02.026362   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:02.072861   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:02.072893   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:02.132064   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:02.132105   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:02.151545   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:02.151575   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:02.234059   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:02.325806   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:04.327258   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:03.215901   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:33:03.215933   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:33:03.215947   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:03.264913   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:33:03.264946   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:33:03.264961   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:03.272548   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:33:03.272580   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:33:03.726254   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:03.731022   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:03.731050   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
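	
	The healthz polling above tolerates connection errors, 403 (anonymous requests before RBAC bootstrap completes) and 500 (post-start hooks still failing) until the endpoint returns 200. A hedged Go sketch of such a wait loop, under the assumption that the apiserver's serving certificate is not trusted by the caller (the waitForHealthz helper is ours):
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls /healthz every 500ms until it returns 200 or the
	// overall deadline passes, printing intermediate non-200 bodies like the log does.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skip verification: the probe does not trust the cluster's CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.72.119:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	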
	I0401 19:33:04.225595   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:04.237757   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:04.237783   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:04.725330   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:04.734019   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:04.734047   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:05.225303   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:05.242774   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:05.242811   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:05.726350   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:05.730775   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:05.730838   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:06.225345   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:06.229749   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:06.229793   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:06.725687   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:06.730607   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:06.730640   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:04.112109   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:06.606160   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:04.734559   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:04.755071   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:04.755130   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:04.798316   71168 cri.go:89] found id: ""
	I0401 19:33:04.798345   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.798358   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:04.798366   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:04.798426   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:04.840011   71168 cri.go:89] found id: ""
	I0401 19:33:04.840032   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.840043   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:04.840050   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:04.840106   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:04.883686   71168 cri.go:89] found id: ""
	I0401 19:33:04.883713   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.883725   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:04.883733   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:04.883795   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:04.933810   71168 cri.go:89] found id: ""
	I0401 19:33:04.933844   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.933855   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:04.933863   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:04.933925   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:04.983118   71168 cri.go:89] found id: ""
	I0401 19:33:04.983139   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.983146   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:04.983151   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:04.983207   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:05.036146   71168 cri.go:89] found id: ""
	I0401 19:33:05.036169   71168 logs.go:276] 0 containers: []
	W0401 19:33:05.036179   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:05.036186   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:05.036242   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:05.086269   71168 cri.go:89] found id: ""
	I0401 19:33:05.086296   71168 logs.go:276] 0 containers: []
	W0401 19:33:05.086308   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:05.086315   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:05.086378   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:05.140893   71168 cri.go:89] found id: ""
	I0401 19:33:05.140914   71168 logs.go:276] 0 containers: []
	W0401 19:33:05.140922   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:05.140931   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:05.140946   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:05.161222   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:05.161249   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:05.262254   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:05.262276   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:05.262289   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:05.352880   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:05.352908   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:05.400720   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:05.400748   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:07.954227   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:07.225774   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:07.230656   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:07.230684   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:07.726299   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:07.731793   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:07.731830   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:08.225362   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:08.229716   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:08.229755   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:08.725315   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:08.733428   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 200:
	ok
	I0401 19:33:08.739761   70284 api_server.go:141] control plane version: v1.30.0-rc.0
	I0401 19:33:08.739788   70284 api_server.go:131] duration metric: took 45.014537527s to wait for apiserver health ...
	I0401 19:33:08.739796   70284 cni.go:84] Creating CNI manager for ""
	I0401 19:33:08.739802   70284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:33:08.741701   70284 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:33:06.825165   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:08.829987   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:11.327172   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:08.743011   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:33:08.758184   70284 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0401 19:33:08.778975   70284 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:33:08.789725   70284 system_pods.go:59] 8 kube-system pods found
	I0401 19:33:08.789763   70284 system_pods.go:61] "coredns-7db6d8ff4d-gdml5" [039c8887-dff0-40e5-b8b5-00ef2f4a21cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:33:08.789771   70284 system_pods.go:61] "etcd-no-preload-472858" [09086659-e20f-40da-b01f-3690e110ffeb] Running
	I0401 19:33:08.789781   70284 system_pods.go:61] "kube-apiserver-no-preload-472858" [5139434c-3d23-4736-86ad-28253c89f7da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0401 19:33:08.789794   70284 system_pods.go:61] "kube-controller-manager-no-preload-472858" [965d600a-612e-4625-b883-7105f9166503] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0401 19:33:08.789806   70284 system_pods.go:61] "kube-proxy-7c22p" [903412f5-252c-41f3-81ac-1ae47522b403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:33:08.789820   70284 system_pods.go:61] "kube-scheduler-no-preload-472858" [936981be-fc5e-4865-811c-936fab59f37b] Running
	I0401 19:33:08.789832   70284 system_pods.go:61] "metrics-server-569cc877fc-wlr7k" [14010e9a-9662-46c9-bc46-cc6d19c0cddf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:33:08.789839   70284 system_pods.go:61] "storage-provisioner" [2e5d9f78-e74c-4b3b-8878-e4bd8ce34108] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:33:08.789861   70284 system_pods.go:74] duration metric: took 10.868458ms to wait for pod list to return data ...
	I0401 19:33:08.789874   70284 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:33:08.793853   70284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:33:08.793883   70284 node_conditions.go:123] node cpu capacity is 2
	I0401 19:33:08.793897   70284 node_conditions.go:105] duration metric: took 4.016996ms to run NodePressure ...
	I0401 19:33:08.793916   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:33:09.081698   70284 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0401 19:33:09.085681   70284 kubeadm.go:733] kubelet initialised
	I0401 19:33:09.085699   70284 kubeadm.go:734] duration metric: took 3.976973ms waiting for restarted kubelet to initialise ...
	I0401 19:33:09.085705   70284 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:33:09.090647   70284 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:11.102738   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:08.608194   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:11.109659   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:07.970794   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:07.970850   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:08.013694   71168 cri.go:89] found id: ""
	I0401 19:33:08.013719   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.013729   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:08.013737   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:08.013810   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:08.050810   71168 cri.go:89] found id: ""
	I0401 19:33:08.050849   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.050861   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:08.050868   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:08.050932   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:08.092056   71168 cri.go:89] found id: ""
	I0401 19:33:08.092086   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.092096   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:08.092102   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:08.092157   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:08.133171   71168 cri.go:89] found id: ""
	I0401 19:33:08.133195   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.133205   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:08.133212   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:08.133271   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:08.173997   71168 cri.go:89] found id: ""
	I0401 19:33:08.174023   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.174034   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:08.174041   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:08.174102   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:08.212740   71168 cri.go:89] found id: ""
	I0401 19:33:08.212768   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.212778   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:08.212785   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:08.212831   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:08.254815   71168 cri.go:89] found id: ""
	I0401 19:33:08.254837   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.254847   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:08.254854   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:08.254909   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:08.295347   71168 cri.go:89] found id: ""
	I0401 19:33:08.295375   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.295382   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:08.295390   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:08.295402   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:08.311574   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:08.311600   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:08.405437   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:08.405455   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:08.405470   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:08.483687   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:08.483722   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:08.526132   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:08.526158   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:11.076590   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:11.093846   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:11.093983   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:11.146046   71168 cri.go:89] found id: ""
	I0401 19:33:11.146073   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.146083   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:11.146088   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:11.146146   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:11.193751   71168 cri.go:89] found id: ""
	I0401 19:33:11.193782   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.193793   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:11.193801   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:11.193873   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:11.242150   71168 cri.go:89] found id: ""
	I0401 19:33:11.242178   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.242189   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:11.242197   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:11.242271   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:11.294063   71168 cri.go:89] found id: ""
	I0401 19:33:11.294092   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.294103   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:11.294110   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:11.294175   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:11.334764   71168 cri.go:89] found id: ""
	I0401 19:33:11.334784   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.334791   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:11.334797   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:11.334846   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:11.372770   71168 cri.go:89] found id: ""
	I0401 19:33:11.372789   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.372795   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:11.372806   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:11.372871   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:11.413233   71168 cri.go:89] found id: ""
	I0401 19:33:11.413261   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.413271   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:11.413278   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:11.413337   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:11.456044   71168 cri.go:89] found id: ""
	I0401 19:33:11.456073   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.456084   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:11.456093   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:11.456103   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:11.471157   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:11.471183   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:11.550489   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:11.550508   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:11.550523   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:11.635360   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:11.635389   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:11.680683   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:11.680713   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:13.827425   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:16.325563   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:13.104812   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:15.602114   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:13.607926   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:16.107219   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:14.235295   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:14.251513   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:14.251590   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:14.291688   71168 cri.go:89] found id: ""
	I0401 19:33:14.291715   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.291725   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:14.291732   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:14.291792   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:14.332030   71168 cri.go:89] found id: ""
	I0401 19:33:14.332051   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.332060   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:14.332068   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:14.332132   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:14.372098   71168 cri.go:89] found id: ""
	I0401 19:33:14.372122   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.372130   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:14.372137   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:14.372183   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:14.410529   71168 cri.go:89] found id: ""
	I0401 19:33:14.410554   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.410563   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:14.410570   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:14.410624   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:14.451198   71168 cri.go:89] found id: ""
	I0401 19:33:14.451226   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.451238   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:14.451246   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:14.451306   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:14.494588   71168 cri.go:89] found id: ""
	I0401 19:33:14.494616   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.494627   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:14.494635   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:14.494689   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:14.537561   71168 cri.go:89] found id: ""
	I0401 19:33:14.537583   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.537590   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:14.537597   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:14.537674   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:14.580624   71168 cri.go:89] found id: ""
	I0401 19:33:14.580651   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.580662   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:14.580672   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:14.580688   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:14.635769   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:14.635798   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:14.650275   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:14.650304   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:14.742355   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:14.742378   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:14.742394   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:14.827839   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:14.827869   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:17.373408   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:17.390110   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:17.390185   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:17.432355   71168 cri.go:89] found id: ""
	I0401 19:33:17.432384   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.432396   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:17.432409   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:17.432471   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:17.476458   71168 cri.go:89] found id: ""
	I0401 19:33:17.476484   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.476495   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:17.476502   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:17.476587   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:17.519657   71168 cri.go:89] found id: ""
	I0401 19:33:17.519686   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.519694   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:17.519699   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:17.519751   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:17.559962   71168 cri.go:89] found id: ""
	I0401 19:33:17.559985   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.559992   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:17.559997   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:17.560054   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:17.608924   71168 cri.go:89] found id: ""
	I0401 19:33:17.608995   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.609009   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:17.609016   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:17.609075   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:17.648371   71168 cri.go:89] found id: ""
	I0401 19:33:17.648394   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.648401   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:17.648406   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:17.648462   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:17.689217   71168 cri.go:89] found id: ""
	I0401 19:33:17.689239   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.689246   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:17.689252   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:17.689312   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:17.741738   71168 cri.go:89] found id: ""
	I0401 19:33:17.741768   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.741779   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:17.741790   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:17.741805   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:17.839857   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:17.839887   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:17.888684   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:17.888716   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:17.944268   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:17.944298   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:17.959305   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:17.959334   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 19:33:18.327388   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:20.826627   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:18.100065   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:20.100714   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:18.107770   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:20.108880   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	W0401 19:33:18.040820   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
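	The repeated "connection to the server localhost:8443 was refused" failures above mean nothing is serving the Kubernetes API on the node, so every "describe nodes" attempt in this log-gathering loop fails the same way. A minimal, illustrative Go sketch of the same reachability check (not minikube's own code; the address is taken directly from the error text):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Probe the API server address that kubectl reports as refused above.
		conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
		if err != nil {
			fmt.Println("kube-apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on 127.0.0.1:8443")
	}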
	I0401 19:33:20.541980   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:20.558198   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:20.558270   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:20.596329   71168 cri.go:89] found id: ""
	I0401 19:33:20.596357   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.596366   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:20.596373   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:20.596431   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:20.638611   71168 cri.go:89] found id: ""
	I0401 19:33:20.638639   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.638664   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:20.638672   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:20.638729   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:20.677984   71168 cri.go:89] found id: ""
	I0401 19:33:20.678014   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.678024   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:20.678032   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:20.678080   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:20.718491   71168 cri.go:89] found id: ""
	I0401 19:33:20.718520   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.718530   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:20.718537   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:20.718597   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:20.772147   71168 cri.go:89] found id: ""
	I0401 19:33:20.772174   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.772185   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:20.772199   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:20.772258   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:20.823339   71168 cri.go:89] found id: ""
	I0401 19:33:20.823361   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.823372   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:20.823380   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:20.823463   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:20.884081   71168 cri.go:89] found id: ""
	I0401 19:33:20.884106   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.884117   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:20.884124   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:20.884185   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:20.931679   71168 cri.go:89] found id: ""
	I0401 19:33:20.931703   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.931713   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:20.931722   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:20.931736   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:21.016766   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:21.016797   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:21.067600   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:21.067632   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:21.136989   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:21.137045   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:21.152673   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:21.152706   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:21.250186   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:23.325222   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:25.326919   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:22.597922   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:24.602701   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:22.606659   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:24.606811   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:26.608185   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
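	The interleaved pod_ready lines (processes 70687, 70284 and 70962) come from parallel test runs polling pods in the kube-system namespace until their Ready condition turns True. An equivalent one-off check, sketched with Go's standard library shelling out to kubectl (the pod name and namespace are copied from the log; the kubectl invocation itself is illustrative, not what minikube runs):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Print the Ready condition of the metrics-server pod seen in the log above.
		out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pod",
			"metrics-server-57f55c9bc5-g6z6c",
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`,
		).CombinedOutput()
		if err != nil {
			fmt.Println("kubectl failed:", err, string(out))
			return
		}
		fmt.Println("Ready:", string(out)) // prints "True" once the pod is Ready
	}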
	I0401 19:33:23.750565   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:23.768458   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:23.768534   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:23.814489   71168 cri.go:89] found id: ""
	I0401 19:33:23.814534   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.814555   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:23.814565   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:23.814632   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:23.854954   71168 cri.go:89] found id: ""
	I0401 19:33:23.854981   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.854989   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:23.854995   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:23.855060   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:23.896115   71168 cri.go:89] found id: ""
	I0401 19:33:23.896148   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.896159   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:23.896169   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:23.896231   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:23.941300   71168 cri.go:89] found id: ""
	I0401 19:33:23.941324   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.941337   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:23.941344   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:23.941390   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:23.983955   71168 cri.go:89] found id: ""
	I0401 19:33:23.983982   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.983991   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:23.983997   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:23.984056   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:24.020756   71168 cri.go:89] found id: ""
	I0401 19:33:24.020777   71168 logs.go:276] 0 containers: []
	W0401 19:33:24.020784   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:24.020789   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:24.020835   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:24.063426   71168 cri.go:89] found id: ""
	I0401 19:33:24.063454   71168 logs.go:276] 0 containers: []
	W0401 19:33:24.063462   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:24.063467   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:24.063529   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:24.110924   71168 cri.go:89] found id: ""
	I0401 19:33:24.110945   71168 logs.go:276] 0 containers: []
	W0401 19:33:24.110952   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:24.110960   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:24.110969   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:24.179200   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:24.179240   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:24.194880   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:24.194909   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:24.280555   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:24.280588   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:24.280603   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:24.359502   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:24.359534   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:26.909147   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:26.925961   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:26.926028   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:26.969502   71168 cri.go:89] found id: ""
	I0401 19:33:26.969525   71168 logs.go:276] 0 containers: []
	W0401 19:33:26.969536   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:26.969543   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:26.969604   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:27.015205   71168 cri.go:89] found id: ""
	I0401 19:33:27.015232   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.015241   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:27.015246   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:27.015296   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:27.055943   71168 cri.go:89] found id: ""
	I0401 19:33:27.055968   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.055977   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:27.055983   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:27.056039   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:27.095447   71168 cri.go:89] found id: ""
	I0401 19:33:27.095474   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.095485   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:27.095497   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:27.095558   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:27.137912   71168 cri.go:89] found id: ""
	I0401 19:33:27.137941   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.137948   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:27.137954   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:27.138008   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:27.183303   71168 cri.go:89] found id: ""
	I0401 19:33:27.183325   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.183335   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:27.183344   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:27.183403   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:27.225780   71168 cri.go:89] found id: ""
	I0401 19:33:27.225804   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.225814   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:27.225822   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:27.225880   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:27.268136   71168 cri.go:89] found id: ""
	I0401 19:33:27.268159   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.268168   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:27.268191   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:27.268215   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:27.325527   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:27.325557   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:27.341727   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:27.341763   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:27.432369   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:27.432389   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:27.432403   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:27.523104   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:27.523135   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:27.826804   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:30.326279   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:27.099509   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:29.597830   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:31.598325   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:29.107400   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:31.107514   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:30.066147   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:30.079999   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:30.080062   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:30.121887   71168 cri.go:89] found id: ""
	I0401 19:33:30.121911   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.121920   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:30.121929   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:30.121986   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:30.163939   71168 cri.go:89] found id: ""
	I0401 19:33:30.163967   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.163978   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:30.163986   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:30.164051   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:30.203924   71168 cri.go:89] found id: ""
	I0401 19:33:30.203965   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.203977   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:30.203985   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:30.204048   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:30.243771   71168 cri.go:89] found id: ""
	I0401 19:33:30.243798   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.243809   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:30.243816   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:30.243888   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:30.284039   71168 cri.go:89] found id: ""
	I0401 19:33:30.284066   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.284074   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:30.284079   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:30.284127   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:30.327549   71168 cri.go:89] found id: ""
	I0401 19:33:30.327570   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.327577   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:30.327583   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:30.327630   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:30.365258   71168 cri.go:89] found id: ""
	I0401 19:33:30.365281   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.365291   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:30.365297   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:30.365352   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:30.405959   71168 cri.go:89] found id: ""
	I0401 19:33:30.405984   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.405992   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:30.405999   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:30.406011   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:30.480668   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:30.480692   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:30.480706   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:30.566042   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:30.566077   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:30.629250   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:30.629285   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:30.682185   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:30.682213   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:32.824844   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:34.826598   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:33.600555   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:36.100194   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:33.608315   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:36.106573   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:33.199466   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:33.213557   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:33.213630   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:33.255038   71168 cri.go:89] found id: ""
	I0401 19:33:33.255062   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.255072   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:33.255079   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:33.255143   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:33.297724   71168 cri.go:89] found id: ""
	I0401 19:33:33.297751   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.297761   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:33.297767   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:33.297836   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:33.340694   71168 cri.go:89] found id: ""
	I0401 19:33:33.340718   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.340727   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:33.340735   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:33.340794   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:33.388857   71168 cri.go:89] found id: ""
	I0401 19:33:33.388883   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.388891   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:33.388896   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:33.388940   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:33.430875   71168 cri.go:89] found id: ""
	I0401 19:33:33.430899   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.430906   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:33.430911   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:33.430966   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:33.479877   71168 cri.go:89] found id: ""
	I0401 19:33:33.479905   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.479917   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:33.479923   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:33.479968   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:33.522635   71168 cri.go:89] found id: ""
	I0401 19:33:33.522662   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.522672   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:33.522680   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:33.522737   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:33.560497   71168 cri.go:89] found id: ""
	I0401 19:33:33.560519   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.560527   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:33.560534   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:33.560549   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:33.612141   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:33.612170   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:33.665142   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:33.665170   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:33.681076   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:33.681100   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:33.755938   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:33.755966   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:33.755983   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:36.341957   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:36.359519   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:36.359586   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:36.416339   71168 cri.go:89] found id: ""
	I0401 19:33:36.416362   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.416373   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:36.416381   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:36.416442   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:36.473883   71168 cri.go:89] found id: ""
	I0401 19:33:36.473906   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.473918   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:36.473925   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:36.473988   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:36.521532   71168 cri.go:89] found id: ""
	I0401 19:33:36.521558   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.521568   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:36.521575   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:36.521639   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:36.563420   71168 cri.go:89] found id: ""
	I0401 19:33:36.563446   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.563454   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:36.563459   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:36.563520   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:36.605658   71168 cri.go:89] found id: ""
	I0401 19:33:36.605678   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.605689   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:36.605697   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:36.605759   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:36.645611   71168 cri.go:89] found id: ""
	I0401 19:33:36.645631   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.645638   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:36.645656   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:36.645715   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:36.685994   71168 cri.go:89] found id: ""
	I0401 19:33:36.686022   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.686033   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:36.686041   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:36.686099   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:36.725573   71168 cri.go:89] found id: ""
	I0401 19:33:36.725598   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.725608   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:36.725618   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:36.725630   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:36.778854   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:36.778885   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:36.795003   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:36.795036   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:36.872648   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:36.872666   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:36.872678   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:36.956648   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:36.956683   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:36.827745   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:38.830544   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:41.326012   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:38.597991   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:41.097044   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:38.107961   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:40.606475   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:39.502868   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:39.519090   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:39.519161   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:39.562347   71168 cri.go:89] found id: ""
	I0401 19:33:39.562371   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.562379   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:39.562384   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:39.562442   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:39.607250   71168 cri.go:89] found id: ""
	I0401 19:33:39.607276   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.607286   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:39.607293   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:39.607343   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:39.650683   71168 cri.go:89] found id: ""
	I0401 19:33:39.650704   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.650712   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:39.650717   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:39.650764   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:39.694676   71168 cri.go:89] found id: ""
	I0401 19:33:39.694706   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.694718   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:39.694724   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:39.694783   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:39.733873   71168 cri.go:89] found id: ""
	I0401 19:33:39.733901   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.733911   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:39.733919   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:39.733980   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:39.773625   71168 cri.go:89] found id: ""
	I0401 19:33:39.773668   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.773679   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:39.773686   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:39.773735   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:39.815020   71168 cri.go:89] found id: ""
	I0401 19:33:39.815053   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.815064   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:39.815071   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:39.815134   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:39.855575   71168 cri.go:89] found id: ""
	I0401 19:33:39.855606   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.855615   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:39.855626   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:39.855641   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:39.873827   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:39.873857   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:39.948487   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:39.948507   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:39.948521   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:40.034026   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:40.034062   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:40.077798   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:40.077828   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:42.637999   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:42.654991   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:42.655063   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:42.695920   71168 cri.go:89] found id: ""
	I0401 19:33:42.695953   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.695964   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:42.695971   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:42.696030   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:42.737303   71168 cri.go:89] found id: ""
	I0401 19:33:42.737325   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.737333   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:42.737341   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:42.737393   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:42.777922   71168 cri.go:89] found id: ""
	I0401 19:33:42.777953   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.777965   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:42.777972   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:42.778036   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:42.818339   71168 cri.go:89] found id: ""
	I0401 19:33:42.818364   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.818372   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:42.818379   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:42.818435   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:42.859470   71168 cri.go:89] found id: ""
	I0401 19:33:42.859494   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.859502   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:42.859507   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:42.859556   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:42.901950   71168 cri.go:89] found id: ""
	I0401 19:33:42.901980   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.901989   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:42.901996   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:42.902063   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:42.947230   71168 cri.go:89] found id: ""
	I0401 19:33:42.947258   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.947268   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:42.947275   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:42.947351   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:43.827204   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:46.325749   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:43.098252   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:45.098316   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:42.607590   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:44.607666   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:47.107837   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:42.988997   71168 cri.go:89] found id: ""
	I0401 19:33:42.989022   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.989032   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:42.989049   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:42.989066   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:43.075323   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:43.075352   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:43.075363   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:43.164445   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:43.164479   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:43.215852   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:43.215885   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:43.271301   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:43.271334   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:45.786705   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:45.804389   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:45.804445   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:45.849838   71168 cri.go:89] found id: ""
	I0401 19:33:45.849872   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.849883   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:45.849891   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:45.849950   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:45.890603   71168 cri.go:89] found id: ""
	I0401 19:33:45.890625   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.890635   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:45.890642   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:45.890703   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:45.929189   71168 cri.go:89] found id: ""
	I0401 19:33:45.929210   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.929218   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:45.929223   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:45.929268   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:45.968266   71168 cri.go:89] found id: ""
	I0401 19:33:45.968292   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.968303   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:45.968310   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:45.968365   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:46.007114   71168 cri.go:89] found id: ""
	I0401 19:33:46.007135   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.007143   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:46.007148   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:46.007195   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:46.046067   71168 cri.go:89] found id: ""
	I0401 19:33:46.046088   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.046095   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:46.046101   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:46.046186   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:46.083604   71168 cri.go:89] found id: ""
	I0401 19:33:46.083630   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.083644   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:46.083651   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:46.083709   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:46.125435   71168 cri.go:89] found id: ""
	I0401 19:33:46.125457   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.125464   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:46.125472   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:46.125483   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:46.179060   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:46.179092   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:46.195139   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:46.195179   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:46.275876   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:46.275903   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:46.275914   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:46.365430   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:46.365465   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:48.825540   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:50.827204   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:47.099197   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:49.105260   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:51.597808   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:49.108344   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:51.607079   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:48.908390   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:48.924357   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:48.924416   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:48.969325   71168 cri.go:89] found id: ""
	I0401 19:33:48.969351   71168 logs.go:276] 0 containers: []
	W0401 19:33:48.969359   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:48.969364   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:48.969421   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:49.006702   71168 cri.go:89] found id: ""
	I0401 19:33:49.006724   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.006731   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:49.006736   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:49.006785   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:49.051196   71168 cri.go:89] found id: ""
	I0401 19:33:49.051229   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.051241   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:49.051260   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:49.051336   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:49.098123   71168 cri.go:89] found id: ""
	I0401 19:33:49.098150   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.098159   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:49.098166   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:49.098225   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:49.138203   71168 cri.go:89] found id: ""
	I0401 19:33:49.138232   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.138239   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:49.138244   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:49.138290   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:49.185441   71168 cri.go:89] found id: ""
	I0401 19:33:49.185465   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.185473   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:49.185478   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:49.185537   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:49.235649   71168 cri.go:89] found id: ""
	I0401 19:33:49.235670   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.235678   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:49.235683   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:49.235762   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:49.279638   71168 cri.go:89] found id: ""
	I0401 19:33:49.279662   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.279673   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:49.279683   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:49.279699   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:49.340761   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:49.340798   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:49.356552   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:49.356581   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:49.441110   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:49.441129   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:49.441140   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:49.523159   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:49.523189   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:52.067710   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:52.082986   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:52.083046   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:52.128510   71168 cri.go:89] found id: ""
	I0401 19:33:52.128531   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.128538   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:52.128543   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:52.128590   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:52.167767   71168 cri.go:89] found id: ""
	I0401 19:33:52.167792   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.167803   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:52.167810   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:52.167871   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:52.206384   71168 cri.go:89] found id: ""
	I0401 19:33:52.206416   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.206426   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:52.206433   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:52.206493   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:52.245277   71168 cri.go:89] found id: ""
	I0401 19:33:52.245301   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.245309   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:52.245318   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:52.245388   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:52.283925   71168 cri.go:89] found id: ""
	I0401 19:33:52.283954   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.283964   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:52.283971   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:52.284032   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:52.323944   71168 cri.go:89] found id: ""
	I0401 19:33:52.323970   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.323981   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:52.323988   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:52.324045   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:52.364853   71168 cri.go:89] found id: ""
	I0401 19:33:52.364882   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.364893   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:52.364901   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:52.364958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:52.404136   71168 cri.go:89] found id: ""
	I0401 19:33:52.404158   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.404165   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:52.404173   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:52.404184   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:52.459097   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:52.459129   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:52.474392   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:52.474417   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:52.551817   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:52.551843   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:52.551860   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:52.650710   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:52.650750   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:53.326050   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:55.327326   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:52.607062   70284 pod_ready.go:92] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.607082   70284 pod_ready.go:81] duration metric: took 43.516413537s for pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.607091   70284 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.628695   70284 pod_ready.go:92] pod "etcd-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.628725   70284 pod_ready.go:81] duration metric: took 21.625468ms for pod "etcd-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.628739   70284 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.643017   70284 pod_ready.go:92] pod "kube-apiserver-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.643044   70284 pod_ready.go:81] duration metric: took 14.296056ms for pod "kube-apiserver-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.643058   70284 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.649063   70284 pod_ready.go:92] pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.649091   70284 pod_ready.go:81] duration metric: took 6.024238ms for pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.649105   70284 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7c22p" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.654806   70284 pod_ready.go:92] pod "kube-proxy-7c22p" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.654829   70284 pod_ready.go:81] duration metric: took 5.709865ms for pod "kube-proxy-7c22p" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.654840   70284 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.997116   70284 pod_ready.go:92] pod "kube-scheduler-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.997139   70284 pod_ready.go:81] duration metric: took 342.291727ms for pod "kube-scheduler-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.997148   70284 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:55.004130   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:53.608064   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:56.106148   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:55.205689   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:55.222840   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:55.222901   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:55.263783   71168 cri.go:89] found id: ""
	I0401 19:33:55.263813   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.263820   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:55.263828   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:55.263883   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:55.300788   71168 cri.go:89] found id: ""
	I0401 19:33:55.300818   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.300826   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:55.300834   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:55.300888   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:55.343189   71168 cri.go:89] found id: ""
	I0401 19:33:55.343215   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.343223   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:55.343229   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:55.343286   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:55.387560   71168 cri.go:89] found id: ""
	I0401 19:33:55.387587   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.387597   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:55.387604   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:55.387663   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:55.428078   71168 cri.go:89] found id: ""
	I0401 19:33:55.428103   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.428112   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:55.428119   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:55.428181   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:55.472696   71168 cri.go:89] found id: ""
	I0401 19:33:55.472722   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.472734   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:55.472741   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:55.472797   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:55.518071   71168 cri.go:89] found id: ""
	I0401 19:33:55.518115   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.518126   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:55.518136   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:55.518201   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:55.555697   71168 cri.go:89] found id: ""
	I0401 19:33:55.555717   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.555724   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:55.555732   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:55.555747   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:55.637462   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:55.637492   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:55.682353   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:55.682380   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:55.735451   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:55.735484   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:55.750928   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:55.750954   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:55.824610   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:57.328228   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:59.826213   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:57.005395   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:59.505575   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:01.506107   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:58.106643   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:00.606864   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:58.325742   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:58.341022   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:58.341092   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:58.380910   71168 cri.go:89] found id: ""
	I0401 19:33:58.380932   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.380940   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:58.380946   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:58.380990   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:58.420387   71168 cri.go:89] found id: ""
	I0401 19:33:58.420413   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.420425   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:58.420431   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:58.420479   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:58.460470   71168 cri.go:89] found id: ""
	I0401 19:33:58.460501   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.460511   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:58.460520   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:58.460580   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:58.496844   71168 cri.go:89] found id: ""
	I0401 19:33:58.496867   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.496875   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:58.496881   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:58.496930   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:58.535883   71168 cri.go:89] found id: ""
	I0401 19:33:58.535905   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.535915   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:58.535922   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:58.535979   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:58.576833   71168 cri.go:89] found id: ""
	I0401 19:33:58.576855   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.576863   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:58.576869   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:58.576913   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:58.615057   71168 cri.go:89] found id: ""
	I0401 19:33:58.615081   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.615091   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:58.615098   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:58.615156   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:58.657982   71168 cri.go:89] found id: ""
	I0401 19:33:58.658008   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.658018   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:58.658028   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:58.658045   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:58.734579   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:58.734601   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:58.734616   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:58.821779   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:58.821819   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:58.894470   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:58.894506   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:58.949854   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:58.949884   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:01.465820   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:01.481929   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:01.481984   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:01.525371   71168 cri.go:89] found id: ""
	I0401 19:34:01.525397   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.525407   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:01.525415   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:01.525473   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:01.571106   71168 cri.go:89] found id: ""
	I0401 19:34:01.571136   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.571146   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:01.571153   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:01.571214   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:01.617666   71168 cri.go:89] found id: ""
	I0401 19:34:01.617705   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.617717   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:01.617725   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:01.617787   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:01.655286   71168 cri.go:89] found id: ""
	I0401 19:34:01.655311   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.655321   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:01.655328   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:01.655396   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:01.694911   71168 cri.go:89] found id: ""
	I0401 19:34:01.694940   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.694950   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:01.694957   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:01.695040   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:01.734970   71168 cri.go:89] found id: ""
	I0401 19:34:01.734996   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.735007   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:01.735014   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:01.735071   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:01.778846   71168 cri.go:89] found id: ""
	I0401 19:34:01.778871   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.778879   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:01.778885   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:01.778958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:01.821934   71168 cri.go:89] found id: ""
	I0401 19:34:01.821964   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.821975   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:01.821986   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:01.822002   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:01.880123   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:01.880155   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:01.895178   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:01.895200   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:01.972248   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:01.972275   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:01.972290   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:02.056663   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:02.056694   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:02.325323   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:04.326474   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:06.327583   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:04.004061   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:06.004176   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:02.608516   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:05.108477   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:04.603745   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:04.619269   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:04.619344   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:04.658089   71168 cri.go:89] found id: ""
	I0401 19:34:04.658111   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.658118   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:04.658123   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:04.658168   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:04.700596   71168 cri.go:89] found id: ""
	I0401 19:34:04.700622   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.700634   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:04.700641   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:04.700708   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:04.744960   71168 cri.go:89] found id: ""
	I0401 19:34:04.744990   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.744999   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:04.745004   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:04.745052   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:04.788239   71168 cri.go:89] found id: ""
	I0401 19:34:04.788264   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.788272   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:04.788278   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:04.788343   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:04.830788   71168 cri.go:89] found id: ""
	I0401 19:34:04.830812   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.830850   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:04.830859   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:04.830917   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:04.889784   71168 cri.go:89] found id: ""
	I0401 19:34:04.889815   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.889826   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:04.889834   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:04.889902   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:04.931969   71168 cri.go:89] found id: ""
	I0401 19:34:04.931996   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.932004   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:04.932010   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:04.932058   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:04.975668   71168 cri.go:89] found id: ""
	I0401 19:34:04.975689   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.975696   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:04.975704   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:04.975715   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:05.032212   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:05.032246   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:05.047900   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:05.047924   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:05.132371   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:05.132394   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:05.132408   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:05.222591   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:05.222623   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:07.767686   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:07.784473   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:07.784542   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:07.828460   71168 cri.go:89] found id: ""
	I0401 19:34:07.828487   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.828498   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:07.828505   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:07.828564   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:07.872760   71168 cri.go:89] found id: ""
	I0401 19:34:07.872786   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.872797   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:07.872804   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:07.872862   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:07.914241   71168 cri.go:89] found id: ""
	I0401 19:34:07.914263   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.914271   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:07.914276   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:07.914340   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:07.953757   71168 cri.go:89] found id: ""
	I0401 19:34:07.953784   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.953795   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:07.953803   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:07.953869   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:08.825113   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:10.827081   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:08.504038   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:10.508973   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:07.608037   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:10.110321   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:07.994382   71168 cri.go:89] found id: ""
	I0401 19:34:07.994401   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.994409   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:07.994414   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:07.994459   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:08.038178   71168 cri.go:89] found id: ""
	I0401 19:34:08.038202   71168 logs.go:276] 0 containers: []
	W0401 19:34:08.038213   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:08.038220   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:08.038282   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:08.077532   71168 cri.go:89] found id: ""
	I0401 19:34:08.077562   71168 logs.go:276] 0 containers: []
	W0401 19:34:08.077573   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:08.077580   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:08.077657   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:08.119825   71168 cri.go:89] found id: ""
	I0401 19:34:08.119845   71168 logs.go:276] 0 containers: []
	W0401 19:34:08.119855   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:08.119865   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:08.119878   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:08.207688   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:08.207724   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:08.253050   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:08.253085   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:08.309119   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:08.309152   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:08.325675   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:08.325704   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:08.410877   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:10.911211   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:10.925590   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:10.925657   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:10.964180   71168 cri.go:89] found id: ""
	I0401 19:34:10.964205   71168 logs.go:276] 0 containers: []
	W0401 19:34:10.964216   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:10.964224   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:10.964273   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:11.004492   71168 cri.go:89] found id: ""
	I0401 19:34:11.004515   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.004526   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:11.004533   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:11.004588   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:11.048771   71168 cri.go:89] found id: ""
	I0401 19:34:11.048792   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.048804   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:11.048810   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:11.048861   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:11.084956   71168 cri.go:89] found id: ""
	I0401 19:34:11.084982   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.084992   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:11.084999   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:11.085043   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:11.128194   71168 cri.go:89] found id: ""
	I0401 19:34:11.128218   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.128225   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:11.128230   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:11.128274   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:11.169884   71168 cri.go:89] found id: ""
	I0401 19:34:11.169908   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.169918   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:11.169925   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:11.169988   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:11.213032   71168 cri.go:89] found id: ""
	I0401 19:34:11.213066   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.213077   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:11.213084   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:11.213149   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:11.258391   71168 cri.go:89] found id: ""
	I0401 19:34:11.258414   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.258422   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:11.258429   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:11.258445   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:11.341297   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:11.341328   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:11.388628   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:11.388659   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:11.442300   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:11.442326   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:11.457531   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:11.457561   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:11.561556   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:13.324598   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:15.325464   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:13.005005   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:15.505216   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:12.607201   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:14.607580   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:17.107659   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:14.062670   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:14.077384   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:14.077449   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:14.119421   71168 cri.go:89] found id: ""
	I0401 19:34:14.119444   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.119455   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:14.119462   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:14.119518   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:14.158762   71168 cri.go:89] found id: ""
	I0401 19:34:14.158783   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.158798   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:14.158805   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:14.158867   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:14.197024   71168 cri.go:89] found id: ""
	I0401 19:34:14.197052   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.197060   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:14.197065   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:14.197115   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:14.235976   71168 cri.go:89] found id: ""
	I0401 19:34:14.236004   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.236015   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:14.236021   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:14.236085   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:14.280596   71168 cri.go:89] found id: ""
	I0401 19:34:14.280623   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.280635   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:14.280642   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:14.280703   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:14.322196   71168 cri.go:89] found id: ""
	I0401 19:34:14.322219   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.322230   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:14.322239   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:14.322298   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:14.364572   71168 cri.go:89] found id: ""
	I0401 19:34:14.364596   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.364607   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:14.364615   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:14.364662   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:14.406043   71168 cri.go:89] found id: ""
	I0401 19:34:14.406066   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.406072   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:14.406082   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:14.406097   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:14.461841   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:14.461870   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:14.479960   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:14.479990   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:14.557039   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:14.557058   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:14.557070   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:14.641945   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:14.641975   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:17.192681   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:17.207913   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:17.207964   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:17.245596   71168 cri.go:89] found id: ""
	I0401 19:34:17.245618   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.245625   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:17.245630   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:17.245701   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:17.310845   71168 cri.go:89] found id: ""
	I0401 19:34:17.310875   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.310887   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:17.310894   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:17.310958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:17.367726   71168 cri.go:89] found id: ""
	I0401 19:34:17.367753   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.367764   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:17.367770   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:17.367833   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:17.410807   71168 cri.go:89] found id: ""
	I0401 19:34:17.410834   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.410842   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:17.410847   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:17.410892   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:17.448242   71168 cri.go:89] found id: ""
	I0401 19:34:17.448268   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.448278   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:17.448285   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:17.448337   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:17.486552   71168 cri.go:89] found id: ""
	I0401 19:34:17.486580   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.486590   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:17.486595   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:17.486644   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:17.529947   71168 cri.go:89] found id: ""
	I0401 19:34:17.529975   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.529986   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:17.529993   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:17.530052   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:17.571617   71168 cri.go:89] found id: ""
	I0401 19:34:17.571640   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.571648   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:17.571656   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:17.571673   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:17.627326   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:17.627354   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:17.643409   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:17.643431   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:17.723772   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:17.723798   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:17.723811   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:17.803383   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:17.803414   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:17.325836   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:19.328447   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:17.509486   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:20.004341   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:19.606840   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:21.607646   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:20.348949   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:20.363311   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:20.363385   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:20.401558   71168 cri.go:89] found id: ""
	I0401 19:34:20.401585   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.401595   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:20.401603   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:20.401686   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:20.445979   71168 cri.go:89] found id: ""
	I0401 19:34:20.446004   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.446011   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:20.446016   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:20.446060   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:20.487819   71168 cri.go:89] found id: ""
	I0401 19:34:20.487844   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.487854   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:20.487862   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:20.487921   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:20.532107   71168 cri.go:89] found id: ""
	I0401 19:34:20.532131   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.532154   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:20.532186   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:20.532247   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:20.577727   71168 cri.go:89] found id: ""
	I0401 19:34:20.577749   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.577756   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:20.577762   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:20.577841   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:20.616774   71168 cri.go:89] found id: ""
	I0401 19:34:20.616805   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.616816   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:20.616824   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:20.616887   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:20.656122   71168 cri.go:89] found id: ""
	I0401 19:34:20.656150   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.656160   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:20.656167   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:20.656226   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:20.701249   71168 cri.go:89] found id: ""
	I0401 19:34:20.701274   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.701285   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:20.701295   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:20.701310   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:20.746979   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:20.747003   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:20.799197   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:20.799226   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:20.815771   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:20.815808   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:20.895179   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:20.895202   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:20.895218   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:21.826671   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:24.325896   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:26.326569   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:22.503727   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:24.503877   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:26.506643   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:24.107702   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:26.607285   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:23.481911   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:23.496820   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:23.496889   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:23.538292   71168 cri.go:89] found id: ""
	I0401 19:34:23.538314   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.538322   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:23.538327   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:23.538372   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:23.579171   71168 cri.go:89] found id: ""
	I0401 19:34:23.579200   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.579209   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:23.579214   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:23.579269   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:23.620377   71168 cri.go:89] found id: ""
	I0401 19:34:23.620399   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.620410   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:23.620417   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:23.620477   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:23.663309   71168 cri.go:89] found id: ""
	I0401 19:34:23.663329   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.663337   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:23.663342   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:23.663392   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:23.702724   71168 cri.go:89] found id: ""
	I0401 19:34:23.702755   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.702772   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:23.702778   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:23.702836   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:23.742797   71168 cri.go:89] found id: ""
	I0401 19:34:23.742827   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.742837   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:23.742845   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:23.742913   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:23.781299   71168 cri.go:89] found id: ""
	I0401 19:34:23.781350   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.781367   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:23.781375   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:23.781440   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:23.828244   71168 cri.go:89] found id: ""
	I0401 19:34:23.828270   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.828277   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:23.828284   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:23.828298   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:23.914758   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:23.914782   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:23.914797   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:23.993300   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:23.993332   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:24.037388   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:24.037424   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:24.090157   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:24.090198   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:26.609062   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:26.624241   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:26.624309   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:26.665813   71168 cri.go:89] found id: ""
	I0401 19:34:26.665840   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.665848   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:26.665857   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:26.665917   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:26.709571   71168 cri.go:89] found id: ""
	I0401 19:34:26.709593   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.709600   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:26.709606   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:26.709680   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:26.757286   71168 cri.go:89] found id: ""
	I0401 19:34:26.757309   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.757319   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:26.757325   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:26.757386   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:26.795715   71168 cri.go:89] found id: ""
	I0401 19:34:26.795768   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.795781   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:26.795788   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:26.795839   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:26.835985   71168 cri.go:89] found id: ""
	I0401 19:34:26.836011   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.836022   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:26.836029   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:26.836094   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:26.878890   71168 cri.go:89] found id: ""
	I0401 19:34:26.878918   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.878929   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:26.878936   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:26.878991   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:26.920161   71168 cri.go:89] found id: ""
	I0401 19:34:26.920189   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.920199   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:26.920206   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:26.920262   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:26.961597   71168 cri.go:89] found id: ""
	I0401 19:34:26.961626   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.961637   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:26.961663   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:26.961679   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:27.019814   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:27.019847   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:27.035535   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:27.035564   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:27.111755   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:27.111776   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:27.111790   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:27.194932   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:27.194964   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:28.827702   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:31.325488   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:29.005830   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:31.007294   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:29.107097   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:31.109807   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:29.738592   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:29.752851   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:29.752913   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:29.791808   71168 cri.go:89] found id: ""
	I0401 19:34:29.791863   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.791875   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:29.791883   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:29.791944   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:29.836113   71168 cri.go:89] found id: ""
	I0401 19:34:29.836132   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.836139   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:29.836144   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:29.836200   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:29.879005   71168 cri.go:89] found id: ""
	I0401 19:34:29.879039   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.879050   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:29.879059   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:29.879122   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:29.919349   71168 cri.go:89] found id: ""
	I0401 19:34:29.919383   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.919394   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:29.919400   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:29.919454   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:29.957252   71168 cri.go:89] found id: ""
	I0401 19:34:29.957275   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.957287   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:29.957294   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:29.957354   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:30.003220   71168 cri.go:89] found id: ""
	I0401 19:34:30.003245   71168 logs.go:276] 0 containers: []
	W0401 19:34:30.003256   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:30.003263   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:30.003311   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:30.043873   71168 cri.go:89] found id: ""
	I0401 19:34:30.043900   71168 logs.go:276] 0 containers: []
	W0401 19:34:30.043921   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:30.043928   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:30.043989   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:30.082215   71168 cri.go:89] found id: ""
	I0401 19:34:30.082242   71168 logs.go:276] 0 containers: []
	W0401 19:34:30.082253   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:30.082263   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:30.082277   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:30.098676   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:30.098701   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:30.180857   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:30.180879   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:30.180897   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:30.269982   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:30.270016   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:30.317933   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:30.317967   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:32.874312   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:32.888687   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:32.888742   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:32.926222   71168 cri.go:89] found id: ""
	I0401 19:34:32.926244   71168 logs.go:276] 0 containers: []
	W0401 19:34:32.926252   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:32.926257   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:32.926307   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:32.964838   71168 cri.go:89] found id: ""
	I0401 19:34:32.964858   71168 logs.go:276] 0 containers: []
	W0401 19:34:32.964865   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:32.964870   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:32.964914   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:33.327670   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:35.826387   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:33.504338   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:36.005240   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:33.606596   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:35.607014   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:33.006903   71168 cri.go:89] found id: ""
	I0401 19:34:33.006920   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.006927   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:33.006933   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:33.006983   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:33.045663   71168 cri.go:89] found id: ""
	I0401 19:34:33.045691   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.045701   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:33.045709   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:33.045770   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:33.086262   71168 cri.go:89] found id: ""
	I0401 19:34:33.086290   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.086298   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:33.086303   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:33.086368   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:33.128302   71168 cri.go:89] found id: ""
	I0401 19:34:33.128327   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.128335   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:33.128341   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:33.128402   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:33.171155   71168 cri.go:89] found id: ""
	I0401 19:34:33.171189   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.171200   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:33.171207   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:33.171270   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:33.210793   71168 cri.go:89] found id: ""
	I0401 19:34:33.210820   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.210838   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:33.210848   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:33.210870   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:33.295035   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:33.295072   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:33.345381   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:33.345417   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:33.401082   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:33.401120   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:33.417029   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:33.417055   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:33.497027   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:35.997632   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:36.013106   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:36.013161   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:36.053013   71168 cri.go:89] found id: ""
	I0401 19:34:36.053040   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.053050   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:36.053059   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:36.053116   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:36.092268   71168 cri.go:89] found id: ""
	I0401 19:34:36.092297   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.092308   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:36.092315   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:36.092389   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:36.131347   71168 cri.go:89] found id: ""
	I0401 19:34:36.131391   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.131402   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:36.131409   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:36.131468   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:36.171402   71168 cri.go:89] found id: ""
	I0401 19:34:36.171432   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.171443   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:36.171449   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:36.171511   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:36.211239   71168 cri.go:89] found id: ""
	I0401 19:34:36.211272   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.211283   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:36.211290   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:36.211354   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:36.251246   71168 cri.go:89] found id: ""
	I0401 19:34:36.251275   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.251287   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:36.251294   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:36.251354   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:36.293140   71168 cri.go:89] found id: ""
	I0401 19:34:36.293162   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.293169   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:36.293174   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:36.293231   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:36.330281   71168 cri.go:89] found id: ""
	I0401 19:34:36.330308   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.330318   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:36.330328   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:36.330342   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:36.421753   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:36.421790   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:36.467555   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:36.467581   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:36.524747   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:36.524778   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:36.540946   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:36.540976   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:36.622452   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
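The "connection to the server localhost:8443 was refused" error repeated on every "describe nodes" attempt is the expected symptom when no kube-apiserver container is running: the bundled v1.20.0 kubectl targets the local apiserver endpoint and nothing is listening on port 8443. A rough way to confirm this from inside the VM, assuming shell access (the curl probe is illustrative and not part of the minikube log itself):

    # nothing should answer if the apiserver never started
    curl -k https://localhost:8443/healthz
    # confirm no control-plane containers exist, matching the empty crictl listings above
    sudo crictl ps -a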
	I0401 19:34:38.326341   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:40.327267   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:38.503641   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:40.504555   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:38.107732   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:40.608535   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:39.122969   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:39.139092   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:39.139157   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:39.177337   71168 cri.go:89] found id: ""
	I0401 19:34:39.177368   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.177379   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:39.177387   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:39.177449   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:39.216471   71168 cri.go:89] found id: ""
	I0401 19:34:39.216498   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.216507   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:39.216512   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:39.216558   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:39.255526   71168 cri.go:89] found id: ""
	I0401 19:34:39.255550   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.255557   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:39.255563   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:39.255623   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:39.294682   71168 cri.go:89] found id: ""
	I0401 19:34:39.294711   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.294723   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:39.294735   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:39.294798   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:39.337416   71168 cri.go:89] found id: ""
	I0401 19:34:39.337437   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.337444   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:39.337449   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:39.337510   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:39.384560   71168 cri.go:89] found id: ""
	I0401 19:34:39.384586   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.384598   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:39.384608   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:39.384671   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:39.421459   71168 cri.go:89] found id: ""
	I0401 19:34:39.421480   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.421488   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:39.421493   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:39.421540   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:39.460221   71168 cri.go:89] found id: ""
	I0401 19:34:39.460246   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.460256   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:39.460264   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:39.460275   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:39.543800   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:39.543835   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:39.591012   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:39.591038   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:39.645994   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:39.646025   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:39.662223   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:39.662250   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:39.741574   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:42.242541   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:42.256933   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:42.257006   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:42.294268   71168 cri.go:89] found id: ""
	I0401 19:34:42.294297   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.294308   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:42.294315   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:42.294370   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:42.331978   71168 cri.go:89] found id: ""
	I0401 19:34:42.331999   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.332005   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:42.332013   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:42.332078   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:42.369858   71168 cri.go:89] found id: ""
	I0401 19:34:42.369885   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.369895   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:42.369903   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:42.369989   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:42.412688   71168 cri.go:89] found id: ""
	I0401 19:34:42.412708   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.412715   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:42.412720   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:42.412776   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:42.449180   71168 cri.go:89] found id: ""
	I0401 19:34:42.449209   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.449217   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:42.449225   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:42.449283   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:42.488582   71168 cri.go:89] found id: ""
	I0401 19:34:42.488606   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.488613   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:42.488618   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:42.488665   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:42.527883   71168 cri.go:89] found id: ""
	I0401 19:34:42.527915   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.527924   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:42.527931   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:42.527993   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:42.564372   71168 cri.go:89] found id: ""
	I0401 19:34:42.564394   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.564401   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:42.564408   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:42.564419   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:42.646940   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:42.646974   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:42.689323   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:42.689354   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:42.744996   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:42.745024   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:42.761404   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:42.761429   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:42.836643   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:42.825895   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:45.325856   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:42.504642   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:45.004315   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:43.110114   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:45.607093   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:45.337809   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:45.352936   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:45.353029   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:45.395073   71168 cri.go:89] found id: ""
	I0401 19:34:45.395098   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.395106   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:45.395112   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:45.395160   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:45.433537   71168 cri.go:89] found id: ""
	I0401 19:34:45.433567   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.433578   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:45.433586   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:45.433658   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:45.477108   71168 cri.go:89] found id: ""
	I0401 19:34:45.477138   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.477150   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:45.477157   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:45.477217   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:45.520350   71168 cri.go:89] found id: ""
	I0401 19:34:45.520389   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.520401   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:45.520408   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:45.520466   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:45.562871   71168 cri.go:89] found id: ""
	I0401 19:34:45.562901   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.562911   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:45.562918   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:45.562988   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:45.619214   71168 cri.go:89] found id: ""
	I0401 19:34:45.619237   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.619248   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:45.619255   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:45.619317   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:45.664361   71168 cri.go:89] found id: ""
	I0401 19:34:45.664387   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.664398   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:45.664405   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:45.664463   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:45.701087   71168 cri.go:89] found id: ""
	I0401 19:34:45.701110   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.701120   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:45.701128   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:45.701139   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:45.716839   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:45.716863   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:45.794609   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:45.794630   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:45.794642   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:45.883428   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:45.883464   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:45.934342   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:45.934374   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:47.825597   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:50.326528   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:47.505036   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:49.505287   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:51.505884   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:47.609038   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:50.106705   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:52.107802   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
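The interleaved pod_ready lines come from three other start/stop test processes (PIDs 70284, 70687 and 70962), each polling its own metrics-server pod and reporting Ready=False on every poll. A hedged sketch of the equivalent manual check, assuming the kubeconfig context for one of those clusters (the context name here is an illustrative placeholder; the pod name is taken from the log):

    kubectl --context <profile> -n kube-system get pod metrics-server-57f55c9bc5-g6z6c -o wide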
	I0401 19:34:48.492128   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:48.508674   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:48.508746   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:48.549522   71168 cri.go:89] found id: ""
	I0401 19:34:48.549545   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.549555   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:48.549561   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:48.549619   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:48.587014   71168 cri.go:89] found id: ""
	I0401 19:34:48.587037   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.587045   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:48.587051   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:48.587108   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:48.629591   71168 cri.go:89] found id: ""
	I0401 19:34:48.629620   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.629630   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:48.629636   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:48.629707   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:48.669335   71168 cri.go:89] found id: ""
	I0401 19:34:48.669363   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.669383   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:48.669400   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:48.669455   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:48.708322   71168 cri.go:89] found id: ""
	I0401 19:34:48.708350   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.708356   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:48.708362   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:48.708407   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:48.750680   71168 cri.go:89] found id: ""
	I0401 19:34:48.750708   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.750718   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:48.750726   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:48.750791   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:48.790946   71168 cri.go:89] found id: ""
	I0401 19:34:48.790974   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.790984   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:48.790998   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:48.791055   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:48.828849   71168 cri.go:89] found id: ""
	I0401 19:34:48.828871   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.828880   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:48.828889   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:48.828904   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:48.909182   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:48.909212   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:48.954285   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:48.954315   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:49.010340   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:49.010372   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:49.026493   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:49.026516   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:49.099662   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:51.599905   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:51.618094   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:51.618168   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:51.657003   71168 cri.go:89] found id: ""
	I0401 19:34:51.657028   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.657038   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:51.657046   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:51.657104   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:51.696415   71168 cri.go:89] found id: ""
	I0401 19:34:51.696441   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.696451   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:51.696456   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:51.696515   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:51.734416   71168 cri.go:89] found id: ""
	I0401 19:34:51.734445   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.734457   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:51.734465   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:51.734523   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:51.774895   71168 cri.go:89] found id: ""
	I0401 19:34:51.774918   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.774925   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:51.774931   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:51.774980   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:51.814602   71168 cri.go:89] found id: ""
	I0401 19:34:51.814623   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.814631   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:51.814637   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:51.814687   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:51.856035   71168 cri.go:89] found id: ""
	I0401 19:34:51.856061   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.856071   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:51.856078   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:51.856132   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:51.897415   71168 cri.go:89] found id: ""
	I0401 19:34:51.897440   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.897451   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:51.897457   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:51.897516   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:51.937406   71168 cri.go:89] found id: ""
	I0401 19:34:51.937428   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.937436   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:51.937443   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:51.937456   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:51.981508   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:51.981535   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:52.039956   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:52.039995   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:52.066403   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:52.066429   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:52.172509   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:52.172530   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:52.172541   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:52.827950   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:55.331369   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:54.004625   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:56.503197   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:54.607359   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:57.108257   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:54.761459   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:54.776972   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:54.777030   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:54.822945   71168 cri.go:89] found id: ""
	I0401 19:34:54.822983   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.822996   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:54.823004   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:54.823066   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:54.861602   71168 cri.go:89] found id: ""
	I0401 19:34:54.861629   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.861639   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:54.861662   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:54.861727   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:54.901283   71168 cri.go:89] found id: ""
	I0401 19:34:54.901309   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.901319   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:54.901327   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:54.901385   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:54.940071   71168 cri.go:89] found id: ""
	I0401 19:34:54.940103   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.940114   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:54.940121   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:54.940179   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:54.978447   71168 cri.go:89] found id: ""
	I0401 19:34:54.978474   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.978485   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:54.978493   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:54.978563   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:55.021786   71168 cri.go:89] found id: ""
	I0401 19:34:55.021810   71168 logs.go:276] 0 containers: []
	W0401 19:34:55.021819   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:55.021827   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:55.021886   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:55.059861   71168 cri.go:89] found id: ""
	I0401 19:34:55.059889   71168 logs.go:276] 0 containers: []
	W0401 19:34:55.059899   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:55.059907   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:55.059963   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:55.104484   71168 cri.go:89] found id: ""
	I0401 19:34:55.104516   71168 logs.go:276] 0 containers: []
	W0401 19:34:55.104527   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:55.104537   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:55.104551   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:55.152197   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:55.152221   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:55.203900   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:55.203942   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:55.221553   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:55.221580   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:55.299651   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:55.299668   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:55.299680   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:57.877382   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:57.899186   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:57.899260   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:57.948146   71168 cri.go:89] found id: ""
	I0401 19:34:57.948182   71168 logs.go:276] 0 containers: []
	W0401 19:34:57.948192   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:57.948203   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:57.948270   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:57.826282   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:59.826598   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:58.504492   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:01.003480   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:59.607646   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:02.107162   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:58.017121   71168 cri.go:89] found id: ""
	I0401 19:34:58.017150   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.017161   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:58.017168   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:58.017230   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:58.073881   71168 cri.go:89] found id: ""
	I0401 19:34:58.073905   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.073916   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:58.073923   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:58.073979   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:58.115410   71168 cri.go:89] found id: ""
	I0401 19:34:58.115435   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.115445   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:58.115452   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:58.115512   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:58.155452   71168 cri.go:89] found id: ""
	I0401 19:34:58.155481   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.155492   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:58.155500   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:58.155562   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:58.197335   71168 cri.go:89] found id: ""
	I0401 19:34:58.197376   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.197397   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:58.197407   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:58.197469   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:58.239782   71168 cri.go:89] found id: ""
	I0401 19:34:58.239808   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.239815   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:58.239820   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:58.239870   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:58.280936   71168 cri.go:89] found id: ""
	I0401 19:34:58.280961   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.280971   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:58.280982   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:58.280998   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:58.368357   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:58.368401   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:58.415104   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:58.415132   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:58.474719   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:58.474749   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:58.491004   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:58.491031   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:58.573999   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:01.074865   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:01.091751   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:01.091822   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:01.140053   71168 cri.go:89] found id: ""
	I0401 19:35:01.140079   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.140089   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:01.140096   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:01.140154   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:01.184046   71168 cri.go:89] found id: ""
	I0401 19:35:01.184078   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.184089   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:01.184096   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:01.184161   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:01.225962   71168 cri.go:89] found id: ""
	I0401 19:35:01.225989   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.225999   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:01.226006   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:01.226072   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:01.267212   71168 cri.go:89] found id: ""
	I0401 19:35:01.267234   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.267242   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:01.267247   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:01.267308   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:01.307039   71168 cri.go:89] found id: ""
	I0401 19:35:01.307066   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.307074   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:01.307080   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:01.307132   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:01.347856   71168 cri.go:89] found id: ""
	I0401 19:35:01.347886   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.347898   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:01.347905   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:01.347962   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:01.385893   71168 cri.go:89] found id: ""
	I0401 19:35:01.385923   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.385933   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:01.385940   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:01.385999   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:01.422983   71168 cri.go:89] found id: ""
	I0401 19:35:01.423012   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.423022   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:01.423033   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:01.423048   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:01.469842   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:01.469875   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:01.527536   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:01.527566   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:01.542332   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:01.542357   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:01.617252   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:01.617270   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:01.617284   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:02.325502   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:04.326603   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:06.328115   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:03.005979   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:05.504470   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:04.107681   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:06.607619   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:04.195171   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:04.211963   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:04.212015   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:04.252298   71168 cri.go:89] found id: ""
	I0401 19:35:04.252324   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.252334   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:04.252342   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:04.252396   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:04.299619   71168 cri.go:89] found id: ""
	I0401 19:35:04.299649   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.299659   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:04.299667   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:04.299725   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:04.347386   71168 cri.go:89] found id: ""
	I0401 19:35:04.347409   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.347416   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:04.347426   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:04.347473   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:04.385902   71168 cri.go:89] found id: ""
	I0401 19:35:04.385929   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.385937   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:04.385943   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:04.385993   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:04.425235   71168 cri.go:89] found id: ""
	I0401 19:35:04.425258   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.425266   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:04.425271   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:04.425325   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:04.463849   71168 cri.go:89] found id: ""
	I0401 19:35:04.463881   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.463891   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:04.463899   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:04.463974   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:04.501983   71168 cri.go:89] found id: ""
	I0401 19:35:04.502003   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.502010   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:04.502016   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:04.502072   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:04.544082   71168 cri.go:89] found id: ""
	I0401 19:35:04.544103   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.544113   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:04.544124   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:04.544141   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:04.600545   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:04.600578   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:04.617049   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:04.617075   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:04.696927   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:04.696945   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:04.696957   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:04.780024   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:04.780056   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:07.323161   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:07.339368   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:07.339432   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:07.379407   71168 cri.go:89] found id: ""
	I0401 19:35:07.379429   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.379440   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:07.379452   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:07.379497   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:07.418700   71168 cri.go:89] found id: ""
	I0401 19:35:07.418728   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.418737   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:07.418743   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:07.418788   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:07.457580   71168 cri.go:89] found id: ""
	I0401 19:35:07.457606   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.457617   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:07.457624   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:07.457696   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:07.498211   71168 cri.go:89] found id: ""
	I0401 19:35:07.498240   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.498249   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:07.498256   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:07.498318   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:07.539659   71168 cri.go:89] found id: ""
	I0401 19:35:07.539681   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.539692   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:07.539699   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:07.539759   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:07.577414   71168 cri.go:89] found id: ""
	I0401 19:35:07.577440   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.577450   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:07.577456   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:07.577520   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:07.623318   71168 cri.go:89] found id: ""
	I0401 19:35:07.623340   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.623352   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:07.623358   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:07.623416   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:07.664791   71168 cri.go:89] found id: ""
	I0401 19:35:07.664823   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.664834   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:07.664842   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:07.664854   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:07.722158   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:07.722186   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:07.737838   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:07.737876   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:07.813694   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:07.813717   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:07.813728   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:07.899698   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:07.899740   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:08.825778   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:10.825935   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:07.505933   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:10.003529   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:09.107076   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:11.108917   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:10.446184   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:10.460860   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:10.460927   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:10.505656   71168 cri.go:89] found id: ""
	I0401 19:35:10.505685   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.505692   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:10.505698   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:10.505742   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:10.547771   71168 cri.go:89] found id: ""
	I0401 19:35:10.547796   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.547814   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:10.547820   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:10.547876   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:10.584625   71168 cri.go:89] found id: ""
	I0401 19:35:10.584652   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.584664   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:10.584671   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:10.584737   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:10.625512   71168 cri.go:89] found id: ""
	I0401 19:35:10.625541   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.625552   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:10.625559   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:10.625618   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:10.664905   71168 cri.go:89] found id: ""
	I0401 19:35:10.664936   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.664949   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:10.664955   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:10.665015   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:10.703043   71168 cri.go:89] found id: ""
	I0401 19:35:10.703071   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.703082   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:10.703090   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:10.703149   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:10.747750   71168 cri.go:89] found id: ""
	I0401 19:35:10.747777   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.747790   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:10.747796   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:10.747841   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:10.792944   71168 cri.go:89] found id: ""
	I0401 19:35:10.792970   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.792980   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:10.792989   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:10.793004   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:10.854029   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:10.854058   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:10.868968   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:10.868991   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:10.940537   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:10.940564   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:10.940579   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:11.018201   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:11.018231   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:12.826117   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:14.826387   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:12.003995   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:14.503258   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:16.504686   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:13.608777   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:16.108992   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:13.562139   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:13.579370   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:13.579435   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:13.620811   71168 cri.go:89] found id: ""
	I0401 19:35:13.620838   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.620847   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:13.620859   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:13.620919   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:13.661377   71168 cri.go:89] found id: ""
	I0401 19:35:13.661408   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.661419   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:13.661427   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:13.661489   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:13.702413   71168 cri.go:89] found id: ""
	I0401 19:35:13.702436   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.702445   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:13.702453   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:13.702519   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:13.748760   71168 cri.go:89] found id: ""
	I0401 19:35:13.748788   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.748796   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:13.748803   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:13.748874   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:13.795438   71168 cri.go:89] found id: ""
	I0401 19:35:13.795460   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.795472   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:13.795479   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:13.795537   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:13.835572   71168 cri.go:89] found id: ""
	I0401 19:35:13.835601   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.835612   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:13.835619   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:13.835677   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:13.874301   71168 cri.go:89] found id: ""
	I0401 19:35:13.874327   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.874336   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:13.874342   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:13.874387   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:13.914847   71168 cri.go:89] found id: ""
	I0401 19:35:13.914876   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.914883   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:13.914891   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:13.914904   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:13.929329   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:13.929355   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:14.004332   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:14.004358   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:14.004373   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:14.084901   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:14.084935   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:14.134471   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:14.134500   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:16.693432   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:16.710258   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:16.710332   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:16.757213   71168 cri.go:89] found id: ""
	I0401 19:35:16.757243   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.757254   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:16.757261   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:16.757320   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:16.797134   71168 cri.go:89] found id: ""
	I0401 19:35:16.797174   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.797182   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:16.797188   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:16.797233   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:16.839502   71168 cri.go:89] found id: ""
	I0401 19:35:16.839530   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.839541   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:16.839549   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:16.839609   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:16.881380   71168 cri.go:89] found id: ""
	I0401 19:35:16.881406   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.881413   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:16.881419   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:16.881472   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:16.922968   71168 cri.go:89] found id: ""
	I0401 19:35:16.922991   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.923002   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:16.923009   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:16.923069   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:16.961262   71168 cri.go:89] found id: ""
	I0401 19:35:16.961290   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.961301   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:16.961310   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:16.961369   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:16.996901   71168 cri.go:89] found id: ""
	I0401 19:35:16.996929   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.996940   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:16.996947   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:16.997004   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:17.038447   71168 cri.go:89] found id: ""
	I0401 19:35:17.038473   71168 logs.go:276] 0 containers: []
	W0401 19:35:17.038481   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:17.038489   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:17.038500   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:17.079979   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:17.080013   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:17.136973   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:17.137010   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:17.153083   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:17.153108   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:17.232055   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:17.232078   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:17.232096   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:17.326246   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:19.326903   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:20.818889   70687 pod_ready.go:81] duration metric: took 4m0.000381983s for pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace to be "Ready" ...
	E0401 19:35:20.818918   70687 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace to be "Ready" (will not retry!)
	I0401 19:35:20.818938   70687 pod_ready.go:38] duration metric: took 4m5.525170808s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:35:20.818967   70687 kubeadm.go:591] duration metric: took 4m13.404699267s to restartPrimaryControlPlane
	W0401 19:35:20.819026   70687 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:35:20.819059   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:35:19.004932   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:21.504514   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:18.607067   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:20.609619   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:19.813327   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:19.830168   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:19.830229   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:19.875502   71168 cri.go:89] found id: ""
	I0401 19:35:19.875524   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.875532   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:19.875537   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:19.875591   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:19.916084   71168 cri.go:89] found id: ""
	I0401 19:35:19.916107   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.916117   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:19.916125   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:19.916188   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:19.960673   71168 cri.go:89] found id: ""
	I0401 19:35:19.960699   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.960710   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:19.960717   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:19.960796   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:19.998736   71168 cri.go:89] found id: ""
	I0401 19:35:19.998760   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.998768   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:19.998776   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:19.998840   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:20.043382   71168 cri.go:89] found id: ""
	I0401 19:35:20.043408   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.043418   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:20.043425   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:20.043492   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:20.086132   71168 cri.go:89] found id: ""
	I0401 19:35:20.086158   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.086171   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:20.086178   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:20.086239   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:20.131052   71168 cri.go:89] found id: ""
	I0401 19:35:20.131074   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.131081   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:20.131091   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:20.131151   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:20.174668   71168 cri.go:89] found id: ""
	I0401 19:35:20.174693   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.174699   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:20.174707   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:20.174718   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:20.266503   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:20.266521   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:20.266534   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:20.351555   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:20.351586   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:20.400261   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:20.400289   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:20.455149   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:20.455183   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:23.510048   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:26.005267   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:23.109720   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:25.608633   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:22.972675   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:22.987481   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:22.987555   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:23.032429   71168 cri.go:89] found id: ""
	I0401 19:35:23.032453   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.032461   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:23.032467   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:23.032522   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:23.073286   71168 cri.go:89] found id: ""
	I0401 19:35:23.073313   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.073322   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:23.073330   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:23.073397   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:23.115424   71168 cri.go:89] found id: ""
	I0401 19:35:23.115447   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.115454   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:23.115459   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:23.115506   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:23.164883   71168 cri.go:89] found id: ""
	I0401 19:35:23.164908   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.164918   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:23.164925   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:23.164985   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:23.213617   71168 cri.go:89] found id: ""
	I0401 19:35:23.213656   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.213668   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:23.213675   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:23.213787   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:23.264846   71168 cri.go:89] found id: ""
	I0401 19:35:23.264874   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.264886   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:23.264893   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:23.264958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:23.306467   71168 cri.go:89] found id: ""
	I0401 19:35:23.306495   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.306506   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:23.306514   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:23.306566   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:23.358574   71168 cri.go:89] found id: ""
	I0401 19:35:23.358597   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.358608   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:23.358619   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:23.358634   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:23.437486   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:23.437510   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:23.437525   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:23.555307   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:23.555350   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:23.601776   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:23.601808   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:23.666654   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:23.666688   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:26.184503   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:26.199924   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:26.199997   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:26.252151   71168 cri.go:89] found id: ""
	I0401 19:35:26.252181   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.252192   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:26.252199   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:26.252266   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:26.299094   71168 cri.go:89] found id: ""
	I0401 19:35:26.299126   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.299134   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:26.299139   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:26.299194   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:26.340483   71168 cri.go:89] found id: ""
	I0401 19:35:26.340516   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.340533   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:26.340540   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:26.340599   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:26.387153   71168 cri.go:89] found id: ""
	I0401 19:35:26.387180   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.387188   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:26.387194   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:26.387261   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:26.430746   71168 cri.go:89] found id: ""
	I0401 19:35:26.430773   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.430781   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:26.430787   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:26.430854   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:26.478412   71168 cri.go:89] found id: ""
	I0401 19:35:26.478440   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.478451   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:26.478458   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:26.478523   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:26.521120   71168 cri.go:89] found id: ""
	I0401 19:35:26.521150   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.521161   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:26.521168   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:26.521229   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:26.564678   71168 cri.go:89] found id: ""
	I0401 19:35:26.564721   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.564731   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:26.564742   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:26.564757   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:26.625271   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:26.625308   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:26.640505   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:26.640529   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:26.722753   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:26.722777   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:26.722795   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:26.830507   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:26.830551   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:28.505100   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:31.004387   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:28.107396   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:30.108080   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:29.386655   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:29.401232   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:29.401308   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:29.440479   71168 cri.go:89] found id: ""
	I0401 19:35:29.440511   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.440522   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:29.440530   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:29.440590   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:29.479022   71168 cri.go:89] found id: ""
	I0401 19:35:29.479049   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.479057   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:29.479062   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:29.479119   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:29.518179   71168 cri.go:89] found id: ""
	I0401 19:35:29.518208   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.518216   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:29.518222   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:29.518281   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:29.556654   71168 cri.go:89] found id: ""
	I0401 19:35:29.556682   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.556692   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:29.556712   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:29.556772   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:29.593258   71168 cri.go:89] found id: ""
	I0401 19:35:29.593287   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.593295   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:29.593301   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:29.593349   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:29.637215   71168 cri.go:89] found id: ""
	I0401 19:35:29.637243   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.637253   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:29.637261   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:29.637321   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:29.683052   71168 cri.go:89] found id: ""
	I0401 19:35:29.683090   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.683100   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:29.683108   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:29.683164   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:29.730948   71168 cri.go:89] found id: ""
	I0401 19:35:29.730979   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.730991   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:29.731001   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:29.731014   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:29.781969   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:29.782001   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:29.800700   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:29.800729   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:29.877200   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:29.877225   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:29.877244   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:29.958110   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:29.958144   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:32.501060   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:32.519551   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:32.519619   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:32.579776   71168 cri.go:89] found id: ""
	I0401 19:35:32.579802   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.579813   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:32.579824   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:32.579886   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:32.643271   71168 cri.go:89] found id: ""
	I0401 19:35:32.643300   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.643312   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:32.643322   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:32.643387   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:32.688576   71168 cri.go:89] found id: ""
	I0401 19:35:32.688605   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.688614   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:32.688619   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:32.688678   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:32.729867   71168 cri.go:89] found id: ""
	I0401 19:35:32.729890   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.729898   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:32.729906   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:32.729962   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:32.771485   71168 cri.go:89] found id: ""
	I0401 19:35:32.771508   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.771515   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:32.771521   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:32.771574   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:32.809362   71168 cri.go:89] found id: ""
	I0401 19:35:32.809385   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.809393   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:32.809398   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:32.809458   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:32.844916   71168 cri.go:89] found id: ""
	I0401 19:35:32.844941   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.844950   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:32.844955   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:32.845000   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:32.884638   71168 cri.go:89] found id: ""
	I0401 19:35:32.884660   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.884670   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:32.884680   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:32.884695   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:32.937462   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:32.937489   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:32.952842   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:32.952871   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 19:35:33.005516   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:35.504755   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:32.608051   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:35.106708   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:37.108135   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	W0401 19:35:33.035254   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:33.035278   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:33.035294   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:33.114963   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:33.114994   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:35.662190   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:35.675960   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:35.676016   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:35.717300   71168 cri.go:89] found id: ""
	I0401 19:35:35.717329   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.717340   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:35.717347   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:35.717409   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:35.756687   71168 cri.go:89] found id: ""
	I0401 19:35:35.756713   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.756723   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:35.756730   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:35.756788   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:35.796995   71168 cri.go:89] found id: ""
	I0401 19:35:35.797017   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.797025   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:35.797030   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:35.797083   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:35.840419   71168 cri.go:89] found id: ""
	I0401 19:35:35.840444   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.840455   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:35.840462   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:35.840523   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:35.880059   71168 cri.go:89] found id: ""
	I0401 19:35:35.880093   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.880107   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:35.880113   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:35.880171   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:35.929491   71168 cri.go:89] found id: ""
	I0401 19:35:35.929515   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.929523   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:35.929530   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:35.929584   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:35.968745   71168 cri.go:89] found id: ""
	I0401 19:35:35.968771   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.968778   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:35.968784   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:35.968833   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:36.014294   71168 cri.go:89] found id: ""
	I0401 19:35:36.014318   71168 logs.go:276] 0 containers: []
	W0401 19:35:36.014328   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:36.014338   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:36.014359   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:36.068418   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:36.068450   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:36.086343   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:36.086367   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:36.172027   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:36.172053   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:36.172067   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:36.250046   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:36.250080   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:38.004007   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:40.004138   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:39.607714   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:42.107775   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:38.794261   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:38.809535   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:38.809597   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:38.849139   71168 cri.go:89] found id: ""
	I0401 19:35:38.849167   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.849176   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:38.849181   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:38.849238   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:38.886787   71168 cri.go:89] found id: ""
	I0401 19:35:38.886811   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.886821   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:38.886828   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:38.886891   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:38.923388   71168 cri.go:89] found id: ""
	I0401 19:35:38.923419   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.923431   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:38.923438   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:38.923497   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:38.959583   71168 cri.go:89] found id: ""
	I0401 19:35:38.959608   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.959619   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:38.959626   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:38.959682   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:38.998201   71168 cri.go:89] found id: ""
	I0401 19:35:38.998226   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.998233   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:38.998238   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:38.998294   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:39.039669   71168 cri.go:89] found id: ""
	I0401 19:35:39.039692   71168 logs.go:276] 0 containers: []
	W0401 19:35:39.039703   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:39.039710   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:39.039767   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:39.077331   71168 cri.go:89] found id: ""
	I0401 19:35:39.077358   71168 logs.go:276] 0 containers: []
	W0401 19:35:39.077366   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:39.077371   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:39.077423   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:39.125999   71168 cri.go:89] found id: ""
	I0401 19:35:39.126021   71168 logs.go:276] 0 containers: []
	W0401 19:35:39.126031   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:39.126041   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:39.126054   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:39.183579   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:39.183612   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:39.201200   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:39.201227   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:39.282262   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:39.282280   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:39.282291   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:39.365340   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:39.365370   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:41.914909   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:41.929243   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:41.929317   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:41.975594   71168 cri.go:89] found id: ""
	I0401 19:35:41.975622   71168 logs.go:276] 0 containers: []
	W0401 19:35:41.975632   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:41.975639   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:41.975701   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:42.023558   71168 cri.go:89] found id: ""
	I0401 19:35:42.023585   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.023596   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:42.023602   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:42.023662   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:42.074242   71168 cri.go:89] found id: ""
	I0401 19:35:42.074266   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.074276   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:42.074283   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:42.074340   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:42.123327   71168 cri.go:89] found id: ""
	I0401 19:35:42.123358   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.123370   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:42.123378   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:42.123452   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:42.168931   71168 cri.go:89] found id: ""
	I0401 19:35:42.168961   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.168972   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:42.168980   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:42.169037   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:42.211747   71168 cri.go:89] found id: ""
	I0401 19:35:42.211774   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.211784   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:42.211793   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:42.211849   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:42.251809   71168 cri.go:89] found id: ""
	I0401 19:35:42.251830   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.251841   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:42.251849   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:42.251908   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:42.293266   71168 cri.go:89] found id: ""
	I0401 19:35:42.293361   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.293377   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:42.293388   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:42.293405   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:42.364502   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:42.364553   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:42.381147   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:42.381180   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:42.464219   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:42.464238   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:42.464249   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:42.544564   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:42.544594   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:42.006061   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:44.504700   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:46.505615   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:44.606915   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:46.100004   70962 pod_ready.go:81] duration metric: took 4m0.000146584s for pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace to be "Ready" ...
	E0401 19:35:46.100029   70962 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0401 19:35:46.100044   70962 pod_ready.go:38] duration metric: took 4m10.491414096s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:35:46.100088   70962 kubeadm.go:591] duration metric: took 4m18.223285856s to restartPrimaryControlPlane
	W0401 19:35:46.100141   70962 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:35:46.100164   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:35:45.105777   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:45.119911   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:45.119976   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:45.161871   71168 cri.go:89] found id: ""
	I0401 19:35:45.161890   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.161897   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:45.161902   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:45.161949   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:45.198677   71168 cri.go:89] found id: ""
	I0401 19:35:45.198702   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.198710   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:45.198715   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:45.198776   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:45.236938   71168 cri.go:89] found id: ""
	I0401 19:35:45.236972   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.236983   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:45.236990   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:45.237052   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:45.280621   71168 cri.go:89] found id: ""
	I0401 19:35:45.280650   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.280661   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:45.280668   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:45.280727   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:45.326794   71168 cri.go:89] found id: ""
	I0401 19:35:45.326818   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.326827   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:45.326834   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:45.326892   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:45.369405   71168 cri.go:89] found id: ""
	I0401 19:35:45.369431   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.369441   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:45.369446   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:45.369501   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:45.407609   71168 cri.go:89] found id: ""
	I0401 19:35:45.407635   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.407643   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:45.407648   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:45.407720   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:45.444848   71168 cri.go:89] found id: ""
	I0401 19:35:45.444871   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.444881   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:45.444891   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:45.444911   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:45.531938   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:45.531957   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:45.531972   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:45.617109   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:45.617141   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:45.663559   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:45.663591   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:45.717622   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:45.717670   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:49.004037   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:51.004650   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:48.234834   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:48.250543   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:48.250606   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:48.294396   71168 cri.go:89] found id: ""
	I0401 19:35:48.294423   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.294432   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:48.294439   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:48.294504   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:48.336866   71168 cri.go:89] found id: ""
	I0401 19:35:48.336892   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.336902   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:48.336908   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:48.336965   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:48.376031   71168 cri.go:89] found id: ""
	I0401 19:35:48.376065   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.376076   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:48.376084   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:48.376142   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:48.414975   71168 cri.go:89] found id: ""
	I0401 19:35:48.414995   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.415003   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:48.415008   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:48.415058   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:48.453484   71168 cri.go:89] found id: ""
	I0401 19:35:48.453513   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.453524   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:48.453532   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:48.453593   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:48.487712   71168 cri.go:89] found id: ""
	I0401 19:35:48.487739   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.487749   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:48.487757   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:48.487815   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:48.533331   71168 cri.go:89] found id: ""
	I0401 19:35:48.533364   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.533375   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:48.533383   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:48.533442   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:48.574103   71168 cri.go:89] found id: ""
	I0401 19:35:48.574131   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.574139   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:48.574147   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:48.574160   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:48.632068   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:48.632098   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:48.649342   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:48.649369   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:48.721799   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:48.721822   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:48.721836   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:48.821549   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:48.821584   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:51.364852   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:51.380281   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:51.380362   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:51.423383   71168 cri.go:89] found id: ""
	I0401 19:35:51.423412   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.423422   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:51.423430   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:51.423490   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:51.470331   71168 cri.go:89] found id: ""
	I0401 19:35:51.470359   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.470370   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:51.470378   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:51.470441   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:51.520310   71168 cri.go:89] found id: ""
	I0401 19:35:51.520339   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.520350   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:51.520358   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:51.520414   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:51.568681   71168 cri.go:89] found id: ""
	I0401 19:35:51.568706   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.568716   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:51.568724   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:51.568843   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:51.615146   71168 cri.go:89] found id: ""
	I0401 19:35:51.615174   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.615185   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:51.615193   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:51.615256   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:51.658678   71168 cri.go:89] found id: ""
	I0401 19:35:51.658703   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.658712   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:51.658720   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:51.658791   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:51.700071   71168 cri.go:89] found id: ""
	I0401 19:35:51.700097   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.700108   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:51.700114   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:51.700177   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:51.746772   71168 cri.go:89] found id: ""
	I0401 19:35:51.746798   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.746809   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:51.746826   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:51.746849   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:51.762321   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:51.762350   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:51.843300   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:51.843322   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:51.843337   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:51.919059   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:51.919090   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:51.965899   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:51.965925   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
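The block above shows minikube enumerating CRI containers by name and, finding none, falling back to host-level logs (dmesg, kubelet, CRI-O). A minimal sketch of repeating that inspection by hand on the node, using the same commands the log records; the profile name is not shown at this point in the log, so it is left as a placeholder:

    # inside the node, e.g. via `minikube ssh -p <profile>` (<profile> is a placeholder)
    sudo crictl ps -a --quiet --name=kube-apiserver   # empty output means no such container exists
    sudo journalctl -u kubelet -n 400                 # kubelet logs, gathered as a fallback
    sudo journalctl -u crio -n 400                    # CRI-O logs, gathered as a fallback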
	I0401 19:35:53.564613   70687 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.745530657s)
	I0401 19:35:53.564696   70687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:35:53.582161   70687 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:35:53.593313   70687 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:35:53.604441   70687 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:35:53.604460   70687 kubeadm.go:156] found existing configuration files:
	
	I0401 19:35:53.604502   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:35:53.615367   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:35:53.615426   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:35:53.626375   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:35:53.636924   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:35:53.636975   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:35:53.647493   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:35:53.657319   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:35:53.657373   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:35:53.667422   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:35:53.677235   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:35:53.677308   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
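The grep/rm sequence above is minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed before kubeadm init runs. A condensed, purely illustrative sketch of the same loop, with the endpoint taken from the log:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf
    done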
	I0401 19:35:53.688043   70687 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:35:53.894204   70687 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:35:53.504486   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:55.505966   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:54.523484   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:54.542004   71168 kubeadm.go:591] duration metric: took 4m4.024054342s to restartPrimaryControlPlane
	W0401 19:35:54.542067   71168 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:35:54.542088   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:35:55.179619   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:35:55.196424   71168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:35:55.209517   71168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:35:55.222643   71168 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:35:55.222664   71168 kubeadm.go:156] found existing configuration files:
	
	I0401 19:35:55.222714   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:35:55.234756   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:35:55.234813   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:35:55.246725   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:35:55.258440   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:35:55.258499   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:35:55.270106   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:35:55.280724   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:35:55.280776   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:35:55.293630   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:35:55.305588   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:35:55.305660   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:35:55.318308   71168 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:35:55.574896   71168 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:35:58.004494   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:00.505168   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:02.622337   70687 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 19:36:02.622433   70687 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:36:02.622548   70687 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:36:02.622659   70687 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:36:02.622794   70687 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:36:02.622883   70687 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:36:02.624550   70687 out.go:204]   - Generating certificates and keys ...
	I0401 19:36:02.624640   70687 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:36:02.624734   70687 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:36:02.624861   70687 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:36:02.624952   70687 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:36:02.625042   70687 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:36:02.625114   70687 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:36:02.625206   70687 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:36:02.625271   70687 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:36:02.625337   70687 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:36:02.625398   70687 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:36:02.625430   70687 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:36:02.625475   70687 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:36:02.625519   70687 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:36:02.625567   70687 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 19:36:02.625630   70687 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:36:02.625744   70687 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:36:02.625825   70687 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:36:02.625938   70687 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:36:02.626041   70687 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:36:02.627616   70687 out.go:204]   - Booting up control plane ...
	I0401 19:36:02.627744   70687 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:36:02.627812   70687 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:36:02.627878   70687 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:36:02.627976   70687 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:36:02.628046   70687 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:36:02.628098   70687 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:36:02.628273   70687 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:36:02.628354   70687 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.502318 seconds
	I0401 19:36:02.628467   70687 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 19:36:02.628587   70687 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 19:36:02.628642   70687 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 19:36:02.628800   70687 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-882095 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 19:36:02.628849   70687 kubeadm.go:309] [bootstrap-token] Using token: 821cxx.fac41nwqi8u5mwgu
	I0401 19:36:02.630202   70687 out.go:204]   - Configuring RBAC rules ...
	I0401 19:36:02.630328   70687 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 19:36:02.630413   70687 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 19:36:02.630593   70687 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 19:36:02.630794   70687 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 19:36:02.630941   70687 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 19:36:02.631049   70687 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 19:36:02.631205   70687 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 19:36:02.631255   70687 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 19:36:02.631318   70687 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 19:36:02.631326   70687 kubeadm.go:309] 
	I0401 19:36:02.631412   70687 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 19:36:02.631421   70687 kubeadm.go:309] 
	I0401 19:36:02.631527   70687 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 19:36:02.631534   70687 kubeadm.go:309] 
	I0401 19:36:02.631560   70687 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 19:36:02.631649   70687 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 19:36:02.631721   70687 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 19:36:02.631731   70687 kubeadm.go:309] 
	I0401 19:36:02.631810   70687 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 19:36:02.631822   70687 kubeadm.go:309] 
	I0401 19:36:02.631896   70687 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 19:36:02.631910   70687 kubeadm.go:309] 
	I0401 19:36:02.631986   70687 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 19:36:02.632088   70687 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 19:36:02.632181   70687 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 19:36:02.632190   70687 kubeadm.go:309] 
	I0401 19:36:02.632319   70687 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 19:36:02.632427   70687 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 19:36:02.632437   70687 kubeadm.go:309] 
	I0401 19:36:02.632532   70687 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 821cxx.fac41nwqi8u5mwgu \
	I0401 19:36:02.632695   70687 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 \
	I0401 19:36:02.632726   70687 kubeadm.go:309] 	--control-plane 
	I0401 19:36:02.632736   70687 kubeadm.go:309] 
	I0401 19:36:02.632860   70687 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 19:36:02.632875   70687 kubeadm.go:309] 
	I0401 19:36:02.632983   70687 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 821cxx.fac41nwqi8u5mwgu \
	I0401 19:36:02.633118   70687 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 
	I0401 19:36:02.633132   70687 cni.go:84] Creating CNI manager for ""
	I0401 19:36:02.633138   70687 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:36:02.634595   70687 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:36:02.635812   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:36:02.671750   70687 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
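The step above writes a 457-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist; the file contents are not included in this log. One could inspect the result on the node with something like the following (the /opt/cni/bin path is the conventional CNI plugin directory and is an assumption here, not taken from this run):

    sudo cat /etc/cni/net.d/1-k8s.conflist   # the conflist minikube just wrote
    ls /opt/cni/bin                          # bridge/host-local/portmap plugin binaries, if installed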
	I0401 19:36:02.705562   70687 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 19:36:02.705657   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:02.705671   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-882095 minikube.k8s.io/updated_at=2024_04_01T19_36_02_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=embed-certs-882095 minikube.k8s.io/primary=true
	I0401 19:36:02.762626   70687 ops.go:34] apiserver oom_adj: -16
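The two kubectl invocations above bind the kube-system default service account to cluster-admin through a ClusterRoleBinding named minikube-rbac and label the node with minikube metadata (minikube.k8s.io/primary=true among others). A quick check of both from the host, assuming the kubeconfig context created for this profile:

    kubectl --context embed-certs-882095 get clusterrolebinding minikube-rbac
    kubectl --context embed-certs-882095 get node embed-certs-882095 --show-labels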
	I0401 19:36:03.065957   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:03.566513   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:04.066178   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:04.566321   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:05.066798   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:05.566877   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:06.066520   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:03.004878   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:05.505057   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:06.566982   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:07.066931   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:07.566107   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:08.066843   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:08.566186   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:09.066550   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:09.566205   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:10.066287   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:10.566902   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:11.066656   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:08.005380   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:10.504026   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:11.566894   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:12.066235   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:12.566599   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:13.066132   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:13.566865   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:14.066759   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:14.566435   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:15.066907   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:15.566851   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:16.066880   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:16.158125   70687 kubeadm.go:1107] duration metric: took 13.452541301s to wait for elevateKubeSystemPrivileges
	W0401 19:36:16.158168   70687 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 19:36:16.158176   70687 kubeadm.go:393] duration metric: took 5m8.800288084s to StartCluster
	I0401 19:36:16.158195   70687 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:36:16.158268   70687 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:36:16.159976   70687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:36:16.160254   70687 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:36:16.162239   70687 out.go:177] * Verifying Kubernetes components...
	I0401 19:36:16.160346   70687 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 19:36:16.162276   70687 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-882095"
	I0401 19:36:16.162311   70687 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-882095"
	W0401 19:36:16.162320   70687 addons.go:243] addon storage-provisioner should already be in state true
	I0401 19:36:16.162339   70687 addons.go:69] Setting default-storageclass=true in profile "embed-certs-882095"
	I0401 19:36:16.162348   70687 addons.go:69] Setting metrics-server=true in profile "embed-certs-882095"
	I0401 19:36:16.162363   70687 addons.go:234] Setting addon metrics-server=true in "embed-certs-882095"
	W0401 19:36:16.162371   70687 addons.go:243] addon metrics-server should already be in state true
	I0401 19:36:16.162377   70687 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-882095"
	I0401 19:36:16.162384   70687 host.go:66] Checking if "embed-certs-882095" exists ...
	I0401 19:36:16.162345   70687 host.go:66] Checking if "embed-certs-882095" exists ...
	I0401 19:36:16.163767   70687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:36:16.160484   70687 config.go:182] Loaded profile config "embed-certs-882095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:36:16.162673   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.162687   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.163886   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.163900   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.162704   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.163963   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.180743   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41647
	I0401 19:36:16.180759   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46707
	I0401 19:36:16.180746   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44419
	I0401 19:36:16.181334   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.181342   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.181369   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.181830   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.181848   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.181973   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.181991   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.182001   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.182007   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.182187   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.182360   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.182393   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.182592   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:36:16.182726   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.182753   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.182829   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.182871   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.186198   70687 addons.go:234] Setting addon default-storageclass=true in "embed-certs-882095"
	W0401 19:36:16.186226   70687 addons.go:243] addon default-storageclass should already be in state true
	I0401 19:36:16.186258   70687 host.go:66] Checking if "embed-certs-882095" exists ...
	I0401 19:36:16.186603   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.186636   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.198494   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I0401 19:36:16.198862   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.199298   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.199315   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.199777   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.200056   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:36:16.201955   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39769
	I0401 19:36:16.202167   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:36:16.202416   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.204728   70687 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:36:16.202891   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.205309   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35751
	I0401 19:36:16.207964   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.208022   70687 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:36:16.208038   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 19:36:16.208057   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:36:16.208345   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.208482   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.208550   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:36:16.209106   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.209121   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.209764   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.210220   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.210258   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.211015   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:36:16.213549   70687 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 19:36:16.212105   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.215606   70687 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 19:36:16.213577   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:36:16.215625   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 19:36:16.215632   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.212867   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:36:16.215647   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:36:16.215791   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:36:16.215913   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:36:16.216028   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:36:16.218302   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.218924   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:36:16.218948   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.219174   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:36:16.219340   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:36:16.219496   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:36:16.219818   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:36:16.227813   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0401 19:36:16.228198   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.228612   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.228635   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.228989   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.229159   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:36:16.230712   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:36:16.230969   70687 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 19:36:16.230987   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 19:36:16.231003   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:36:16.233712   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.234102   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:36:16.234126   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.234273   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:36:16.234435   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:36:16.234593   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:36:16.234753   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:36:16.332504   70687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:36:16.354423   70687 node_ready.go:35] waiting up to 6m0s for node "embed-certs-882095" to be "Ready" ...
	I0401 19:36:16.363527   70687 node_ready.go:49] node "embed-certs-882095" has status "Ready":"True"
	I0401 19:36:16.363555   70687 node_ready.go:38] duration metric: took 9.10669ms for node "embed-certs-882095" to be "Ready" ...
	I0401 19:36:16.363567   70687 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:36:16.369606   70687 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-fx6hf" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:16.435769   70687 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 19:36:16.435793   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 19:36:16.450934   70687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:36:16.468137   70687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 19:36:16.474209   70687 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 19:36:16.474233   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 19:36:13.003028   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:15.004924   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:16.530201   70687 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:36:16.530222   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 19:36:16.607557   70687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
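The apply above installs the metrics-server addon objects from the four manifests copied earlier (APIService, Deployment, RBAC, Service). A way to confirm the Deployment and its APIService were created, assuming the standard v1beta1.metrics.k8s.io APIService name used by metrics-server:

    kubectl --context embed-certs-882095 -n kube-system get deploy metrics-server
    kubectl --context embed-certs-882095 get apiservice v1beta1.metrics.k8s.io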
	I0401 19:36:17.044156   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.044183   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.044165   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.044244   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.044569   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.044606   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.044617   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.044624   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.044630   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.044639   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.044656   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.044657   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Closing plugin on server side
	I0401 19:36:17.044670   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.044616   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Closing plugin on server side
	I0401 19:36:17.044947   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.044963   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.044964   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.044973   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.045019   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Closing plugin on server side
	I0401 19:36:17.058441   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.058469   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.058718   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.058735   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.276263   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.276283   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.276548   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.276562   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.276571   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.276584   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.276823   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.276837   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.276852   70687 addons.go:470] Verifying addon metrics-server=true in "embed-certs-882095"
	I0401 19:36:17.278536   70687 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0401 19:36:17.279740   70687 addons.go:505] duration metric: took 1.119396s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0401 19:36:18.412746   70687 pod_ready.go:102] pod "coredns-76f75df574-fx6hf" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:19.378799   70687 pod_ready.go:92] pod "coredns-76f75df574-fx6hf" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.378819   70687 pod_ready.go:81] duration metric: took 3.009189982s for pod "coredns-76f75df574-fx6hf" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.378828   70687 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hwbw6" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.384482   70687 pod_ready.go:92] pod "coredns-76f75df574-hwbw6" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.384498   70687 pod_ready.go:81] duration metric: took 5.664781ms for pod "coredns-76f75df574-hwbw6" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.384507   70687 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.390258   70687 pod_ready.go:92] pod "etcd-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.390274   70687 pod_ready.go:81] duration metric: took 5.761319ms for pod "etcd-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.390281   70687 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.395592   70687 pod_ready.go:92] pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.395611   70687 pod_ready.go:81] duration metric: took 5.323181ms for pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.395622   70687 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.400979   70687 pod_ready.go:92] pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.400994   70687 pod_ready.go:81] duration metric: took 5.365282ms for pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.401002   70687 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mbs4m" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.775009   70687 pod_ready.go:92] pod "kube-proxy-mbs4m" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.775036   70687 pod_ready.go:81] duration metric: took 374.027521ms for pod "kube-proxy-mbs4m" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.775047   70687 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:20.174962   70687 pod_ready.go:92] pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:20.174986   70687 pod_ready.go:81] duration metric: took 399.930828ms for pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:20.174994   70687 pod_ready.go:38] duration metric: took 3.811414774s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:36:20.175006   70687 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:36:20.175064   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:36:20.191452   70687 api_server.go:72] duration metric: took 4.031156406s to wait for apiserver process to appear ...
	I0401 19:36:20.191477   70687 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:36:20.191498   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:36:20.196706   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I0401 19:36:20.197772   70687 api_server.go:141] control plane version: v1.29.3
	I0401 19:36:20.197791   70687 api_server.go:131] duration metric: took 6.308074ms to wait for apiserver health ...
	I0401 19:36:20.197799   70687 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:36:20.380616   70687 system_pods.go:59] 9 kube-system pods found
	I0401 19:36:20.380645   70687 system_pods.go:61] "coredns-76f75df574-fx6hf" [1c07b740-3374-4a54-a786-784b23ec6b83] Running
	I0401 19:36:20.380651   70687 system_pods.go:61] "coredns-76f75df574-hwbw6" [7b12145a-2689-47e9-9724-d80790ed079c] Running
	I0401 19:36:20.380657   70687 system_pods.go:61] "etcd-embed-certs-882095" [3848d128-2fde-42f5-9543-b8d0343ba15b] Running
	I0401 19:36:20.380663   70687 system_pods.go:61] "kube-apiserver-embed-certs-882095" [116c5cd1-2d04-4a85-96e9-bd1e6af4cba4] Running
	I0401 19:36:20.380668   70687 system_pods.go:61] "kube-controller-manager-embed-certs-882095" [8a2282cf-2a87-4cee-a482-355e92048642] Running
	I0401 19:36:20.380672   70687 system_pods.go:61] "kube-proxy-mbs4m" [ffccbae0-7538-4a75-a6ce-afce49865f07] Running
	I0401 19:36:20.380676   70687 system_pods.go:61] "kube-scheduler-embed-certs-882095" [d2554007-1c9c-4238-809a-72aae1fb7de3] Running
	I0401 19:36:20.380684   70687 system_pods.go:61] "metrics-server-57f55c9bc5-dktr6" [c6adfcab-c746-4ad8-abe2-8b300389a4f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:20.380689   70687 system_pods.go:61] "storage-provisioner" [bcff0d1d-a555-4b25-9aa5-7ab1188c21fd] Running
	I0401 19:36:20.380700   70687 system_pods.go:74] duration metric: took 182.895079ms to wait for pod list to return data ...
	I0401 19:36:20.380711   70687 default_sa.go:34] waiting for default service account to be created ...
	I0401 19:36:20.574739   70687 default_sa.go:45] found service account: "default"
	I0401 19:36:20.574771   70687 default_sa.go:55] duration metric: took 194.049249ms for default service account to be created ...
	I0401 19:36:20.574785   70687 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 19:36:20.781600   70687 system_pods.go:86] 9 kube-system pods found
	I0401 19:36:20.781630   70687 system_pods.go:89] "coredns-76f75df574-fx6hf" [1c07b740-3374-4a54-a786-784b23ec6b83] Running
	I0401 19:36:20.781638   70687 system_pods.go:89] "coredns-76f75df574-hwbw6" [7b12145a-2689-47e9-9724-d80790ed079c] Running
	I0401 19:36:20.781658   70687 system_pods.go:89] "etcd-embed-certs-882095" [3848d128-2fde-42f5-9543-b8d0343ba15b] Running
	I0401 19:36:20.781664   70687 system_pods.go:89] "kube-apiserver-embed-certs-882095" [116c5cd1-2d04-4a85-96e9-bd1e6af4cba4] Running
	I0401 19:36:20.781672   70687 system_pods.go:89] "kube-controller-manager-embed-certs-882095" [8a2282cf-2a87-4cee-a482-355e92048642] Running
	I0401 19:36:20.781678   70687 system_pods.go:89] "kube-proxy-mbs4m" [ffccbae0-7538-4a75-a6ce-afce49865f07] Running
	I0401 19:36:20.781686   70687 system_pods.go:89] "kube-scheduler-embed-certs-882095" [d2554007-1c9c-4238-809a-72aae1fb7de3] Running
	I0401 19:36:20.781695   70687 system_pods.go:89] "metrics-server-57f55c9bc5-dktr6" [c6adfcab-c746-4ad8-abe2-8b300389a4f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:20.781705   70687 system_pods.go:89] "storage-provisioner" [bcff0d1d-a555-4b25-9aa5-7ab1188c21fd] Running
	I0401 19:36:20.781722   70687 system_pods.go:126] duration metric: took 206.928658ms to wait for k8s-apps to be running ...
	I0401 19:36:20.781738   70687 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 19:36:20.781789   70687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:36:20.798910   70687 system_svc.go:56] duration metric: took 17.163227ms WaitForService to wait for kubelet
	I0401 19:36:20.798940   70687 kubeadm.go:576] duration metric: took 4.638649198s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:36:20.798962   70687 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:36:20.975011   70687 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:36:20.975034   70687 node_conditions.go:123] node cpu capacity is 2
	I0401 19:36:20.975045   70687 node_conditions.go:105] duration metric: took 176.077669ms to run NodePressure ...
	I0401 19:36:20.975055   70687 start.go:240] waiting for startup goroutines ...
	I0401 19:36:20.975061   70687 start.go:245] waiting for cluster config update ...
	I0401 19:36:20.975070   70687 start.go:254] writing updated cluster config ...
	I0401 19:36:20.975313   70687 ssh_runner.go:195] Run: rm -f paused
	I0401 19:36:21.024261   70687 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0401 19:36:21.026583   70687 out.go:177] * Done! kubectl is now configured to use "embed-certs-882095" cluster and "default" namespace by default
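The cluster is now up, but the kube-system listing above still shows metrics-server-57f55c9bc5-dktr6 Pending with its container not ready; the addon was pointed at fake.domain/registry.k8s.io/echoserver:1.4, which is not a pullable image, so the pod presumably cannot become Ready. A way to confirm the pull failure from the host, assuming the addon's usual k8s-app=metrics-server label:

    kubectl --context embed-certs-882095 -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context embed-certs-882095 -n kube-system describe pod -l k8s-app=metrics-server | tail -n 20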
	I0401 19:36:17.504621   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:20.003964   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:18.623277   70962 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.523094705s)
	I0401 19:36:18.623344   70962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:36:18.640939   70962 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:36:18.653983   70962 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:36:18.666162   70962 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:36:18.666182   70962 kubeadm.go:156] found existing configuration files:
	
	I0401 19:36:18.666233   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0401 19:36:18.679043   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:36:18.679092   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:36:18.690185   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0401 19:36:18.703017   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:36:18.703078   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:36:18.714986   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0401 19:36:18.727138   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:36:18.727188   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:36:18.737886   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0401 19:36:18.748013   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:36:18.748064   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:36:18.758552   70962 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:36:18.988309   70962 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:36:22.004400   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:24.004510   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:26.504264   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:28.053408   70962 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 19:36:28.053478   70962 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:36:28.053544   70962 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:36:28.053677   70962 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:36:28.053837   70962 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:36:28.053953   70962 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:36:28.055426   70962 out.go:204]   - Generating certificates and keys ...
	I0401 19:36:28.055513   70962 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:36:28.055614   70962 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:36:28.055742   70962 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:36:28.055834   70962 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:36:28.055942   70962 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:36:28.056022   70962 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:36:28.056104   70962 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:36:28.056167   70962 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:36:28.056250   70962 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:36:28.056331   70962 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:36:28.056371   70962 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:36:28.056449   70962 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:36:28.056531   70962 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:36:28.056600   70962 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 19:36:28.056677   70962 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:36:28.056772   70962 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:36:28.056870   70962 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:36:28.057006   70962 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:36:28.057100   70962 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:36:28.058575   70962 out.go:204]   - Booting up control plane ...
	I0401 19:36:28.058693   70962 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:36:28.058773   70962 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:36:28.058830   70962 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:36:28.058923   70962 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:36:28.058998   70962 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:36:28.059032   70962 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:36:28.059201   70962 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:36:28.059307   70962 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003148 seconds
	I0401 19:36:28.059432   70962 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 19:36:28.059592   70962 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 19:36:28.059665   70962 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 19:36:28.059892   70962 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-734648 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 19:36:28.059966   70962 kubeadm.go:309] [bootstrap-token] Using token: x76swh.zbuhmc8jrh5hodf9
	I0401 19:36:28.061321   70962 out.go:204]   - Configuring RBAC rules ...
	I0401 19:36:28.061450   70962 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 19:36:28.061577   70962 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 19:36:28.061803   70962 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 19:36:28.061993   70962 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 19:36:28.062153   70962 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 19:36:28.062252   70962 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 19:36:28.062363   70962 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 19:36:28.062422   70962 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 19:36:28.062481   70962 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 19:36:28.062493   70962 kubeadm.go:309] 
	I0401 19:36:28.062556   70962 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 19:36:28.062569   70962 kubeadm.go:309] 
	I0401 19:36:28.062686   70962 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 19:36:28.062697   70962 kubeadm.go:309] 
	I0401 19:36:28.062727   70962 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 19:36:28.062805   70962 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 19:36:28.062872   70962 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 19:36:28.062886   70962 kubeadm.go:309] 
	I0401 19:36:28.062959   70962 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 19:36:28.062969   70962 kubeadm.go:309] 
	I0401 19:36:28.063050   70962 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 19:36:28.063061   70962 kubeadm.go:309] 
	I0401 19:36:28.063103   70962 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 19:36:28.063172   70962 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 19:36:28.063234   70962 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 19:36:28.063240   70962 kubeadm.go:309] 
	I0401 19:36:28.063337   70962 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 19:36:28.063440   70962 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 19:36:28.063453   70962 kubeadm.go:309] 
	I0401 19:36:28.063559   70962 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token x76swh.zbuhmc8jrh5hodf9 \
	I0401 19:36:28.063676   70962 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 \
	I0401 19:36:28.063725   70962 kubeadm.go:309] 	--control-plane 
	I0401 19:36:28.063734   70962 kubeadm.go:309] 
	I0401 19:36:28.063835   70962 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 19:36:28.063844   70962 kubeadm.go:309] 
	I0401 19:36:28.063955   70962 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token x76swh.zbuhmc8jrh5hodf9 \
	I0401 19:36:28.064092   70962 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 
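	(The bootstrap token x76swh.zbuhmc8jrh5hodf9 in the join commands above is short-lived. If it expires before another node joins, an equivalent join command can be regenerated on the control-plane node with standard kubeadm tooling; this is general kubeadm usage, not something this test run performs:

	  sudo kubeadm token create --print-join-command
	  # prints a fresh "kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash sha256:..." line
	)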
	I0401 19:36:28.064105   70962 cni.go:84] Creating CNI manager for ""
	I0401 19:36:28.064114   70962 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:36:28.065560   70962 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:36:28.505029   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:31.005436   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:28.066823   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:36:28.089595   70962 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
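	(The 457-byte file written to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI configuration. As a rough sketch of what a bridge conflist of this kind usually contains; the field values and subnet below are illustrative assumptions, not read back from this run:

	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "bridge",
	        "isGateway": true,
	        "ipMasq": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }
	)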
	I0401 19:36:28.150074   70962 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 19:36:28.150195   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:28.150206   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-734648 minikube.k8s.io/updated_at=2024_04_01T19_36_28_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=default-k8s-diff-port-734648 minikube.k8s.io/primary=true
	I0401 19:36:28.494391   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:28.529148   70962 ops.go:34] apiserver oom_adj: -16
	I0401 19:36:28.994780   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:29.494976   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:29.994627   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:30.495192   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:30.995334   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:31.494861   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:31.994576   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:33.505264   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:35.506298   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:32.495185   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:32.995090   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:33.494755   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:33.994758   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:34.494609   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:34.995423   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:35.495219   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:35.994557   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:36.495175   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:36.994857   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:37.494725   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:37.994846   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:38.494687   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:38.994615   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:39.494929   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:39.994514   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:40.494838   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:40.994846   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:41.105036   70962 kubeadm.go:1107] duration metric: took 12.954907711s to wait for elevateKubeSystemPrivileges
	W0401 19:36:41.105072   70962 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 19:36:41.105080   70962 kubeadm.go:393] duration metric: took 5m13.291890816s to StartCluster
	I0401 19:36:41.105098   70962 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:36:41.105193   70962 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:36:41.107226   70962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:36:41.107451   70962 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.145 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:36:41.109245   70962 out.go:177] * Verifying Kubernetes components...
	I0401 19:36:41.107543   70962 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 19:36:41.107682   70962 config.go:182] Loaded profile config "default-k8s-diff-port-734648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:36:41.110583   70962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:36:41.110596   70962 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-734648"
	I0401 19:36:41.110621   70962 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-734648"
	I0401 19:36:41.110620   70962 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-734648"
	I0401 19:36:41.110652   70962 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-734648"
	I0401 19:36:41.110588   70962 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-734648"
	W0401 19:36:41.110665   70962 addons.go:243] addon metrics-server should already be in state true
	I0401 19:36:41.110685   70962 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-734648"
	W0401 19:36:41.110699   70962 addons.go:243] addon storage-provisioner should already be in state true
	I0401 19:36:41.110700   70962 host.go:66] Checking if "default-k8s-diff-port-734648" exists ...
	I0401 19:36:41.110727   70962 host.go:66] Checking if "default-k8s-diff-port-734648" exists ...
	I0401 19:36:41.111032   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.111039   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.111062   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.111098   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.111126   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.111158   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.129376   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46657
	I0401 19:36:41.130833   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38623
	I0401 19:36:41.131158   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.131258   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.131761   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.131786   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.132119   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.132313   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.132437   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.132477   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:36:41.133129   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36213
	I0401 19:36:41.133449   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.133456   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.133871   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.133894   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.133990   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.134021   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.134159   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.134572   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.134609   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.143808   70962 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-734648"
	W0401 19:36:41.143829   70962 addons.go:243] addon default-storageclass should already be in state true
	I0401 19:36:41.143858   70962 host.go:66] Checking if "default-k8s-diff-port-734648" exists ...
	I0401 19:36:41.144202   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.144241   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.154009   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38703
	I0401 19:36:41.156112   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45449
	I0401 19:36:41.156579   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.157085   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.157112   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.157458   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.157631   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:36:41.157891   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.158593   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.158615   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.158924   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.159123   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:36:41.160683   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:36:41.162801   70962 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 19:36:41.164275   70962 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 19:36:41.164292   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 19:36:41.164310   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:36:41.162762   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:36:41.163321   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39643
	I0401 19:36:41.166161   70962 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:36:38.004666   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:40.005118   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:41.164866   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.167473   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.167806   70962 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:36:41.167833   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 19:36:41.167850   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:36:41.168056   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.168074   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.168145   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:36:41.168163   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.168194   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:36:41.168353   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:36:41.168429   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.168583   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:36:41.168723   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:36:41.169323   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.169374   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.170857   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.171269   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:36:41.171323   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.171412   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:36:41.171576   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:36:41.171723   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:36:41.171860   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:36:41.191280   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42133
	I0401 19:36:41.191576   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.192122   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.192152   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.192511   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.192673   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:36:41.194286   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:36:41.194528   70962 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 19:36:41.194546   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 19:36:41.194564   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:36:41.197639   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.198235   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:36:41.198259   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.198296   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:36:41.198491   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:36:41.198670   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:36:41.198857   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:36:41.308472   70962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:36:41.334121   70962 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-734648" to be "Ready" ...
	I0401 19:36:41.343898   70962 node_ready.go:49] node "default-k8s-diff-port-734648" has status "Ready":"True"
	I0401 19:36:41.343943   70962 node_ready.go:38] duration metric: took 9.780821ms for node "default-k8s-diff-port-734648" to be "Ready" ...
	I0401 19:36:41.343952   70962 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:36:41.352294   70962 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.362318   70962 pod_ready.go:92] pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:41.362345   70962 pod_ready.go:81] duration metric: took 10.020335ms for pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.362358   70962 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.367338   70962 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:41.367356   70962 pod_ready.go:81] duration metric: took 4.990987ms for pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.367364   70962 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.372379   70962 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:41.372401   70962 pod_ready.go:81] duration metric: took 5.030239ms for pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.372412   70962 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.377862   70962 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:41.377881   70962 pod_ready.go:81] duration metric: took 5.460968ms for pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.377891   70962 pod_ready.go:38] duration metric: took 33.929349ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:36:41.377915   70962 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:36:41.377965   70962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:36:41.396518   70962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:36:41.407024   70962 api_server.go:72] duration metric: took 299.545156ms to wait for apiserver process to appear ...
	I0401 19:36:41.407049   70962 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:36:41.407068   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:36:41.411429   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 200:
	ok
	I0401 19:36:41.412620   70962 api_server.go:141] control plane version: v1.29.3
	I0401 19:36:41.412640   70962 api_server.go:131] duration metric: took 5.58478ms to wait for apiserver health ...
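	(The healthz wait above amounts to polling the API server's health endpoint; the same check can be made by hand against the address and port logged for this profile, with -k skipping TLS verification purely for brevity:

	  curl -k https://192.168.61.145:8444/healthz
	  # a healthy control plane answers with the body: ok
	)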
	I0401 19:36:41.412646   70962 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:36:41.426474   70962 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 19:36:41.426500   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 19:36:41.447003   70962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 19:36:41.470135   70962 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 19:36:41.470153   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 19:36:41.526684   70962 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:36:41.526710   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 19:36:41.540871   70962 system_pods.go:59] 4 kube-system pods found
	I0401 19:36:41.540894   70962 system_pods.go:61] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:41.540900   70962 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:41.540905   70962 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:41.540908   70962 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:41.540914   70962 system_pods.go:74] duration metric: took 128.262683ms to wait for pod list to return data ...
	I0401 19:36:41.540920   70962 default_sa.go:34] waiting for default service account to be created ...
	I0401 19:36:41.625507   70962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:36:41.750232   70962 default_sa.go:45] found service account: "default"
	I0401 19:36:41.750261   70962 default_sa.go:55] duration metric: took 209.334562ms for default service account to be created ...
	I0401 19:36:41.750273   70962 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 19:36:41.968623   70962 system_pods.go:86] 7 kube-system pods found
	I0401 19:36:41.968651   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Pending
	I0401 19:36:41.968657   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Pending
	I0401 19:36:41.968663   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:41.968669   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:41.968675   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:41.968683   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:36:41.968690   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:41.968712   70962 retry.go:31] will retry after 288.42332ms: missing components: kube-dns, kube-proxy
	I0401 19:36:42.231814   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.231848   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.231904   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.231925   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.232160   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Closing plugin on server side
	I0401 19:36:42.232161   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.232179   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.232187   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.232191   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Closing plugin on server side
	I0401 19:36:42.232199   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.232223   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.232235   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.232244   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.232255   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.232431   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.232478   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.232578   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Closing plugin on server side
	I0401 19:36:42.232612   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.232629   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.251515   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.251538   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.251795   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.251809   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.267102   70962 system_pods.go:86] 8 kube-system pods found
	I0401 19:36:42.267135   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:42.267148   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:42.267163   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:42.267181   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:42.267187   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:42.267196   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:36:42.267204   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:42.267222   70962 system_pods.go:89] "storage-provisioner" [8509e661-1b53-4018-b6b0-b6a5e242768d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:36:42.267244   70962 retry.go:31] will retry after 336.906399ms: missing components: kube-dns, kube-proxy
	I0401 19:36:42.632180   70962 system_pods.go:86] 9 kube-system pods found
	I0401 19:36:42.632212   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:42.632223   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:42.632232   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:42.632240   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:42.632247   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:42.632257   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:36:42.632264   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:42.632275   70962 system_pods.go:89] "metrics-server-57f55c9bc5-fj5x5" [e25fa51c-d80e-4ddc-898f-3b9903746537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:42.632289   70962 system_pods.go:89] "storage-provisioner" [8509e661-1b53-4018-b6b0-b6a5e242768d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:36:42.632313   70962 retry.go:31] will retry after 406.571029ms: missing components: kube-dns, kube-proxy
	I0401 19:36:42.739308   70962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.113759645s)
	I0401 19:36:42.739364   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.739383   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.739822   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.739842   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.739859   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Closing plugin on server side
	I0401 19:36:42.739867   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.739890   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.740171   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.740186   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.740198   70962 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-734648"
	I0401 19:36:42.742233   70962 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0401 19:36:42.743265   70962 addons.go:505] duration metric: took 1.635721448s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
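	(The same set of addons can also be driven through the minikube CLI rather than the direct kubectl apply logged above; a sketch using this run's profile name:

	  out/minikube-linux-amd64 -p default-k8s-diff-port-734648 addons enable metrics-server
	  out/minikube-linux-amd64 -p default-k8s-diff-port-734648 addons list
	  # storage-provisioner and default-storageclass are normally enabled by default
	)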
	I0401 19:36:43.053149   70962 system_pods.go:86] 9 kube-system pods found
	I0401 19:36:43.053183   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:43.053195   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:43.053205   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:43.053215   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:43.053223   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:43.053235   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:36:43.053240   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:43.053249   70962 system_pods.go:89] "metrics-server-57f55c9bc5-fj5x5" [e25fa51c-d80e-4ddc-898f-3b9903746537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:43.053258   70962 system_pods.go:89] "storage-provisioner" [8509e661-1b53-4018-b6b0-b6a5e242768d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:36:43.053275   70962 retry.go:31] will retry after 524.250739ms: missing components: kube-dns, kube-proxy
	I0401 19:36:43.591419   70962 system_pods.go:86] 9 kube-system pods found
	I0401 19:36:43.591451   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Running
	I0401 19:36:43.591463   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:43.591471   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:43.591480   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:43.591487   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:43.591493   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Running
	I0401 19:36:43.591498   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:43.591508   70962 system_pods.go:89] "metrics-server-57f55c9bc5-fj5x5" [e25fa51c-d80e-4ddc-898f-3b9903746537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:43.591517   70962 system_pods.go:89] "storage-provisioner" [8509e661-1b53-4018-b6b0-b6a5e242768d] Running
	I0401 19:36:43.591529   70962 system_pods.go:126] duration metric: took 1.841248999s to wait for k8s-apps to be running ...
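	(The readiness poll above can be reproduced against the same cluster with plain kubectl; for example, using this profile's context name and the standard CoreDNS label selector:

	  kubectl --context default-k8s-diff-port-734648 -n kube-system get pods
	  kubectl --context default-k8s-diff-port-734648 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
	)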
	I0401 19:36:43.591561   70962 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 19:36:43.591613   70962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:36:43.611873   70962 system_svc.go:56] duration metric: took 20.296001ms WaitForService to wait for kubelet
	I0401 19:36:43.611907   70962 kubeadm.go:576] duration metric: took 2.504430824s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:36:43.611930   70962 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:36:43.617697   70962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:36:43.617720   70962 node_conditions.go:123] node cpu capacity is 2
	I0401 19:36:43.617732   70962 node_conditions.go:105] duration metric: took 5.796357ms to run NodePressure ...
	I0401 19:36:43.617745   70962 start.go:240] waiting for startup goroutines ...
	I0401 19:36:43.617754   70962 start.go:245] waiting for cluster config update ...
	I0401 19:36:43.617765   70962 start.go:254] writing updated cluster config ...
	I0401 19:36:43.618023   70962 ssh_runner.go:195] Run: rm -f paused
	I0401 19:36:43.666581   70962 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0401 19:36:43.668685   70962 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-734648" cluster and "default" namespace by default
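	(From here the new context is usable directly, in the same style as the other kubectl invocations in this report:

	  kubectl --context default-k8s-diff-port-734648 get nodes
	  kubectl --context default-k8s-diff-port-734648 get pods -A
	)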
	I0401 19:36:42.505149   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:45.003855   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:47.004247   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:49.504898   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:51.505403   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:54.005163   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:56.503395   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:58.503791   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:00.504001   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:02.504193   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:05.003540   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:07.003582   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:09.503975   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:12.005037   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:14.503460   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:16.504630   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:19.004307   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:21.004909   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:23.503286   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:25.503469   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:27.503520   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:30.004792   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:32.503693   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:35.005137   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:37.504848   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:39.504961   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:41.510644   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:44.004680   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:46.005118   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:51.561231   71168 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 19:37:51.561356   71168 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0401 19:37:51.563350   71168 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0401 19:37:51.563417   71168 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:37:51.563497   71168 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:37:51.563596   71168 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:37:51.563711   71168 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:37:51.563797   71168 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:37:51.565710   71168 out.go:204]   - Generating certificates and keys ...
	I0401 19:37:51.565809   71168 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:37:51.565908   71168 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:37:51.566051   71168 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:37:51.566136   71168 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:37:51.566230   71168 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:37:51.566325   71168 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:37:51.566402   71168 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:37:51.566464   71168 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:37:51.566580   71168 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:37:51.566688   71168 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:37:51.566727   71168 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:37:51.566774   71168 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:37:51.566822   71168 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:37:51.566917   71168 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:37:51.567001   71168 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:37:51.567068   71168 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:37:51.567210   71168 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:37:51.567314   71168 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:37:51.567371   71168 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:37:51.567473   71168 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:37:48.504708   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:51.005355   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:51.569285   71168 out.go:204]   - Booting up control plane ...
	I0401 19:37:51.569394   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:37:51.569498   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:37:51.569568   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:37:51.569661   71168 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:37:51.569802   71168 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:37:51.569866   71168 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0401 19:37:51.569957   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.570195   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.570287   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.570514   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.570589   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.570769   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.570859   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.571033   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.571134   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.571342   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.571351   71168 kubeadm.go:309] 
	I0401 19:37:51.571394   71168 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0401 19:37:51.571453   71168 kubeadm.go:309] 		timed out waiting for the condition
	I0401 19:37:51.571475   71168 kubeadm.go:309] 
	I0401 19:37:51.571521   71168 kubeadm.go:309] 	This error is likely caused by:
	I0401 19:37:51.571558   71168 kubeadm.go:309] 		- The kubelet is not running
	I0401 19:37:51.571676   71168 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 19:37:51.571687   71168 kubeadm.go:309] 
	I0401 19:37:51.571824   71168 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 19:37:51.571880   71168 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0401 19:37:51.571921   71168 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0401 19:37:51.571931   71168 kubeadm.go:309] 
	I0401 19:37:51.572077   71168 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 19:37:51.572198   71168 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 19:37:51.572209   71168 kubeadm.go:309] 
	I0401 19:37:51.572359   71168 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 19:37:51.572477   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 19:37:51.572576   71168 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0401 19:37:51.572676   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 19:37:51.572731   71168 kubeadm.go:309] 
	W0401 19:37:51.572793   71168 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
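	The wait-control-plane failure above ends with kubeadm's own triage advice: check the kubelet, then inspect the CRI-managed control-plane containers. A minimal sketch of that triage, run on the affected node (for example via `minikube ssh` with the relevant profile name, used here only as a placeholder) and using only the commands already quoted in this log:
	
		sudo systemctl status kubelet                  # is the kubelet active at all?
		sudo journalctl -xeu kubelet | tail -n 100     # recent kubelet errors (cgroups, config, certs)
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	
	Here CONTAINERID stands for whichever control-plane container crictl reports as exited or crash-looping; it is not a value taken from this run.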
	
	I0401 19:37:51.572851   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:37:52.428554   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:37:52.445151   71168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:37:52.456989   71168 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:37:52.457010   71168 kubeadm.go:156] found existing configuration files:
	
	I0401 19:37:52.457053   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:37:52.468305   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:37:52.468375   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:37:52.479305   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:37:52.489703   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:37:52.489753   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:37:52.501023   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:37:52.512418   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:37:52.512480   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:37:52.523850   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:37:52.534358   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:37:52.534425   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
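	The grep/rm pairs above are minikube's stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint is removed so the next `kubeadm init` can regenerate it. A condensed shell sketch of the same check, assuming only the endpoint and file names shown in the log (not minikube's actual Go implementation):
	
		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
		    || sudo rm -f "/etc/kubernetes/$f"   # stale or missing: delete and let kubeadm rewrite it
		done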
	I0401 19:37:52.546135   71168 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:37:52.779427   71168 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:37:52.997253   70284 pod_ready.go:81] duration metric: took 4m0.000092266s for pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace to be "Ready" ...
	E0401 19:37:52.997287   70284 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace to be "Ready" (will not retry!)
	I0401 19:37:52.997309   70284 pod_ready.go:38] duration metric: took 4m43.911595731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:37:52.997333   70284 kubeadm.go:591] duration metric: took 5m31.840082505s to restartPrimaryControlPlane
	W0401 19:37:52.997393   70284 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:37:52.997421   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:38:25.458760   70284 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.46129187s)
	I0401 19:38:25.458845   70284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:38:25.476633   70284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:38:25.487615   70284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:38:25.498590   70284 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:38:25.498616   70284 kubeadm.go:156] found existing configuration files:
	
	I0401 19:38:25.498701   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:38:25.509063   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:38:25.509128   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:38:25.519806   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:38:25.530433   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:38:25.530488   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:38:25.540979   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:38:25.550786   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:38:25.550847   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:38:25.561979   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:38:25.571832   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:38:25.571898   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:38:25.582501   70284 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:38:25.646956   70284 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.0
	I0401 19:38:25.647046   70284 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:38:25.825328   70284 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:38:25.825459   70284 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:38:25.825574   70284 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:38:26.066201   70284 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:38:26.069071   70284 out.go:204]   - Generating certificates and keys ...
	I0401 19:38:26.069170   70284 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:38:26.069260   70284 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:38:26.069402   70284 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:38:26.069493   70284 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:38:26.069588   70284 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:38:26.069703   70284 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:38:26.069765   70284 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:38:26.069822   70284 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:38:26.069986   70284 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:38:26.070644   70284 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:38:26.071149   70284 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:38:26.071308   70284 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:38:26.204651   70284 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:38:26.368926   70284 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 19:38:26.586004   70284 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:38:26.710851   70284 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:38:26.858015   70284 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:38:26.858741   70284 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:38:26.863879   70284 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:38:26.865794   70284 out.go:204]   - Booting up control plane ...
	I0401 19:38:26.865898   70284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:38:26.865984   70284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:38:26.866081   70284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:38:26.886171   70284 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:38:26.887118   70284 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:38:26.887177   70284 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:38:27.021053   70284 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 19:38:27.021142   70284 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0401 19:38:28.023462   70284 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002303634s
	I0401 19:38:28.023549   70284 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 19:38:34.026967   70284 kubeadm.go:309] [api-check] The API server is healthy after 6.003391014s
	I0401 19:38:34.044095   70284 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 19:38:34.061716   70284 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 19:38:34.092708   70284 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 19:38:34.093037   70284 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-472858 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 19:38:34.111758   70284 kubeadm.go:309] [bootstrap-token] Using token: 45cmca.rj16278sw3ueq3us
	I0401 19:38:34.113211   70284 out.go:204]   - Configuring RBAC rules ...
	I0401 19:38:34.113333   70284 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 19:38:34.122292   70284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 19:38:34.133114   70284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 19:38:34.138441   70284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 19:38:34.143964   70284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 19:38:34.148675   70284 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 19:38:34.438167   70284 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 19:38:34.885250   70284 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 19:38:35.439990   70284 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 19:38:35.441439   70284 kubeadm.go:309] 
	I0401 19:38:35.441532   70284 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 19:38:35.441545   70284 kubeadm.go:309] 
	I0401 19:38:35.441659   70284 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 19:38:35.441690   70284 kubeadm.go:309] 
	I0401 19:38:35.441752   70284 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 19:38:35.441845   70284 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 19:38:35.441930   70284 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 19:38:35.441938   70284 kubeadm.go:309] 
	I0401 19:38:35.442014   70284 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 19:38:35.442028   70284 kubeadm.go:309] 
	I0401 19:38:35.442067   70284 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 19:38:35.442073   70284 kubeadm.go:309] 
	I0401 19:38:35.442120   70284 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 19:38:35.442186   70284 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 19:38:35.442295   70284 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 19:38:35.442307   70284 kubeadm.go:309] 
	I0401 19:38:35.442426   70284 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 19:38:35.442552   70284 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 19:38:35.442565   70284 kubeadm.go:309] 
	I0401 19:38:35.442643   70284 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 45cmca.rj16278sw3ueq3us \
	I0401 19:38:35.442766   70284 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 \
	I0401 19:38:35.442803   70284 kubeadm.go:309] 	--control-plane 
	I0401 19:38:35.442813   70284 kubeadm.go:309] 
	I0401 19:38:35.442922   70284 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 19:38:35.442936   70284 kubeadm.go:309] 
	I0401 19:38:35.443008   70284 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 45cmca.rj16278sw3ueq3us \
	I0401 19:38:35.443097   70284 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 
	I0401 19:38:35.443436   70284 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:38:35.443530   70284 cni.go:84] Creating CNI manager for ""
	I0401 19:38:35.443546   70284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:38:35.445089   70284 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:38:35.446328   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:38:35.459788   70284 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0401 19:38:35.486202   70284 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 19:38:35.486300   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:35.486308   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-472858 minikube.k8s.io/updated_at=2024_04_01T19_38_35_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=no-preload-472858 minikube.k8s.io/primary=true
	I0401 19:38:35.700677   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:35.731567   70284 ops.go:34] apiserver oom_adj: -16
	I0401 19:38:36.200955   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:36.701003   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:37.201632   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:37.700719   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:38.201316   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:38.701334   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:39.201609   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:39.701034   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:40.201771   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:40.700786   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:41.201750   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:41.701709   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:42.201682   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:42.700838   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:43.201123   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:43.701587   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:44.200860   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:44.700795   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:45.200850   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:45.701273   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:46.201701   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:46.701450   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:47.201496   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:47.701351   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:47.800239   70284 kubeadm.go:1107] duration metric: took 12.313994383s to wait for elevateKubeSystemPrivileges
	W0401 19:38:47.800287   70284 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 19:38:47.800298   70284 kubeadm.go:393] duration metric: took 6m26.705086714s to StartCluster
	I0401 19:38:47.800320   70284 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:38:47.800410   70284 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:38:47.802818   70284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:38:47.803132   70284 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:38:47.805445   70284 out.go:177] * Verifying Kubernetes components...
	I0401 19:38:47.803273   70284 config.go:182] Loaded profile config "no-preload-472858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0401 19:38:47.803252   70284 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 19:38:47.806734   70284 addons.go:69] Setting storage-provisioner=true in profile "no-preload-472858"
	I0401 19:38:47.806761   70284 addons.go:69] Setting default-storageclass=true in profile "no-preload-472858"
	I0401 19:38:47.806774   70284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:38:47.806777   70284 addons.go:69] Setting metrics-server=true in profile "no-preload-472858"
	I0401 19:38:47.806802   70284 addons.go:234] Setting addon metrics-server=true in "no-preload-472858"
	W0401 19:38:47.806815   70284 addons.go:243] addon metrics-server should already be in state true
	I0401 19:38:47.806850   70284 host.go:66] Checking if "no-preload-472858" exists ...
	I0401 19:38:47.806802   70284 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-472858"
	I0401 19:38:47.806768   70284 addons.go:234] Setting addon storage-provisioner=true in "no-preload-472858"
	W0401 19:38:47.807229   70284 addons.go:243] addon storage-provisioner should already be in state true
	I0401 19:38:47.807257   70284 host.go:66] Checking if "no-preload-472858" exists ...
	I0401 19:38:47.807289   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.807332   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.807340   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.807366   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.807620   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.807690   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.823665   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38305
	I0401 19:38:47.823684   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35487
	I0401 19:38:47.824174   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.824205   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.824709   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.824732   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.824838   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.824867   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.825094   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.825276   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.825700   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.825746   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.825844   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.825866   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.826415   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38845
	I0401 19:38:47.826845   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.827305   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.827330   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.827800   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.828004   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:38:47.831735   70284 addons.go:234] Setting addon default-storageclass=true in "no-preload-472858"
	W0401 19:38:47.831760   70284 addons.go:243] addon default-storageclass should already be in state true
	I0401 19:38:47.831791   70284 host.go:66] Checking if "no-preload-472858" exists ...
	I0401 19:38:47.832170   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.832218   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.842050   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42037
	I0401 19:38:47.842479   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.842963   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.842983   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.843354   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.843513   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:38:47.845360   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:38:47.845430   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33357
	I0401 19:38:47.847622   70284 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:38:47.845959   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.847568   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38785
	I0401 19:38:47.849255   70284 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:38:47.849283   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 19:38:47.849303   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:38:47.849356   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.849524   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.849536   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.850173   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.850228   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.850238   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.850362   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:38:47.851206   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.851773   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.851803   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.852404   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:38:47.854167   70284 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 19:38:47.853141   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.853926   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:38:47.855729   70284 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 19:38:47.855746   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 19:38:47.855763   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:38:47.855728   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:38:47.855809   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.855854   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:38:47.856000   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:38:47.856160   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:38:47.858726   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.859782   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:38:47.859826   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.859948   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:38:47.860138   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:38:47.860310   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:38:47.860593   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:38:47.870182   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34517
	I0401 19:38:47.870616   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.871182   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.871203   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.871561   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.871947   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:38:47.873606   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:38:47.873931   70284 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 19:38:47.873949   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 19:38:47.873967   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:38:47.876826   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.877259   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:38:47.877286   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.877389   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:38:47.877672   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:38:47.877816   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:38:47.877974   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:38:48.053731   70284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:38:48.081160   70284 node_ready.go:35] waiting up to 6m0s for node "no-preload-472858" to be "Ready" ...
	I0401 19:38:48.107976   70284 node_ready.go:49] node "no-preload-472858" has status "Ready":"True"
	I0401 19:38:48.107998   70284 node_ready.go:38] duration metric: took 26.793115ms for node "no-preload-472858" to be "Ready" ...
	I0401 19:38:48.108009   70284 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:38:48.115968   70284 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.158349   70284 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 19:38:48.158383   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 19:38:48.166047   70284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 19:38:48.181902   70284 pod_ready.go:92] pod "etcd-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:38:48.181922   70284 pod_ready.go:81] duration metric: took 65.920299ms for pod "etcd-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.181935   70284 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.199372   70284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:38:48.232110   70284 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 19:38:48.232140   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 19:38:48.251891   70284 pod_ready.go:92] pod "kube-apiserver-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:38:48.251914   70284 pod_ready.go:81] duration metric: took 69.970077ms for pod "kube-apiserver-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.251929   70284 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.309605   70284 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:38:48.309627   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 19:38:48.325907   70284 pod_ready.go:92] pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:38:48.325928   70284 pod_ready.go:81] duration metric: took 73.991711ms for pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.325938   70284 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.373418   70284 pod_ready.go:92] pod "kube-scheduler-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:38:48.373448   70284 pod_ready.go:81] duration metric: took 47.503272ms for pod "kube-scheduler-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.373456   70284 pod_ready.go:38] duration metric: took 265.436317ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:38:48.373479   70284 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:38:48.373543   70284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:38:48.396444   70284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:38:48.564838   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.564860   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.565180   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:48.565197   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.565227   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.565247   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.565258   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.565489   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.565506   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.579332   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.579355   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.579599   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:48.579637   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.579645   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.884887   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.884920   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.884938   70284 api_server.go:72] duration metric: took 1.08176251s to wait for apiserver process to appear ...
	I0401 19:38:48.884958   70284 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:38:48.885018   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:38:48.885232   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.885252   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.885260   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.885269   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.885236   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:48.885519   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.887182   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.885555   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:48.895737   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 200:
	ok
	I0401 19:38:48.899521   70284 api_server.go:141] control plane version: v1.30.0-rc.0
	I0401 19:38:48.899539   70284 api_server.go:131] duration metric: took 14.574989ms to wait for apiserver health ...
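The healthz wait logged above amounts to an HTTPS GET against the apiserver endpoint on the VM. A minimal sketch of the equivalent manual probe, assuming the same endpoint shown in the log (192.168.72.119:8443) and skipping TLS verification because the cluster CA is not in the local trust store (the real minikube-generated CA would normally live under ~/.minikube):

    # Hypothetical manual check of the endpoint api_server.go polls
    curl -k https://192.168.72.119:8443/healthz
    # Expected output on success: ok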
	I0401 19:38:48.899547   70284 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:38:48.914064   70284 system_pods.go:59] 8 kube-system pods found
	I0401 19:38:48.914090   70284 system_pods.go:61] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:48.914106   70284 system_pods.go:61] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:48.914112   70284 system_pods.go:61] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:48.914117   70284 system_pods.go:61] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:48.914122   70284 system_pods.go:61] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:48.914126   70284 system_pods.go:61] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:48.914134   70284 system_pods.go:61] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:48.914138   70284 system_pods.go:61] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending
	I0401 19:38:48.914146   70284 system_pods.go:74] duration metric: took 14.594359ms to wait for pod list to return data ...
	I0401 19:38:48.914156   70284 default_sa.go:34] waiting for default service account to be created ...
	I0401 19:38:48.924790   70284 default_sa.go:45] found service account: "default"
	I0401 19:38:48.924814   70284 default_sa.go:55] duration metric: took 10.649887ms for default service account to be created ...
	I0401 19:38:48.924825   70284 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 19:38:48.930993   70284 system_pods.go:86] 8 kube-system pods found
	I0401 19:38:48.931020   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:48.931037   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:48.931047   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:48.931056   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:48.931066   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:48.931074   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:48.931089   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:48.931098   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:48.931117   70284 retry.go:31] will retry after 297.45527ms: missing components: kube-dns, kube-proxy
	I0401 19:38:49.123999   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:49.124019   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:49.124344   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:49.124394   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:49.124406   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:49.124414   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:49.124356   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:49.124627   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:49.124661   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:49.124677   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:49.124690   70284 addons.go:470] Verifying addon metrics-server=true in "no-preload-472858"
	I0401 19:38:49.127415   70284 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0401 19:38:49.129047   70284 addons.go:505] duration metric: took 1.325796036s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0401 19:38:49.236094   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:49.236127   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.236136   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.236145   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:49.236152   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:49.236159   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:49.236168   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:49.236175   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:49.236185   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:49.236198   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:49.236218   70284 retry.go:31] will retry after 287.299528ms: missing components: kube-dns, kube-proxy
	I0401 19:38:49.530606   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:49.530643   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.530654   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.530663   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:49.530670   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:49.530678   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:49.530687   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:49.530697   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:49.530711   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:49.530721   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:49.530744   70284 retry.go:31] will retry after 435.286919ms: missing components: kube-dns, kube-proxy
	I0401 19:38:49.974049   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:49.974090   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.974103   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.974113   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:49.974121   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:49.974128   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:49.974142   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:49.974153   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:49.974168   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:49.974181   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:49.974203   70284 retry.go:31] will retry after 577.959209ms: missing components: kube-dns, kube-proxy
	I0401 19:38:50.558750   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:50.558780   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:50.558787   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:50.558795   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:50.558805   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:50.558812   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:50.558820   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:50.558833   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:50.558840   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:50.558846   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:50.558863   70284 retry.go:31] will retry after 723.380101ms: missing components: kube-dns, kube-proxy
	I0401 19:38:51.291450   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:51.291487   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:51.291498   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Running
	I0401 19:38:51.291508   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:51.291514   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:51.291521   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:51.291527   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Running
	I0401 19:38:51.291532   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:51.291543   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:51.291551   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Running
	I0401 19:38:51.291559   70284 system_pods.go:126] duration metric: took 2.366728733s to wait for k8s-apps to be running ...
	I0401 19:38:51.291576   70284 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 19:38:51.291622   70284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:38:51.310224   70284 system_svc.go:56] duration metric: took 18.63923ms WaitForService to wait for kubelet
	I0401 19:38:51.310250   70284 kubeadm.go:576] duration metric: took 3.50708191s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:38:51.310269   70284 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:38:51.312899   70284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:38:51.312919   70284 node_conditions.go:123] node cpu capacity is 2
	I0401 19:38:51.312930   70284 node_conditions.go:105] duration metric: took 2.654739ms to run NodePressure ...
	I0401 19:38:51.312945   70284 start.go:240] waiting for startup goroutines ...
	I0401 19:38:51.312958   70284 start.go:245] waiting for cluster config update ...
	I0401 19:38:51.312985   70284 start.go:254] writing updated cluster config ...
	I0401 19:38:51.313269   70284 ssh_runner.go:195] Run: rm -f paused
	I0401 19:38:51.365041   70284 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.0 (minor skew: 1)
	I0401 19:38:51.367173   70284 out.go:177] * Done! kubectl is now configured to use "no-preload-472858" cluster and "default" namespace by default
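At this point the no-preload-472858 profile has started and the default-storageclass, storage-provisioner and metrics-server addons are enabled. A minimal sketch of how that end state could be double-checked by hand, assuming the kubeconfig context written above ("no-preload-472858") is still the active one; the resource names are taken from the log and the commands are illustrative:

    # List the addons enabled for this profile
    minikube addons list -p no-preload-472858
    # Confirm the kube-system workloads the log waited on are Running
    kubectl --context no-preload-472858 get pods -n kube-system
    kubectl --context no-preload-472858 get deployment metrics-server -n kube-system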
	I0401 19:39:48.856665   71168 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 19:39:48.856779   71168 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0401 19:39:48.858840   71168 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0401 19:39:48.858896   71168 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:39:48.858987   71168 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:39:48.859122   71168 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:39:48.859222   71168 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:39:48.859314   71168 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:39:48.861104   71168 out.go:204]   - Generating certificates and keys ...
	I0401 19:39:48.861202   71168 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:39:48.861277   71168 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:39:48.861381   71168 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:39:48.861492   71168 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:39:48.861596   71168 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:39:48.861699   71168 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:39:48.861791   71168 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:39:48.861897   71168 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:39:48.862009   71168 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:39:48.862118   71168 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:39:48.862176   71168 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:39:48.862260   71168 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:39:48.862338   71168 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:39:48.862420   71168 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:39:48.862480   71168 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:39:48.862527   71168 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:39:48.862618   71168 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:39:48.862693   71168 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:39:48.862734   71168 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:39:48.862804   71168 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:39:48.864199   71168 out.go:204]   - Booting up control plane ...
	I0401 19:39:48.864291   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:39:48.864359   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:39:48.864420   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:39:48.864504   71168 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:39:48.864712   71168 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:39:48.864788   71168 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0401 19:39:48.864871   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865069   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.865153   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865344   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.865453   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865674   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.865755   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865989   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.866095   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.866269   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.866285   71168 kubeadm.go:309] 
	I0401 19:39:48.866343   71168 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0401 19:39:48.866402   71168 kubeadm.go:309] 		timed out waiting for the condition
	I0401 19:39:48.866414   71168 kubeadm.go:309] 
	I0401 19:39:48.866458   71168 kubeadm.go:309] 	This error is likely caused by:
	I0401 19:39:48.866506   71168 kubeadm.go:309] 		- The kubelet is not running
	I0401 19:39:48.866651   71168 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 19:39:48.866665   71168 kubeadm.go:309] 
	I0401 19:39:48.866816   71168 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 19:39:48.866865   71168 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0401 19:39:48.866895   71168 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0401 19:39:48.866901   71168 kubeadm.go:309] 
	I0401 19:39:48.866989   71168 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 19:39:48.867061   71168 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 19:39:48.867070   71168 kubeadm.go:309] 
	I0401 19:39:48.867194   71168 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 19:39:48.867327   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 19:39:48.867417   71168 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0401 19:39:48.867526   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 19:39:48.867555   71168 kubeadm.go:309] 
	I0401 19:39:48.867633   71168 kubeadm.go:393] duration metric: took 7m58.404831893s to StartCluster
	I0401 19:39:48.867702   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:39:48.867764   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:39:48.922329   71168 cri.go:89] found id: ""
	I0401 19:39:48.922359   71168 logs.go:276] 0 containers: []
	W0401 19:39:48.922369   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:39:48.922377   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:39:48.922435   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:39:48.966212   71168 cri.go:89] found id: ""
	I0401 19:39:48.966235   71168 logs.go:276] 0 containers: []
	W0401 19:39:48.966243   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:39:48.966248   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:39:48.966309   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:39:49.015141   71168 cri.go:89] found id: ""
	I0401 19:39:49.015171   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.015182   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:39:49.015189   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:39:49.015249   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:39:49.053042   71168 cri.go:89] found id: ""
	I0401 19:39:49.053067   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.053077   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:39:49.053085   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:39:49.053144   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:39:49.093880   71168 cri.go:89] found id: ""
	I0401 19:39:49.093906   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.093914   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:39:49.093923   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:39:49.093976   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:39:49.129730   71168 cri.go:89] found id: ""
	I0401 19:39:49.129752   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.129760   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:39:49.129766   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:39:49.129818   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:39:49.171075   71168 cri.go:89] found id: ""
	I0401 19:39:49.171107   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.171118   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:39:49.171125   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:39:49.171204   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:39:49.208279   71168 cri.go:89] found id: ""
	I0401 19:39:49.208308   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.208319   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:39:49.208330   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:39:49.208345   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:39:49.294128   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:39:49.294148   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:39:49.294162   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:39:49.400930   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:39:49.400963   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:39:49.443111   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:39:49.443140   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:39:49.501382   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:39:49.501417   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0401 19:39:49.516418   71168 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0401 19:39:49.516461   71168 out.go:239] * 
	W0401 19:39:49.516521   71168 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 19:39:49.516591   71168 out.go:239] * 
	W0401 19:39:49.517377   71168 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 19:39:49.520389   71168 out.go:177] 
	W0401 19:39:49.521593   71168 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 19:39:49.521639   71168 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0401 19:39:49.521686   71168 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0401 19:39:49.523181   71168 out.go:177] 
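The failing v1.20.0 profile exits with K8S_KUBELET_NOT_RUNNING, and the log suggests inspecting the kubelet journal and retrying with a cgroup-driver override. A minimal sketch of that suggested follow-up, assuming the same profile is retried; the profile name is written as <profile> because it is not shown in this excerpt:

    # Inspect why the kubelet never became healthy on the node
    minikube ssh -p <profile> -- sudo journalctl -xeu kubelet
    # Retry with the kubelet cgroup-driver override suggested in the log
    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
    # If it still fails, capture logs for a bug report as the log advises
    minikube logs -p <profile> --file=logs.txt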
	
	
	==> CRI-O <==
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.544283094Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712000873544253822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=548379a1-78c1-46b9-8f78-2abaae3c5fab name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.545060178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0b9d3f5-07b7-4cd9-9c7f-8f3b8315d1fe name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.545183191Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0b9d3f5-07b7-4cd9-9c7f-8f3b8315d1fe name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.545537186Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e2c4351e647df49ae68dd6f2fa48f97da0f1ed020146f07c9bbdb71c7322f49,PodSandboxId:4701e6ea238d3a457ae5d4bc391b2accae58745f1cf91ea34bcc52cd75572c95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000330615521794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8285w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c450ac4a-974e-4322-9857-fb65792a142b,},Annotations:map[string]string{io.kubernetes.container.hash: dbadeb1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2af3697d6e04bee6abaee3f98618549c39fca9d37eddd9100e9771f5eba7b1,PodSandboxId:3f5d21b0de00d1968a6ebc70f5fb997ad6e4dc10ac8a3026c5fd5168a5cc3c63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000330510043914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wmbsp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 7a73f081-42f4-4854-8785-25e54eb0a391,},Annotations:map[string]string{io.kubernetes.container.hash: 1d4d9764,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ca3d05f0a38f853d588e5798478f86bb56b88053de18a36ea6b1e870765da7,PodSandboxId:cda23238357a2063801c1abff5e6ad8f29637f887f36a5e983eee1fc766fa94b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNIN
G,CreatedAt:1712000330235492797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5dmtl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c243321b-b01a-4fd5-895a-888d18ee8527,},Annotations:map[string]string{io.kubernetes.container.hash: 1e020a5c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9502cf2be2504fa95b6c845c5edbe06eec2770fd8f386b59ff0912c421b5487,PodSandboxId:10ec86b24247f820acd7ac516c02d1aa6ce20c41db4c2edbd2a1132ef78f6beb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171200033028
7549498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 844e010a-3bee-4fd1-942f-10fa50306617,},Annotations:map[string]string{io.kubernetes.container.hash: 270324,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d8c76c0d24fb6089861d6209885e1e72f2160d2a54aa2ae20ee28159bf7d04f,PodSandboxId:7e6e848a4b8f86f422c68afd32a16ed2602dcfcff914090100461fbebee7046f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712000308571586900,Labels:map[
string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b789cd5a965a93fdde5e5001723f860,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72b29fac8ad2c0ec17d315f60a2c02d84311bda4e914417b34f76337547f7e08,PodSandboxId:3192a1acf8fe3aa65d5c638eb83b366935becfb9224ae3954541ddae7e0c414d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712000308553339506,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2756d053566b209913a2136d1c6d31a2,},Annotations:map[string]string{io.kubernetes.container.hash: 99525366,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a23edb2b9de3784d6936a19fdaf8e118994492e2fced9fefada45236cb9557e,PodSandboxId:e74d490fbbd3448b2889e49065c366b4bad295c4c2e353146c37b612926968a1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712000308512361713,Labels:map[string]string{io.kubernetes.container.name: kub
e-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f722d8aca3d6408d9cd66a3365e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf976a2ea41c7979ac65b40414e90e16efe03daee69ec3f9ce96f1244b6438c,PodSandboxId:f1639384d1e8344ca240afa1c5d14eace564211fc2c6c7589db56929dc22cb7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712000308438602838,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0e863cc75ae379be03fd049b1c5a0e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d785418,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b0b9d3f5-07b7-4cd9-9c7f-8f3b8315d1fe name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.588452259Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b32b67c-eacc-4a85-91aa-11d66f19c102 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.588557958Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b32b67c-eacc-4a85-91aa-11d66f19c102 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.596406670Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=97e60a74-ae35-49e4-9531-b2521fe9aca3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.597358731Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712000873597329065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97e60a74-ae35-49e4-9531-b2521fe9aca3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.598304456Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6b03c60-5610-444c-a595-b47c9cb23d80 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.598358863Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6b03c60-5610-444c-a595-b47c9cb23d80 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.598562652Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e2c4351e647df49ae68dd6f2fa48f97da0f1ed020146f07c9bbdb71c7322f49,PodSandboxId:4701e6ea238d3a457ae5d4bc391b2accae58745f1cf91ea34bcc52cd75572c95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000330615521794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8285w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c450ac4a-974e-4322-9857-fb65792a142b,},Annotations:map[string]string{io.kubernetes.container.hash: dbadeb1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2af3697d6e04bee6abaee3f98618549c39fca9d37eddd9100e9771f5eba7b1,PodSandboxId:3f5d21b0de00d1968a6ebc70f5fb997ad6e4dc10ac8a3026c5fd5168a5cc3c63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000330510043914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wmbsp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 7a73f081-42f4-4854-8785-25e54eb0a391,},Annotations:map[string]string{io.kubernetes.container.hash: 1d4d9764,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ca3d05f0a38f853d588e5798478f86bb56b88053de18a36ea6b1e870765da7,PodSandboxId:cda23238357a2063801c1abff5e6ad8f29637f887f36a5e983eee1fc766fa94b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNIN
G,CreatedAt:1712000330235492797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5dmtl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c243321b-b01a-4fd5-895a-888d18ee8527,},Annotations:map[string]string{io.kubernetes.container.hash: 1e020a5c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9502cf2be2504fa95b6c845c5edbe06eec2770fd8f386b59ff0912c421b5487,PodSandboxId:10ec86b24247f820acd7ac516c02d1aa6ce20c41db4c2edbd2a1132ef78f6beb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171200033028
7549498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 844e010a-3bee-4fd1-942f-10fa50306617,},Annotations:map[string]string{io.kubernetes.container.hash: 270324,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d8c76c0d24fb6089861d6209885e1e72f2160d2a54aa2ae20ee28159bf7d04f,PodSandboxId:7e6e848a4b8f86f422c68afd32a16ed2602dcfcff914090100461fbebee7046f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712000308571586900,Labels:map[
string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b789cd5a965a93fdde5e5001723f860,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72b29fac8ad2c0ec17d315f60a2c02d84311bda4e914417b34f76337547f7e08,PodSandboxId:3192a1acf8fe3aa65d5c638eb83b366935becfb9224ae3954541ddae7e0c414d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712000308553339506,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2756d053566b209913a2136d1c6d31a2,},Annotations:map[string]string{io.kubernetes.container.hash: 99525366,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a23edb2b9de3784d6936a19fdaf8e118994492e2fced9fefada45236cb9557e,PodSandboxId:e74d490fbbd3448b2889e49065c366b4bad295c4c2e353146c37b612926968a1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712000308512361713,Labels:map[string]string{io.kubernetes.container.name: kub
e-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f722d8aca3d6408d9cd66a3365e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf976a2ea41c7979ac65b40414e90e16efe03daee69ec3f9ce96f1244b6438c,PodSandboxId:f1639384d1e8344ca240afa1c5d14eace564211fc2c6c7589db56929dc22cb7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712000308438602838,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0e863cc75ae379be03fd049b1c5a0e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d785418,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6b03c60-5610-444c-a595-b47c9cb23d80 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.641043708Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ce6f861-84bf-4e94-8b96-1af0cb282ccb name=/runtime.v1.RuntimeService/Version
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.641246206Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ce6f861-84bf-4e94-8b96-1af0cb282ccb name=/runtime.v1.RuntimeService/Version
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.645914434Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=83114b1b-ef64-48ce-963d-46c6565dfba6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.646391555Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712000873646361226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=83114b1b-ef64-48ce-963d-46c6565dfba6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.649783058Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92e9f4ca-98fc-4bf4-8093-b36486fbf26b name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.650209777Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92e9f4ca-98fc-4bf4-8093-b36486fbf26b name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.651096721Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e2c4351e647df49ae68dd6f2fa48f97da0f1ed020146f07c9bbdb71c7322f49,PodSandboxId:4701e6ea238d3a457ae5d4bc391b2accae58745f1cf91ea34bcc52cd75572c95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000330615521794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8285w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c450ac4a-974e-4322-9857-fb65792a142b,},Annotations:map[string]string{io.kubernetes.container.hash: dbadeb1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2af3697d6e04bee6abaee3f98618549c39fca9d37eddd9100e9771f5eba7b1,PodSandboxId:3f5d21b0de00d1968a6ebc70f5fb997ad6e4dc10ac8a3026c5fd5168a5cc3c63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000330510043914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wmbsp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 7a73f081-42f4-4854-8785-25e54eb0a391,},Annotations:map[string]string{io.kubernetes.container.hash: 1d4d9764,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ca3d05f0a38f853d588e5798478f86bb56b88053de18a36ea6b1e870765da7,PodSandboxId:cda23238357a2063801c1abff5e6ad8f29637f887f36a5e983eee1fc766fa94b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNIN
G,CreatedAt:1712000330235492797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5dmtl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c243321b-b01a-4fd5-895a-888d18ee8527,},Annotations:map[string]string{io.kubernetes.container.hash: 1e020a5c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9502cf2be2504fa95b6c845c5edbe06eec2770fd8f386b59ff0912c421b5487,PodSandboxId:10ec86b24247f820acd7ac516c02d1aa6ce20c41db4c2edbd2a1132ef78f6beb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171200033028
7549498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 844e010a-3bee-4fd1-942f-10fa50306617,},Annotations:map[string]string{io.kubernetes.container.hash: 270324,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d8c76c0d24fb6089861d6209885e1e72f2160d2a54aa2ae20ee28159bf7d04f,PodSandboxId:7e6e848a4b8f86f422c68afd32a16ed2602dcfcff914090100461fbebee7046f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712000308571586900,Labels:map[
string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b789cd5a965a93fdde5e5001723f860,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72b29fac8ad2c0ec17d315f60a2c02d84311bda4e914417b34f76337547f7e08,PodSandboxId:3192a1acf8fe3aa65d5c638eb83b366935becfb9224ae3954541ddae7e0c414d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712000308553339506,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2756d053566b209913a2136d1c6d31a2,},Annotations:map[string]string{io.kubernetes.container.hash: 99525366,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a23edb2b9de3784d6936a19fdaf8e118994492e2fced9fefada45236cb9557e,PodSandboxId:e74d490fbbd3448b2889e49065c366b4bad295c4c2e353146c37b612926968a1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712000308512361713,Labels:map[string]string{io.kubernetes.container.name: kub
e-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f722d8aca3d6408d9cd66a3365e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf976a2ea41c7979ac65b40414e90e16efe03daee69ec3f9ce96f1244b6438c,PodSandboxId:f1639384d1e8344ca240afa1c5d14eace564211fc2c6c7589db56929dc22cb7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712000308438602838,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0e863cc75ae379be03fd049b1c5a0e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d785418,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92e9f4ca-98fc-4bf4-8093-b36486fbf26b name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.695827799Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f286ca7b-4376-4558-b71a-7bdeadc73fab name=/runtime.v1.RuntimeService/Version
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.695929720Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f286ca7b-4376-4558-b71a-7bdeadc73fab name=/runtime.v1.RuntimeService/Version
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.697492562Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0d0685a1-adf7-4839-a94c-c38cae34363c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.697875528Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712000873697847781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d0685a1-adf7-4839-a94c-c38cae34363c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.698671687Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1afa44f2-fc1e-4b06-82c5-9fc0bf5b1841 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.698760021Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1afa44f2-fc1e-4b06-82c5-9fc0bf5b1841 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:47:53 no-preload-472858 crio[702]: time="2024-04-01 19:47:53.698962372Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e2c4351e647df49ae68dd6f2fa48f97da0f1ed020146f07c9bbdb71c7322f49,PodSandboxId:4701e6ea238d3a457ae5d4bc391b2accae58745f1cf91ea34bcc52cd75572c95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000330615521794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8285w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c450ac4a-974e-4322-9857-fb65792a142b,},Annotations:map[string]string{io.kubernetes.container.hash: dbadeb1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2af3697d6e04bee6abaee3f98618549c39fca9d37eddd9100e9771f5eba7b1,PodSandboxId:3f5d21b0de00d1968a6ebc70f5fb997ad6e4dc10ac8a3026c5fd5168a5cc3c63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000330510043914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wmbsp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 7a73f081-42f4-4854-8785-25e54eb0a391,},Annotations:map[string]string{io.kubernetes.container.hash: 1d4d9764,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ca3d05f0a38f853d588e5798478f86bb56b88053de18a36ea6b1e870765da7,PodSandboxId:cda23238357a2063801c1abff5e6ad8f29637f887f36a5e983eee1fc766fa94b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNIN
G,CreatedAt:1712000330235492797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5dmtl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c243321b-b01a-4fd5-895a-888d18ee8527,},Annotations:map[string]string{io.kubernetes.container.hash: 1e020a5c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9502cf2be2504fa95b6c845c5edbe06eec2770fd8f386b59ff0912c421b5487,PodSandboxId:10ec86b24247f820acd7ac516c02d1aa6ce20c41db4c2edbd2a1132ef78f6beb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171200033028
7549498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 844e010a-3bee-4fd1-942f-10fa50306617,},Annotations:map[string]string{io.kubernetes.container.hash: 270324,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d8c76c0d24fb6089861d6209885e1e72f2160d2a54aa2ae20ee28159bf7d04f,PodSandboxId:7e6e848a4b8f86f422c68afd32a16ed2602dcfcff914090100461fbebee7046f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712000308571586900,Labels:map[
string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b789cd5a965a93fdde5e5001723f860,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72b29fac8ad2c0ec17d315f60a2c02d84311bda4e914417b34f76337547f7e08,PodSandboxId:3192a1acf8fe3aa65d5c638eb83b366935becfb9224ae3954541ddae7e0c414d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712000308553339506,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2756d053566b209913a2136d1c6d31a2,},Annotations:map[string]string{io.kubernetes.container.hash: 99525366,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a23edb2b9de3784d6936a19fdaf8e118994492e2fced9fefada45236cb9557e,PodSandboxId:e74d490fbbd3448b2889e49065c366b4bad295c4c2e353146c37b612926968a1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712000308512361713,Labels:map[string]string{io.kubernetes.container.name: kub
e-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f722d8aca3d6408d9cd66a3365e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf976a2ea41c7979ac65b40414e90e16efe03daee69ec3f9ce96f1244b6438c,PodSandboxId:f1639384d1e8344ca240afa1c5d14eace564211fc2c6c7589db56929dc22cb7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712000308438602838,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0e863cc75ae379be03fd049b1c5a0e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d785418,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1afa44f2-fc1e-4b06-82c5-9fc0bf5b1841 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0e2c4351e647d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   4701e6ea238d3       coredns-7db6d8ff4d-8285w
	bd2af3697d6e0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   3f5d21b0de00d       coredns-7db6d8ff4d-wmbsp
	f9502cf2be250       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   10ec86b24247f       storage-provisioner
	46ca3d05f0a38       33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652   9 minutes ago       Running             kube-proxy                0                   cda23238357a2       kube-proxy-5dmtl
	7d8c76c0d24fb       fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5   9 minutes ago       Running             kube-scheduler            2                   7e6e848a4b8f8       kube-scheduler-no-preload-472858
	72b29fac8ad2c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   3192a1acf8fe3       etcd-no-preload-472858
	8a23edb2b9de3       ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a   9 minutes ago       Running             kube-controller-manager   3                   e74d490fbbd34       kube-controller-manager-no-preload-472858
	ddf976a2ea41c       e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3   9 minutes ago       Running             kube-apiserver            3                   f1639384d1e83       kube-apiserver-no-preload-472858
	
	
	==> coredns [0e2c4351e647df49ae68dd6f2fa48f97da0f1ed020146f07c9bbdb71c7322f49] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [bd2af3697d6e04bee6abaee3f98618549c39fca9d37eddd9100e9771f5eba7b1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-472858
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-472858
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=no-preload-472858
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_01T19_38_35_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 19:38:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-472858
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 19:47:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 19:44:01 +0000   Mon, 01 Apr 2024 19:38:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 19:44:01 +0000   Mon, 01 Apr 2024 19:38:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 19:44:01 +0000   Mon, 01 Apr 2024 19:38:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 19:44:01 +0000   Mon, 01 Apr 2024 19:38:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.119
	  Hostname:    no-preload-472858
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f58e7d8dd7b64c348661c23ebbbcfe34
	  System UUID:                f58e7d8d-d7b6-4c34-8661-c23ebbbcfe34
	  Boot ID:                    7413a65d-979c-478f-b26e-c08fd2fd5be2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.0
	  Kube-Proxy Version:         v1.30.0-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-8285w                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 coredns-7db6d8ff4d-wmbsp                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 etcd-no-preload-472858                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-no-preload-472858             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-no-preload-472858    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-5dmtl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-scheduler-no-preload-472858             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-569cc877fc-wj2tt              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)    0 (0%)
	  memory             440Mi (20%)   340Mi (16%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m2s   kube-proxy       
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s  kubelet          Node no-preload-472858 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s  kubelet          Node no-preload-472858 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s  kubelet          Node no-preload-472858 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m7s   node-controller  Node no-preload-472858 event: Registered Node no-preload-472858 in Controller
	
	
	==> dmesg <==
	[  +5.070005] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.515202] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.757516] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr 1 19:32] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.064665] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068468] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.203188] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.132355] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.305124] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[ +17.314964] systemd-fstab-generator[1196]: Ignoring "noauto" option for root device
	[  +0.073112] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.892653] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[ +22.443307] kauditd_printk_skb: 90 callbacks suppressed
	[Apr 1 19:33] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.039978] kauditd_printk_skb: 30 callbacks suppressed
	[ +30.417040] kauditd_printk_skb: 24 callbacks suppressed
	[Apr 1 19:38] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.757296] systemd-fstab-generator[3945]: Ignoring "noauto" option for root device
	[  +7.563746] systemd-fstab-generator[4273]: Ignoring "noauto" option for root device
	[  +0.083322] kauditd_printk_skb: 57 callbacks suppressed
	[ +13.331040] systemd-fstab-generator[4476]: Ignoring "noauto" option for root device
	[  +0.116770] kauditd_printk_skb: 12 callbacks suppressed
	[Apr 1 19:39] kauditd_printk_skb: 76 callbacks suppressed
	
	
	==> etcd [72b29fac8ad2c0ec17d315f60a2c02d84311bda4e914417b34f76337547f7e08] <==
	{"level":"info","ts":"2024-04-01T19:38:28.973683Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-01T19:38:28.978652Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-01T19:38:28.978892Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-01T19:38:28.979009Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-01T19:38:28.979227Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"a39a7858c1cd6fec","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-04-01T19:38:28.982617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a39a7858c1cd6fec switched to configuration voters=(11788867297199615980)"}
	{"level":"info","ts":"2024-04-01T19:38:28.98301Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"807e03d5c68d6646","local-member-id":"a39a7858c1cd6fec","added-peer-id":"a39a7858c1cd6fec","added-peer-peer-urls":["https://192.168.72.119:2380"]}
	{"level":"info","ts":"2024-04-01T19:38:29.926235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a39a7858c1cd6fec is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-01T19:38:29.926338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a39a7858c1cd6fec became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-01T19:38:29.92639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a39a7858c1cd6fec received MsgPreVoteResp from a39a7858c1cd6fec at term 1"}
	{"level":"info","ts":"2024-04-01T19:38:29.926426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a39a7858c1cd6fec became candidate at term 2"}
	{"level":"info","ts":"2024-04-01T19:38:29.92645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a39a7858c1cd6fec received MsgVoteResp from a39a7858c1cd6fec at term 2"}
	{"level":"info","ts":"2024-04-01T19:38:29.926479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a39a7858c1cd6fec became leader at term 2"}
	{"level":"info","ts":"2024-04-01T19:38:29.926508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a39a7858c1cd6fec elected leader a39a7858c1cd6fec at term 2"}
	{"level":"info","ts":"2024-04-01T19:38:29.930492Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a39a7858c1cd6fec","local-member-attributes":"{Name:no-preload-472858 ClientURLs:[https://192.168.72.119:2379]}","request-path":"/0/members/a39a7858c1cd6fec/attributes","cluster-id":"807e03d5c68d6646","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-01T19:38:29.930888Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:38:29.931245Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T19:38:29.931639Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T19:38:29.937775Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.119:2379"}
	{"level":"info","ts":"2024-04-01T19:38:29.939616Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"807e03d5c68d6646","local-member-id":"a39a7858c1cd6fec","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:38:29.939812Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:38:29.941269Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-01T19:38:29.941379Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-01T19:38:29.941422Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:38:29.942737Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:47:54 up 16 min,  0 users,  load average: 0.23, 0.21, 0.17
	Linux no-preload-472858 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ddf976a2ea41c7979ac65b40414e90e16efe03daee69ec3f9ce96f1244b6438c] <==
	I0401 19:41:49.954357       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:43:31.886820       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:43:31.887017       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0401 19:43:32.887482       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:43:32.887560       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0401 19:43:32.887569       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:43:32.887747       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:43:32.887902       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 19:43:32.889177       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:44:32.888250       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:44:32.888973       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0401 19:44:32.889409       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:44:32.889363       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:44:32.889571       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 19:44:32.890524       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:46:32.890043       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:46:32.890557       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0401 19:46:32.890593       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:46:32.890813       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:46:32.890994       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 19:46:32.892607       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [8a23edb2b9de3784d6936a19fdaf8e118994492e2fced9fefada45236cb9557e] <==
	I0401 19:42:17.785394       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:42:47.250548       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:42:47.796633       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:43:17.256899       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:43:17.808432       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:43:47.266772       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:43:47.819714       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:44:17.273442       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:44:17.828769       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:44:47.280355       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:44:47.838697       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0401 19:44:51.807393       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="300.665µs"
	I0401 19:45:03.804612       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="173.187µs"
	E0401 19:45:17.286650       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:45:17.849437       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:45:47.294322       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:45:47.861550       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:46:17.299628       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:46:17.870360       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:46:47.305077       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:46:47.878399       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:47:17.311015       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:47:17.887563       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:47:47.318669       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:47:47.896269       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [46ca3d05f0a38f853d588e5798478f86bb56b88053de18a36ea6b1e870765da7] <==
	I0401 19:38:50.854926       1 server_linux.go:69] "Using iptables proxy"
	I0401 19:38:50.939277       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.119"]
	I0401 19:38:51.062492       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0401 19:38:51.062605       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 19:38:51.062720       1 server_linux.go:165] "Using iptables Proxier"
	I0401 19:38:51.066084       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 19:38:51.066355       1 server.go:872] "Version info" version="v1.30.0-rc.0"
	I0401 19:38:51.066556       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 19:38:51.067749       1 config.go:192] "Starting service config controller"
	I0401 19:38:51.067809       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 19:38:51.068018       1 config.go:101] "Starting endpoint slice config controller"
	I0401 19:38:51.068049       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 19:38:51.068803       1 config.go:319] "Starting node config controller"
	I0401 19:38:51.068883       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 19:38:51.169017       1 shared_informer.go:320] Caches are synced for node config
	I0401 19:38:51.169108       1 shared_informer.go:320] Caches are synced for service config
	I0401 19:38:51.169408       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7d8c76c0d24fb6089861d6209885e1e72f2160d2a54aa2ae20ee28159bf7d04f] <==
	E0401 19:38:31.916210       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 19:38:31.916354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0401 19:38:32.734357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 19:38:32.734412       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0401 19:38:32.809526       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0401 19:38:32.809619       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0401 19:38:32.840560       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 19:38:32.841290       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0401 19:38:32.872515       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 19:38:32.872609       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0401 19:38:32.920055       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 19:38:32.920243       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0401 19:38:32.985992       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 19:38:32.986063       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0401 19:38:33.025946       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 19:38:33.026003       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0401 19:38:33.031506       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 19:38:33.031555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0401 19:38:33.056988       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 19:38:33.057046       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0401 19:38:33.191520       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 19:38:33.191600       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0401 19:38:33.323205       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 19:38:33.323301       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0401 19:38:35.705232       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 19:45:34 no-preload-472858 kubelet[4280]: E0401 19:45:34.840807    4280 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 01 19:45:34 no-preload-472858 kubelet[4280]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 19:45:34 no-preload-472858 kubelet[4280]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 19:45:34 no-preload-472858 kubelet[4280]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 19:45:34 no-preload-472858 kubelet[4280]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 19:45:39 no-preload-472858 kubelet[4280]: E0401 19:45:39.787328    4280 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wj2tt" podUID="5259722c-3d0b-468f-b941-419806e91177"
	Apr 01 19:45:53 no-preload-472858 kubelet[4280]: E0401 19:45:53.787604    4280 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wj2tt" podUID="5259722c-3d0b-468f-b941-419806e91177"
	Apr 01 19:46:08 no-preload-472858 kubelet[4280]: E0401 19:46:08.787456    4280 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wj2tt" podUID="5259722c-3d0b-468f-b941-419806e91177"
	Apr 01 19:46:20 no-preload-472858 kubelet[4280]: E0401 19:46:20.789387    4280 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wj2tt" podUID="5259722c-3d0b-468f-b941-419806e91177"
	Apr 01 19:46:33 no-preload-472858 kubelet[4280]: E0401 19:46:33.787891    4280 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wj2tt" podUID="5259722c-3d0b-468f-b941-419806e91177"
	Apr 01 19:46:34 no-preload-472858 kubelet[4280]: E0401 19:46:34.840896    4280 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 01 19:46:34 no-preload-472858 kubelet[4280]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 19:46:34 no-preload-472858 kubelet[4280]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 19:46:34 no-preload-472858 kubelet[4280]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 19:46:34 no-preload-472858 kubelet[4280]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 19:46:47 no-preload-472858 kubelet[4280]: E0401 19:46:47.786701    4280 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wj2tt" podUID="5259722c-3d0b-468f-b941-419806e91177"
	Apr 01 19:47:01 no-preload-472858 kubelet[4280]: E0401 19:47:01.786996    4280 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wj2tt" podUID="5259722c-3d0b-468f-b941-419806e91177"
	Apr 01 19:47:15 no-preload-472858 kubelet[4280]: E0401 19:47:15.787088    4280 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wj2tt" podUID="5259722c-3d0b-468f-b941-419806e91177"
	Apr 01 19:47:27 no-preload-472858 kubelet[4280]: E0401 19:47:27.786905    4280 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wj2tt" podUID="5259722c-3d0b-468f-b941-419806e91177"
	Apr 01 19:47:34 no-preload-472858 kubelet[4280]: E0401 19:47:34.840957    4280 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 01 19:47:34 no-preload-472858 kubelet[4280]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 19:47:34 no-preload-472858 kubelet[4280]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 19:47:34 no-preload-472858 kubelet[4280]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 19:47:34 no-preload-472858 kubelet[4280]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 19:47:41 no-preload-472858 kubelet[4280]: E0401 19:47:41.786757    4280 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wj2tt" podUID="5259722c-3d0b-468f-b941-419806e91177"
	
	
	==> storage-provisioner [f9502cf2be2504fa95b6c845c5edbe06eec2770fd8f386b59ff0912c421b5487] <==
	I0401 19:38:50.540773       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0401 19:38:50.571915       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0401 19:38:50.571995       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0401 19:38:50.600199       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0401 19:38:50.600417       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-472858_8e8fb087-282f-403f-822d-4406a8190986!
	I0401 19:38:50.601184       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"13e18572-9570-426f-89a4-6efeed51df99", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-472858_8e8fb087-282f-403f-822d-4406a8190986 became leader
	I0401 19:38:50.727383       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-472858_8e8fb087-282f-403f-822d-4406a8190986!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-472858 -n no-preload-472858
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-472858 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-wj2tt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-472858 describe pod metrics-server-569cc877fc-wj2tt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-472858 describe pod metrics-server-569cc877fc-wj2tt: exit status 1 (63.598752ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-wj2tt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-472858 describe pod metrics-server-569cc877fc-wj2tt: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.48s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
E0401 19:40:06.173142   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/custom-flannel-408543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
E0401 19:40:58.744423   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/enable-default-cni-408543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
E0401 19:41:08.540492   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/calico-408543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
E0401 19:41:29.220900   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/custom-flannel-408543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
E0401 19:41:44.321892   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
E0401 19:41:55.903241   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
E0401 19:42:21.788714   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/enable-default-cni-408543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
E0401 19:42:45.217234   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
E0401 19:42:59.323040   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/auto-408543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
E0401 19:43:07.367445   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
E0401 19:43:14.750783   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
E0401 19:43:52.854892   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
E0401 19:44:08.261842   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
(last message repeated 7 times)
E0401 19:44:16.857211   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
(last message repeated 28 times)
E0401 19:44:45.495614   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/calico-408543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
(last message repeated 20 times)
E0401 19:45:06.172497   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/custom-flannel-408543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
(last message repeated 51 times)
E0401 19:45:58.743650   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/enable-default-cni-408543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
(last message repeated 45 times)
E0401 19:46:44.321775   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
E0401 19:47:19.906606   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
E0401 19:47:45.217243   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
E0401 19:47:59.323213   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/auto-408543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
E0401 19:48:14.750247   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
E0401 19:48:52.855297   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-163608 -n old-k8s-version-163608
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-163608 -n old-k8s-version-163608: exit status 2 (250.990691ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-163608" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
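For manual triage, the label-selector query that helpers_test.go was polling above can be approximated with kubectl once the apiserver is reachable again (a sketch only; the kubeconfig context name is assumed to match the minikube profile old-k8s-version-163608):

	kubectl --context old-k8s-version-163608 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide

If the apiserver stays down, this returns the same "connection refused" error seen in the poll warnings, which is why the harness skipped its kubectl commands here.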
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163608 -n old-k8s-version-163608
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163608 -n old-k8s-version-163608: exit status 2 (240.832813ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
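The Stopped/Running values above come from minikube status rendered through a Go template via --format. Combining the two fields used in this report into a single call (a sketch, assuming both fields can be referenced together in one template, as each is individually in the commands above) would look like:

	out/minikube-linux-amd64 status --format='{{.Host}} {{.APIServer}}' -p old-k8s-version-163608 -n old-k8s-version-163608

A Running host alongside a Stopped apiserver is consistent with the post-mortem that follows: the VM came back after the stop/start, but the control plane never became healthy again within the 9m0s wait.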
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-163608 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-163608 logs -n 25: (1.639646242s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p bridge-408543 sudo cat                              | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo                                  | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | containerd config dump                                 |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo                                  | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | systemctl status crio --all                            |                              |         |                |                     |                     |
	|         | --full --no-pager                                      |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo                                  | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo find                             | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo crio                             | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p bridge-408543                                       | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	| delete  | -p                                                     | disable-driver-mounts-580301 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | disable-driver-mounts-580301                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:24 UTC |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-472858             | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-472858                                   | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-882095            | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:24 UTC | 01 Apr 24 19:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-882095                                  | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:24 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-734648  | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC | 01 Apr 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC |                     |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-472858                  | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-472858                                   | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC | 01 Apr 24 19:38 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-163608        | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-882095                 | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-882095                                  | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC | 01 Apr 24 19:36 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-734648       | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:36 UTC |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-163608                              | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-163608             | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-163608                              | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 19:27:52
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 19:27:52.967684   71168 out.go:291] Setting OutFile to fd 1 ...
	I0401 19:27:52.967904   71168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:27:52.967912   71168 out.go:304] Setting ErrFile to fd 2...
	I0401 19:27:52.967916   71168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:27:52.968071   71168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 19:27:52.968601   71168 out.go:298] Setting JSON to false
	I0401 19:27:52.969458   71168 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7825,"bootTime":1711991848,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:27:52.969511   71168 start.go:139] virtualization: kvm guest
	I0401 19:27:52.972337   71168 out.go:177] * [old-k8s-version-163608] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 19:27:52.973728   71168 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 19:27:52.973774   71168 notify.go:220] Checking for updates...
	I0401 19:27:52.975050   71168 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:27:52.976498   71168 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:27:52.977880   71168 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:27:52.979140   71168 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 19:27:52.980397   71168 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 19:27:52.982116   71168 config.go:182] Loaded profile config "old-k8s-version-163608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 19:27:52.982478   71168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:27:52.982569   71168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:27:52.996903   71168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44083
	I0401 19:27:52.997230   71168 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:27:52.997702   71168 main.go:141] libmachine: Using API Version  1
	I0401 19:27:52.997724   71168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:27:52.998082   71168 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:27:52.998286   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:27:53.000287   71168 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0401 19:27:53.001714   71168 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 19:27:53.001993   71168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:27:53.002030   71168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:27:53.016155   71168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0401 19:27:53.016524   71168 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:27:53.016981   71168 main.go:141] libmachine: Using API Version  1
	I0401 19:27:53.017003   71168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:27:53.017352   71168 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:27:53.017550   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:27:53.051163   71168 out.go:177] * Using the kvm2 driver based on existing profile
	I0401 19:27:53.052475   71168 start.go:297] selected driver: kvm2
	I0401 19:27:53.052488   71168 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:27:53.052621   71168 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 19:27:53.053266   71168 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:27:53.053349   71168 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 19:27:53.067629   71168 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 19:27:53.067994   71168 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:27:53.068065   71168 cni.go:84] Creating CNI manager for ""
	I0401 19:27:53.068083   71168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:27:53.068130   71168 start.go:340] cluster config:
	{Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:27:53.068640   71168 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:27:53.070506   71168 out.go:177] * Starting "old-k8s-version-163608" primary control-plane node in "old-k8s-version-163608" cluster
	I0401 19:27:53.071686   71168 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 19:27:53.071716   71168 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0401 19:27:53.071726   71168 cache.go:56] Caching tarball of preloaded images
	I0401 19:27:53.071807   71168 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 19:27:53.071818   71168 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0401 19:27:53.071904   71168 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/config.json ...
	I0401 19:27:53.072076   71168 start.go:360] acquireMachinesLock for old-k8s-version-163608: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 19:27:57.821850   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:00.893934   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:06.973950   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:10.045903   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:16.125969   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:19.197902   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:25.277903   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:28.349963   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:34.429888   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:37.501886   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:43.581910   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:46.653871   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:52.733856   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:55.805957   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:01.885878   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:04.957919   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:11.037896   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:14.109854   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:20.189885   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:23.261848   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:29.341931   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:32.414013   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:38.493870   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:41.565912   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:47.645887   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:50.717882   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:56.797886   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:59.869824   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:05.949894   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:09.021905   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:15.101943   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:18.173911   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:24.253875   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:27.325874   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:33.405945   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:36.477889   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:39.482773   70687 start.go:364] duration metric: took 3m52.901392005s to acquireMachinesLock for "embed-certs-882095"
	I0401 19:30:39.482825   70687 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:30:39.482831   70687 fix.go:54] fixHost starting: 
	I0401 19:30:39.483206   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:30:39.483272   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:30:39.498155   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0401 19:30:39.498587   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:30:39.499013   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:30:39.499032   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:30:39.499400   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:30:39.499572   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:30:39.499760   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:30:39.501361   70687 fix.go:112] recreateIfNeeded on embed-certs-882095: state=Stopped err=<nil>
	I0401 19:30:39.501398   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	W0401 19:30:39.501552   70687 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:30:39.504183   70687 out.go:177] * Restarting existing kvm2 VM for "embed-certs-882095" ...
	I0401 19:30:39.505410   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Start
	I0401 19:30:39.505549   70687 main.go:141] libmachine: (embed-certs-882095) Ensuring networks are active...
	I0401 19:30:39.506257   70687 main.go:141] libmachine: (embed-certs-882095) Ensuring network default is active
	I0401 19:30:39.506533   70687 main.go:141] libmachine: (embed-certs-882095) Ensuring network mk-embed-certs-882095 is active
	I0401 19:30:39.506892   70687 main.go:141] libmachine: (embed-certs-882095) Getting domain xml...
	I0401 19:30:39.507632   70687 main.go:141] libmachine: (embed-certs-882095) Creating domain...
	I0401 19:30:40.693316   70687 main.go:141] libmachine: (embed-certs-882095) Waiting to get IP...
	I0401 19:30:40.694095   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:40.694551   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:40.694597   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:40.694519   71595 retry.go:31] will retry after 283.185096ms: waiting for machine to come up
	I0401 19:30:40.979028   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:40.979500   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:40.979523   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:40.979452   71595 retry.go:31] will retry after 297.637907ms: waiting for machine to come up
	I0401 19:30:41.279111   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:41.279457   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:41.279479   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:41.279411   71595 retry.go:31] will retry after 366.625363ms: waiting for machine to come up
	I0401 19:30:39.480214   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:30:39.480252   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:30:39.480557   70284 buildroot.go:166] provisioning hostname "no-preload-472858"
	I0401 19:30:39.480583   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:30:39.480787   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:30:39.482626   70284 machine.go:97] duration metric: took 4m37.415031648s to provisionDockerMachine
	I0401 19:30:39.482666   70284 fix.go:56] duration metric: took 4m37.43830515s for fixHost
	I0401 19:30:39.482676   70284 start.go:83] releasing machines lock for "no-preload-472858", held for 4m37.438344965s
	W0401 19:30:39.482704   70284 start.go:713] error starting host: provision: host is not running
	W0401 19:30:39.482794   70284 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0401 19:30:39.482805   70284 start.go:728] Will try again in 5 seconds ...
	I0401 19:30:41.647682   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:41.648045   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:41.648097   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:41.648026   71595 retry.go:31] will retry after 373.762437ms: waiting for machine to come up
	I0401 19:30:42.023500   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:42.023868   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:42.023904   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:42.023836   71595 retry.go:31] will retry after 461.430639ms: waiting for machine to come up
	I0401 19:30:42.486384   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:42.486836   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:42.486863   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:42.486784   71595 retry.go:31] will retry after 718.511667ms: waiting for machine to come up
	I0401 19:30:43.206555   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:43.206983   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:43.207006   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:43.206939   71595 retry.go:31] will retry after 907.934415ms: waiting for machine to come up
	I0401 19:30:44.115840   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:44.116223   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:44.116259   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:44.116173   71595 retry.go:31] will retry after 1.178492069s: waiting for machine to come up
	I0401 19:30:45.295704   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:45.296117   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:45.296146   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:45.296071   71595 retry.go:31] will retry after 1.188920707s: waiting for machine to come up
	I0401 19:30:44.484802   70284 start.go:360] acquireMachinesLock for no-preload-472858: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 19:30:46.486217   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:46.486777   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:46.486816   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:46.486740   71595 retry.go:31] will retry after 2.12728618s: waiting for machine to come up
	I0401 19:30:48.617124   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:48.617521   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:48.617553   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:48.617468   71595 retry.go:31] will retry after 2.867613028s: waiting for machine to come up
	I0401 19:30:51.488009   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:51.491502   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:51.491533   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:51.488532   71595 retry.go:31] will retry after 3.42206094s: waiting for machine to come up
	I0401 19:30:54.911723   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:54.912098   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:54.912127   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:54.912059   71595 retry.go:31] will retry after 4.263880792s: waiting for machine to come up
	I0401 19:31:00.450770   70962 start.go:364] duration metric: took 3m22.921307899s to acquireMachinesLock for "default-k8s-diff-port-734648"
	I0401 19:31:00.450836   70962 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:31:00.450854   70962 fix.go:54] fixHost starting: 
	I0401 19:31:00.451364   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:31:00.451401   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:31:00.467219   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45255
	I0401 19:31:00.467579   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:31:00.467998   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:31:00.468021   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:31:00.468368   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:31:00.468567   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:00.468740   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:31:00.470224   70962 fix.go:112] recreateIfNeeded on default-k8s-diff-port-734648: state=Stopped err=<nil>
	I0401 19:31:00.470251   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	W0401 19:31:00.470396   70962 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:31:00.472906   70962 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-734648" ...
	I0401 19:30:59.180302   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.180756   70687 main.go:141] libmachine: (embed-certs-882095) Found IP for machine: 192.168.39.190
	I0401 19:30:59.180778   70687 main.go:141] libmachine: (embed-certs-882095) Reserving static IP address...
	I0401 19:30:59.180794   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has current primary IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.181269   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "embed-certs-882095", mac: "52:54:00:8c:f1:a7", ip: "192.168.39.190"} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.181300   70687 main.go:141] libmachine: (embed-certs-882095) DBG | skip adding static IP to network mk-embed-certs-882095 - found existing host DHCP lease matching {name: "embed-certs-882095", mac: "52:54:00:8c:f1:a7", ip: "192.168.39.190"}
	I0401 19:30:59.181311   70687 main.go:141] libmachine: (embed-certs-882095) Reserved static IP address: 192.168.39.190
	I0401 19:30:59.181324   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Getting to WaitForSSH function...
	I0401 19:30:59.181331   70687 main.go:141] libmachine: (embed-certs-882095) Waiting for SSH to be available...
	I0401 19:30:59.183293   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.183599   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.183630   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.183756   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Using SSH client type: external
	I0401 19:30:59.183784   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa (-rw-------)
	I0401 19:30:59.183837   70687 main.go:141] libmachine: (embed-certs-882095) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:30:59.183863   70687 main.go:141] libmachine: (embed-certs-882095) DBG | About to run SSH command:
	I0401 19:30:59.183924   70687 main.go:141] libmachine: (embed-certs-882095) DBG | exit 0
	I0401 19:30:59.305707   70687 main.go:141] libmachine: (embed-certs-882095) DBG | SSH cmd err, output: <nil>: 
	I0401 19:30:59.306036   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetConfigRaw
	I0401 19:30:59.306679   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetIP
	I0401 19:30:59.309266   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.309680   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.309711   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.309938   70687 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/config.json ...
	I0401 19:30:59.310193   70687 machine.go:94] provisionDockerMachine start ...
	I0401 19:30:59.310219   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:30:59.310435   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.312549   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.312908   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.312930   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.313088   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.313247   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.313385   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.313502   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.313721   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:30:59.313894   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:30:59.313904   70687 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:30:59.418216   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:30:59.418244   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetMachineName
	I0401 19:30:59.418506   70687 buildroot.go:166] provisioning hostname "embed-certs-882095"
	I0401 19:30:59.418537   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetMachineName
	I0401 19:30:59.418703   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.421075   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.421411   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.421453   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.421534   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.421721   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.421867   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.421978   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.422122   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:30:59.422317   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:30:59.422332   70687 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-882095 && echo "embed-certs-882095" | sudo tee /etc/hostname
	I0401 19:30:59.541974   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-882095
	
	I0401 19:30:59.542006   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.544628   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.544992   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.545025   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.545193   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.545403   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.545566   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.545720   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.545906   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:30:59.546060   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:30:59.546077   70687 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-882095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-882095/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-882095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:30:59.660103   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:30:59.660134   70687 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:30:59.660161   70687 buildroot.go:174] setting up certificates
	I0401 19:30:59.660172   70687 provision.go:84] configureAuth start
	I0401 19:30:59.660193   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetMachineName
	I0401 19:30:59.660465   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetIP
	I0401 19:30:59.662943   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.663260   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.663302   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.663413   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.665390   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.665688   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.665719   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.665821   70687 provision.go:143] copyHostCerts
	I0401 19:30:59.665879   70687 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:30:59.665892   70687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:30:59.665956   70687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:30:59.666041   70687 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:30:59.666048   70687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:30:59.666071   70687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:30:59.666121   70687 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:30:59.666128   70687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:30:59.666148   70687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:30:59.666193   70687 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.embed-certs-882095 san=[127.0.0.1 192.168.39.190 embed-certs-882095 localhost minikube]
	I0401 19:30:59.761975   70687 provision.go:177] copyRemoteCerts
	I0401 19:30:59.762033   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:30:59.762058   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.764277   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.764601   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.764626   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.764832   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.765006   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.765155   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.765250   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:30:59.848158   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 19:30:59.875879   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:30:59.902573   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 19:30:59.928757   70687 provision.go:87] duration metric: took 268.570153ms to configureAuth
	I0401 19:30:59.928781   70687 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:30:59.928924   70687 config.go:182] Loaded profile config "embed-certs-882095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:30:59.928988   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.931187   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.931571   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.931600   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.931755   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.931914   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.932067   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.932176   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.932325   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:30:59.932506   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:30:59.932530   70687 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:31:00.214527   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:31:00.214552   70687 machine.go:97] duration metric: took 904.342981ms to provisionDockerMachine
	I0401 19:31:00.214563   70687 start.go:293] postStartSetup for "embed-certs-882095" (driver="kvm2")
	I0401 19:31:00.214574   70687 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:31:00.214587   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.214892   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:31:00.214920   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:31:00.217289   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.217580   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.217608   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.217828   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:31:00.218014   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.218137   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:31:00.218267   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:31:00.301379   70687 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:31:00.306211   70687 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:31:00.306231   70687 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:31:00.306284   70687 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:31:00.306377   70687 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:31:00.306459   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:31:00.316524   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:00.342848   70687 start.go:296] duration metric: took 128.272743ms for postStartSetup
	I0401 19:31:00.342887   70687 fix.go:56] duration metric: took 20.860054972s for fixHost
	I0401 19:31:00.342910   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:31:00.345429   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.345883   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.345915   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.346060   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:31:00.346288   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.346504   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.346656   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:31:00.346806   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:00.346961   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:31:00.346972   70687 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 19:31:00.450606   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999860.420567604
	
	I0401 19:31:00.450627   70687 fix.go:216] guest clock: 1711999860.420567604
	I0401 19:31:00.450635   70687 fix.go:229] Guest: 2024-04-01 19:31:00.420567604 +0000 UTC Remote: 2024-04-01 19:31:00.34289204 +0000 UTC m=+253.905703085 (delta=77.675564ms)
	I0401 19:31:00.450683   70687 fix.go:200] guest clock delta is within tolerance: 77.675564ms
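The fix step above compares the guest's clock against the host's and accepts the 77.675564ms delta as within tolerance. A minimal sketch of that check; the one-second tolerance here is an assumption for illustration, not the value minikube uses:

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance returns the absolute guest/host clock difference
// and whether it is small enough to skip resyncing the guest clock.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(77 * time.Millisecond) // comparable to the 77.675564ms delta in the log
	d, ok := clockDeltaWithinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
}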
	I0401 19:31:00.450693   70687 start.go:83] releasing machines lock for "embed-certs-882095", held for 20.967887876s
	I0401 19:31:00.450725   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.451011   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetIP
	I0401 19:31:00.453581   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.453959   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.453990   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.454112   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.454613   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.454788   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.454844   70687 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:31:00.454886   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:31:00.454997   70687 ssh_runner.go:195] Run: cat /version.json
	I0401 19:31:00.455019   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:31:00.457540   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.457811   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.457846   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.457878   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.458053   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:31:00.458141   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.458173   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.458217   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.458295   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:31:00.458387   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:31:00.458471   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.458556   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:31:00.458602   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:31:00.458741   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:31:00.569039   70687 ssh_runner.go:195] Run: systemctl --version
	I0401 19:31:00.575452   70687 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:31:00.728549   70687 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:31:00.735559   70687 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:31:00.735642   70687 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:31:00.756640   70687 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:31:00.756669   70687 start.go:494] detecting cgroup driver to use...
	I0401 19:31:00.756743   70687 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:31:00.776638   70687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:31:00.793006   70687 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:31:00.793063   70687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:31:00.809240   70687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:31:00.825245   70687 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:31:00.952595   70687 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:31:01.109771   70687 docker.go:233] disabling docker service ...
	I0401 19:31:01.109841   70687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:31:01.126814   70687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:31:01.141976   70687 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:31:01.301634   70687 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:31:01.440350   70687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:31:01.458083   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:31:01.479653   70687 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:31:01.479730   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.492598   70687 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:31:01.492677   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.506469   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.521981   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.534406   70687 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:31:01.546817   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.558857   70687 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.578922   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.593381   70687 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:31:01.605265   70687 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:31:01.605341   70687 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:31:01.621681   70687 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:31:01.633336   70687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:01.770373   70687 ssh_runner.go:195] Run: sudo systemctl restart crio
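The sed edits above leave four settings in the CRI-O drop-in before the restart: the pause image, the cgroupfs cgroup manager, the conmon cgroup, and the unprivileged-port sysctl. A minimal sketch of roughly what /etc/crio/crio.conf.d/02-crio.conf should contain afterwards; the TOML table names are my assumption, only the four values are taken from the log:

package main

import "fmt"

// crioDropIn approximates the drop-in produced by the sed commands in the log.
const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() { fmt.Print(crioDropIn) }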
	I0401 19:31:01.927892   70687 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:31:01.927952   70687 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:31:01.935046   70687 start.go:562] Will wait 60s for crictl version
	I0401 19:31:01.935101   70687 ssh_runner.go:195] Run: which crictl
	I0401 19:31:01.940563   70687 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:31:01.986956   70687 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:31:01.987030   70687 ssh_runner.go:195] Run: crio --version
	I0401 19:31:02.018567   70687 ssh_runner.go:195] Run: crio --version
	I0401 19:31:02.059077   70687 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 19:31:00.474118   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Start
	I0401 19:31:00.474275   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Ensuring networks are active...
	I0401 19:31:00.474896   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Ensuring network default is active
	I0401 19:31:00.475289   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Ensuring network mk-default-k8s-diff-port-734648 is active
	I0401 19:31:00.475650   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Getting domain xml...
	I0401 19:31:00.476263   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Creating domain...
	I0401 19:31:01.736646   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting to get IP...
	I0401 19:31:01.737490   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:01.737889   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:01.737939   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:01.737867   71724 retry.go:31] will retry after 198.445345ms: waiting for machine to come up
	I0401 19:31:01.938446   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:01.938981   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:01.939012   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:01.938936   71724 retry.go:31] will retry after 320.128802ms: waiting for machine to come up
	I0401 19:31:02.260257   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:02.260673   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:02.260703   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:02.260633   71724 retry.go:31] will retry after 357.316906ms: waiting for machine to come up
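The default-k8s-diff-port-734648 lines above are a retry loop waiting for the VM to obtain a DHCP lease, logging "will retry after ..." between attempts. A minimal sketch of that pattern; the backoff growth and the example IP below are illustrative, not minikube's retry.go schedule:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping a growing
// backoff between attempts, mirroring the "waiting for machine to come up"
// retries in the log.
func waitForIP(lookup func() (string, error), attempts int, backoff time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
		time.Sleep(backoff)
		backoff += backoff / 2 // grow the wait a little each round
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 3 {
			return "", errors.New("unable to find current IP address")
		}
		return "198.51.100.10", nil // placeholder address for the example
	}, 10, 200*time.Millisecond)
	fmt.Println(ip, err)
}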
	I0401 19:31:02.060343   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetIP
	I0401 19:31:02.063382   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:02.063775   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:02.063808   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:02.064047   70687 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 19:31:02.069227   70687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
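The /etc/hosts update above drops any stale host.minikube.internal line and appends a fresh one before copying the file back into place. A minimal sketch of the same upsert in Go; it writes the temp file next to the target instead of /tmp and needs root for /etc/hosts, so it is a simplification of the logged one-liner:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry keeps every line that does not end in "\t<name>" and
// appends "<ip>\t<name>", then swaps the rewritten file into place.
func upsertHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	tmp := hostsPath + ".new"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}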
	I0401 19:31:02.085344   70687 kubeadm.go:877] updating cluster {Name:embed-certs-882095 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-882095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:31:02.085451   70687 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 19:31:02.085490   70687 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:02.139383   70687 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0401 19:31:02.139454   70687 ssh_runner.go:195] Run: which lz4
	I0401 19:31:02.144331   70687 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0401 19:31:02.149534   70687 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:31:02.149561   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0401 19:31:03.954448   70687 crio.go:462] duration metric: took 1.810143668s to copy over tarball
	I0401 19:31:03.954523   70687 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 19:31:06.445735   70687 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.491184732s)
	I0401 19:31:06.445759   70687 crio.go:469] duration metric: took 2.491285648s to extract the tarball
	I0401 19:31:06.445765   70687 ssh_runner.go:146] rm: /preloaded.tar.lz4
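The preload flow above first inspects `sudo crictl images --output json`, decides the expected control-plane image is missing, copies the 402967820-byte tarball over and extracts it into /var, after which the same check passes. A minimal sketch of that image check; the JSON field names ("images", "repoTags") are assumptions about crictl's output shape, not something shown in this log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage runs crictl and reports whether the wanted tag is already present,
// which is the signal used to skip or perform the preload extraction.
func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var parsed crictlImages
	if err := json.Unmarshal(out, &parsed); err != nil {
		return false, err
	}
	for _, img := range parsed.Images {
		for _, tag := range img.RepoTags {
			if strings.EqualFold(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.29.3")
	fmt.Println(ok, err)
}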
	I0401 19:31:02.620250   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:02.620729   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:02.620760   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:02.620666   71724 retry.go:31] will retry after 520.509423ms: waiting for machine to come up
	I0401 19:31:03.142471   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:03.142902   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:03.142930   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:03.142864   71724 retry.go:31] will retry after 714.309176ms: waiting for machine to come up
	I0401 19:31:03.858594   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:03.859071   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:03.859104   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:03.859035   71724 retry.go:31] will retry after 620.601084ms: waiting for machine to come up
	I0401 19:31:04.480923   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:04.481350   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:04.481381   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:04.481313   71724 retry.go:31] will retry after 1.00716549s: waiting for machine to come up
	I0401 19:31:05.489788   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:05.490243   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:05.490273   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:05.490186   71724 retry.go:31] will retry after 1.158564029s: waiting for machine to come up
	I0401 19:31:06.650440   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:06.650969   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:06.650997   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:06.650915   71724 retry.go:31] will retry after 1.172294728s: waiting for machine to come up
	I0401 19:31:06.485475   70687 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:06.532426   70687 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:31:06.532448   70687 cache_images.go:84] Images are preloaded, skipping loading
	I0401 19:31:06.532455   70687 kubeadm.go:928] updating node { 192.168.39.190 8443 v1.29.3 crio true true} ...
	I0401 19:31:06.532544   70687 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-882095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-882095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:31:06.532611   70687 ssh_runner.go:195] Run: crio config
	I0401 19:31:06.585119   70687 cni.go:84] Creating CNI manager for ""
	I0401 19:31:06.585144   70687 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:06.585158   70687 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:31:06.585185   70687 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.190 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-882095 NodeName:embed-certs-882095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:31:06.585374   70687 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.190
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-882095"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.190
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.190"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:31:06.585473   70687 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 19:31:06.596747   70687 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:31:06.596818   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:31:06.606959   70687 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0401 19:31:06.628202   70687 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:31:06.649043   70687 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0401 19:31:06.668400   70687 ssh_runner.go:195] Run: grep 192.168.39.190	control-plane.minikube.internal$ /etc/hosts
	I0401 19:31:06.672469   70687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:06.685666   70687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:06.806186   70687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:31:06.823315   70687 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095 for IP: 192.168.39.190
	I0401 19:31:06.823355   70687 certs.go:194] generating shared ca certs ...
	I0401 19:31:06.823376   70687 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:06.823569   70687 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:31:06.823645   70687 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:31:06.823659   70687 certs.go:256] generating profile certs ...
	I0401 19:31:06.823764   70687 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/client.key
	I0401 19:31:06.823872   70687 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/apiserver.key.c07921ce
	I0401 19:31:06.823945   70687 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/proxy-client.key
	I0401 19:31:06.824092   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:31:06.824132   70687 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:31:06.824145   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:31:06.824183   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:31:06.824223   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:31:06.824254   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:31:06.824309   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:06.824942   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:31:06.867274   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:31:06.907288   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:31:06.948328   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:31:06.975058   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 19:31:07.003183   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 19:31:07.032030   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:31:07.061612   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:31:07.090149   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:31:07.116885   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:31:07.143296   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:31:07.169420   70687 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:31:07.188908   70687 ssh_runner.go:195] Run: openssl version
	I0401 19:31:07.195591   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:31:07.211583   70687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:31:07.217049   70687 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:31:07.217110   70687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:31:07.223751   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:31:07.237393   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:31:07.250523   70687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:07.255928   70687 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:07.255981   70687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:07.262373   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:31:07.275174   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:31:07.288039   70687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:31:07.293339   70687 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:31:07.293392   70687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:31:07.299983   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
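The certificate steps above compute each CA's OpenSSL subject hash and ensure /etc/ssl/certs/<hash>.0 links to the installed .pem, which is how OpenSSL-based clients locate trusted CAs. A minimal sketch of one such link, built from the two logged commands; the earlier link from /usr/share/ca-certificates into /etc/ssl/certs is omitted and error handling is trimmed:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCACert asks openssl for the subject hash of pemPath and creates the
// /etc/ssl/certs/<hash>.0 symlink if it is not already present.
func linkCACert(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	cmd := fmt.Sprintf("test -L %s || ln -fs %s %s", link, pemPath, link)
	return link, exec.Command("sudo", "/bin/bash", "-c", cmd).Run()
}

func main() {
	link, err := linkCACert("/etc/ssl/certs/minikubeCA.pem")
	fmt.Println(link, err)
}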
	I0401 19:31:07.313120   70687 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:31:07.318425   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:31:07.325172   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:31:07.331674   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:31:07.338299   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:31:07.344896   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:31:07.351424   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
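The `openssl x509 -checkend 86400` runs above verify that each control-plane certificate stays valid for at least another 24 hours. A minimal sketch of the same check done natively with crypto/x509, using one of the cert paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validForAnother reports whether the certificate at certPath is still valid
// d from now, the same condition -checkend 86400 tests for 24 hours.
func validForAnother(certPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validForAnother("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}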
	I0401 19:31:07.357898   70687 kubeadm.go:391] StartCluster: {Name:embed-certs-882095 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:embed-certs-882095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:31:07.357995   70687 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:31:07.358047   70687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:07.401268   70687 cri.go:89] found id: ""
	I0401 19:31:07.401326   70687 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:31:07.414232   70687 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:31:07.414255   70687 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:31:07.414262   70687 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:31:07.414308   70687 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:31:07.425972   70687 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:31:07.426977   70687 kubeconfig.go:125] found "embed-certs-882095" server: "https://192.168.39.190:8443"
	I0401 19:31:07.428767   70687 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:31:07.440164   70687 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.190
	I0401 19:31:07.440191   70687 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:31:07.440201   70687 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:31:07.440244   70687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:07.484303   70687 cri.go:89] found id: ""
	I0401 19:31:07.484407   70687 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:31:07.505186   70687 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:31:07.518316   70687 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:31:07.518342   70687 kubeadm.go:156] found existing configuration files:
	
	I0401 19:31:07.518393   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:31:07.530759   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:31:07.530832   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:31:07.542799   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:31:07.553972   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:31:07.554031   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:31:07.565324   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:31:07.576244   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:31:07.576318   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:31:07.588874   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:31:07.600440   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:31:07.600526   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:31:07.611963   70687 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:31:07.623225   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:07.740800   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:09.050887   70687 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.310046744s)
	I0401 19:31:09.050920   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:09.266170   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:09.336585   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
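The restart path above re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml rather than doing a full init. A minimal sketch, not minikube's restart code, that composes that command sequence from the values visible in the log:

package main

import "fmt"

// kubeadmPhaseCmds builds the phase commands run during the control-plane
// restart, each pointing at the generated config and the versioned binaries.
func kubeadmPhaseCmds(version, config string) []string {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	cmds := make([]string, 0, len(phases))
	for _, p := range phases {
		cmds = append(cmds, fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config %s`,
			version, p, config))
	}
	return cmds
}

func main() {
	for _, c := range kubeadmPhaseCmds("v1.29.3", "/var/tmp/minikube/kubeadm.yaml") {
		fmt.Println(c)
	}
}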
	I0401 19:31:09.422513   70687 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:31:09.422594   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:09.923709   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:10.422822   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:10.922892   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:10.946590   70687 api_server.go:72] duration metric: took 1.524076694s to wait for apiserver process to appear ...
	I0401 19:31:10.946627   70687 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:31:10.946650   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
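From here the log polls https://192.168.39.190:8443/healthz, treating the 403 (anonymous user during RBAC bootstrap) and 500 (post-start hooks still failing) responses below as "not ready yet". A minimal sketch of such a healthz wait; the timeouts are illustrative and TLS verification is skipped only because this is a self-signed test cluster endpoint:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver healthz endpoint until it returns 200 OK
// or the deadline passes, retrying on errors and non-200 status codes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.39.190:8443/healthz", 2*time.Minute))
}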
	I0401 19:31:07.825239   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:07.825629   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:07.825676   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:07.825586   71724 retry.go:31] will retry after 1.412332675s: waiting for machine to come up
	I0401 19:31:09.240010   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:09.240385   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:09.240416   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:09.240327   71724 retry.go:31] will retry after 2.601344034s: waiting for machine to come up
	I0401 19:31:11.843464   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:11.843948   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:11.843976   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:11.843900   71724 retry.go:31] will retry after 3.297720076s: waiting for machine to come up
	I0401 19:31:13.350274   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:31:13.350309   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:31:13.350325   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:13.383494   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:13.383543   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:13.447744   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:13.452796   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:13.452852   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:13.946971   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:13.951522   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:13.951554   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:14.447104   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:14.455165   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:14.455204   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:14.947278   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:14.951487   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I0401 19:31:14.958647   70687 api_server.go:141] control plane version: v1.29.3
	I0401 19:31:14.958670   70687 api_server.go:131] duration metric: took 4.012036456s to wait for apiserver health ...
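The retry loop above polls the apiserver's /healthz endpoint roughly every 500ms, logging the per-hook 500 responses, until the endpoint answers 200. A minimal Go sketch of that polling pattern follows; the URL, interval, and helper name are illustrative assumptions, not minikube's actual api_server.go code:

// Poll an apiserver-style /healthz endpoint until it returns 200 OK or the
// timeout expires, mirroring the retry loop in the log above (sketch only).
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The bootstrapping apiserver serves a self-signed cert, so skip verification here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz check passed
			}
		}
		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between attempts
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.190:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}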
	I0401 19:31:14.958687   70687 cni.go:84] Creating CNI manager for ""
	I0401 19:31:14.958693   70687 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:14.960494   70687 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:31:14.961899   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:31:14.973709   70687 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0401 19:31:14.998105   70687 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:31:15.008481   70687 system_pods.go:59] 8 kube-system pods found
	I0401 19:31:15.008525   70687 system_pods.go:61] "coredns-76f75df574-nvcq4" [663bd69b-6da8-4a66-b20f-ea1eb507096a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:31:15.008536   70687 system_pods.go:61] "etcd-embed-certs-882095" [2b56dddc-b309-4965-811e-459c59b86dac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0401 19:31:15.008551   70687 system_pods.go:61] "kube-apiserver-embed-certs-882095" [2e376ce4-504c-441a-baf8-0184a17e5bf4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0401 19:31:15.008561   70687 system_pods.go:61] "kube-controller-manager-embed-certs-882095" [e6bf3b2f-289b-4719-86f7-43e873fe8d85] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0401 19:31:15.008571   70687 system_pods.go:61] "kube-proxy-td6jk" [275536ff-4ec0-4d2c-8658-57aadda367b2] Running
	I0401 19:31:15.008580   70687 system_pods.go:61] "kube-scheduler-embed-certs-882095" [4551eb2a-9560-4d4f-aac0-9cfe6c790649] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0401 19:31:15.008591   70687 system_pods.go:61] "metrics-server-57f55c9bc5-g6z6c" [dc8aee6a-f101-4109-a259-351fddbddd44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:31:15.008599   70687 system_pods.go:61] "storage-provisioner" [82a76833-c874-45d8-8ba7-1a483c15a997] Running
	I0401 19:31:15.008609   70687 system_pods.go:74] duration metric: took 10.480741ms to wait for pod list to return data ...
	I0401 19:31:15.008622   70687 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:31:15.012256   70687 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:31:15.012289   70687 node_conditions.go:123] node cpu capacity is 2
	I0401 19:31:15.012303   70687 node_conditions.go:105] duration metric: took 3.672159ms to run NodePressure ...
	I0401 19:31:15.012327   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:15.288861   70687 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0401 19:31:15.293731   70687 kubeadm.go:733] kubelet initialised
	I0401 19:31:15.293750   70687 kubeadm.go:734] duration metric: took 4.868595ms waiting for restarted kubelet to initialise ...
	I0401 19:31:15.293758   70687 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:31:15.298657   70687 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-nvcq4" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.304795   70687 pod_ready.go:97] node "embed-certs-882095" hosting pod "coredns-76f75df574-nvcq4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.304813   70687 pod_ready.go:81] duration metric: took 6.134849ms for pod "coredns-76f75df574-nvcq4" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:15.304822   70687 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-882095" hosting pod "coredns-76f75df574-nvcq4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.304827   70687 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.309184   70687 pod_ready.go:97] node "embed-certs-882095" hosting pod "etcd-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.309204   70687 pod_ready.go:81] duration metric: took 4.369325ms for pod "etcd-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:15.309213   70687 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-882095" hosting pod "etcd-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.309221   70687 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.313737   70687 pod_ready.go:97] node "embed-certs-882095" hosting pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.313755   70687 pod_ready.go:81] duration metric: took 4.525801ms for pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:15.313764   70687 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-882095" hosting pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.313771   70687 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.401827   70687 pod_ready.go:97] node "embed-certs-882095" hosting pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.401857   70687 pod_ready.go:81] duration metric: took 88.077915ms for pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:15.401871   70687 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-882095" hosting pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.401878   70687 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-td6jk" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.802462   70687 pod_ready.go:92] pod "kube-proxy-td6jk" in "kube-system" namespace has status "Ready":"True"
	I0401 19:31:15.802484   70687 pod_ready.go:81] duration metric: took 400.599194ms for pod "kube-proxy-td6jk" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.802494   70687 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.142653   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:15.143000   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:15.143062   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:15.142972   71724 retry.go:31] will retry after 3.764823961s: waiting for machine to come up
	I0401 19:31:20.350903   71168 start.go:364] duration metric: took 3m27.278785625s to acquireMachinesLock for "old-k8s-version-163608"
	I0401 19:31:20.350993   71168 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:31:20.351010   71168 fix.go:54] fixHost starting: 
	I0401 19:31:20.351490   71168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:31:20.351571   71168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:31:20.368575   71168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38247
	I0401 19:31:20.368936   71168 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:31:20.369448   71168 main.go:141] libmachine: Using API Version  1
	I0401 19:31:20.369469   71168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:31:20.369822   71168 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:31:20.370033   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:20.370195   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetState
	I0401 19:31:20.371625   71168 fix.go:112] recreateIfNeeded on old-k8s-version-163608: state=Stopped err=<nil>
	I0401 19:31:20.371681   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	W0401 19:31:20.371842   71168 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:31:20.374328   71168 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-163608" ...
	I0401 19:31:17.809256   70687 pod_ready.go:102] pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:19.809947   70687 pod_ready.go:102] pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:20.818455   70687 pod_ready.go:92] pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:31:20.818481   70687 pod_ready.go:81] duration metric: took 5.015979611s for pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:20.818493   70687 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace to be "Ready" ...
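Each pod_ready.go wait above repeatedly reads a system-critical pod from the API and checks its Ready condition, skipping pods whose node is not yet Ready. A hedged client-go sketch of the basic Ready check follows; the kubeconfig path, poll interval, and function names are assumptions for illustration, not the code minikube runs:

// Wait for a pod's Ready condition via client-go, as the log above does for
// each system-critical pod (sketch only; paths and names are placeholders).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func waitForPodReady(clientset *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForPodReady(clientset, "kube-system", "kube-scheduler-embed-certs-882095", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}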
	I0401 19:31:18.910798   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:18.911231   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Found IP for machine: 192.168.61.145
	I0401 19:31:18.911266   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has current primary IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:18.911277   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Reserving static IP address...
	I0401 19:31:18.911761   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-734648", mac: "52:54:00:49:dc:50", ip: "192.168.61.145"} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:18.911795   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | skip adding static IP to network mk-default-k8s-diff-port-734648 - found existing host DHCP lease matching {name: "default-k8s-diff-port-734648", mac: "52:54:00:49:dc:50", ip: "192.168.61.145"}
	I0401 19:31:18.911819   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Reserved static IP address: 192.168.61.145
	I0401 19:31:18.911835   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for SSH to be available...
	I0401 19:31:18.911869   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Getting to WaitForSSH function...
	I0401 19:31:18.913767   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:18.914054   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:18.914082   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:18.914207   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Using SSH client type: external
	I0401 19:31:18.914236   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa (-rw-------)
	I0401 19:31:18.914278   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.145 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:31:18.914300   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | About to run SSH command:
	I0401 19:31:18.914313   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | exit 0
	I0401 19:31:19.037713   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | SSH cmd err, output: <nil>: 
	I0401 19:31:19.038080   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetConfigRaw
	I0401 19:31:19.038767   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetIP
	I0401 19:31:19.042390   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.043249   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.043311   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.043949   70962 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/config.json ...
	I0401 19:31:19.044504   70962 machine.go:94] provisionDockerMachine start ...
	I0401 19:31:19.044554   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:19.044916   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.047637   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.047908   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.047941   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.048088   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.048265   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.048408   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.048522   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.048636   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:19.048790   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:19.048800   70962 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:31:19.154415   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:31:19.154444   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetMachineName
	I0401 19:31:19.154683   70962 buildroot.go:166] provisioning hostname "default-k8s-diff-port-734648"
	I0401 19:31:19.154713   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetMachineName
	I0401 19:31:19.154887   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.157442   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.157867   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.157896   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.158041   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.158237   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.158402   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.158540   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.158713   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:19.158905   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:19.158920   70962 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-734648 && echo "default-k8s-diff-port-734648" | sudo tee /etc/hostname
	I0401 19:31:19.276129   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-734648
	
	I0401 19:31:19.276160   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.278657   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.278918   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.278940   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.279158   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.279353   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.279523   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.279671   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.279831   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:19.280057   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:19.280082   70962 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-734648' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-734648/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-734648' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:31:19.395730   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
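The provisioning steps above run each shell snippet (hostname, /etc/hosts fixup, certificate setup) over SSH against the VM. A small sketch of running one such command with golang.org/x/crypto/ssh follows; the address, user, and key path come from the log, while the helper itself is illustrative rather than libmachine's implementation:

// Run a single shell command on the VM over SSH (sketch only).
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	config := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the log disables strict host key checking too
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, config)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.61.145:22", "docker",
		"/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa",
		"hostname")
	fmt.Println(out, err)
}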
	I0401 19:31:19.395755   70962 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:31:19.395779   70962 buildroot.go:174] setting up certificates
	I0401 19:31:19.395788   70962 provision.go:84] configureAuth start
	I0401 19:31:19.395798   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetMachineName
	I0401 19:31:19.396046   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetIP
	I0401 19:31:19.398668   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.399036   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.399065   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.399219   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.401309   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.401611   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.401656   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.401750   70962 provision.go:143] copyHostCerts
	I0401 19:31:19.401812   70962 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:31:19.401822   70962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:31:19.401876   70962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:31:19.401978   70962 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:31:19.401988   70962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:31:19.402015   70962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:31:19.402121   70962 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:31:19.402129   70962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:31:19.402147   70962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:31:19.402205   70962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-734648 san=[127.0.0.1 192.168.61.145 default-k8s-diff-port-734648 localhost minikube]
	I0401 19:31:19.655203   70962 provision.go:177] copyRemoteCerts
	I0401 19:31:19.655256   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:31:19.655281   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.658194   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.658512   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.658540   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.658693   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.658896   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.659039   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.659187   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:31:19.743131   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:31:19.771327   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0401 19:31:19.797350   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 19:31:19.824244   70962 provision.go:87] duration metric: took 428.444366ms to configureAuth
	I0401 19:31:19.824274   70962 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:31:19.824473   70962 config.go:182] Loaded profile config "default-k8s-diff-port-734648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:31:19.824563   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.827376   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.827798   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.827838   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.827984   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.828184   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.828352   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.828496   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.828653   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:19.828827   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:19.828865   70962 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:31:20.107291   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:31:20.107320   70962 machine.go:97] duration metric: took 1.062788118s to provisionDockerMachine
	I0401 19:31:20.107333   70962 start.go:293] postStartSetup for "default-k8s-diff-port-734648" (driver="kvm2")
	I0401 19:31:20.107347   70962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:31:20.107369   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.107671   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:31:20.107693   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:20.110380   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.110739   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.110780   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.110895   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:20.111075   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.111218   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:20.111353   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:31:20.193908   70962 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:31:20.198544   70962 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:31:20.198572   70962 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:31:20.198639   70962 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:31:20.198704   70962 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:31:20.198788   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:31:20.209866   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:20.240362   70962 start.go:296] duration metric: took 133.016405ms for postStartSetup
	I0401 19:31:20.240399   70962 fix.go:56] duration metric: took 19.789546756s for fixHost
	I0401 19:31:20.240418   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:20.243069   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.243448   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.243479   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.243657   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:20.243865   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.244061   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.244209   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:20.244399   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:20.244600   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:20.244616   70962 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 19:31:20.350752   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999880.326440079
	
	I0401 19:31:20.350779   70962 fix.go:216] guest clock: 1711999880.326440079
	I0401 19:31:20.350789   70962 fix.go:229] Guest: 2024-04-01 19:31:20.326440079 +0000 UTC Remote: 2024-04-01 19:31:20.240403038 +0000 UTC m=+222.858311555 (delta=86.037041ms)
	I0401 19:31:20.350808   70962 fix.go:200] guest clock delta is within tolerance: 86.037041ms
	I0401 19:31:20.350812   70962 start.go:83] releasing machines lock for "default-k8s-diff-port-734648", held for 19.899997669s
	I0401 19:31:20.350838   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.351118   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetIP
	I0401 19:31:20.354040   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.354395   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.354413   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.354595   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.355068   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.355238   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.355317   70962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:31:20.355356   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:20.355530   70962 ssh_runner.go:195] Run: cat /version.json
	I0401 19:31:20.355557   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:20.357970   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.358372   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.358405   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.358430   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.358585   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:20.358766   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.358807   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.358834   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.358957   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:20.359013   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:20.359150   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.359203   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:31:20.359292   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:20.359439   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:31:20.466422   70962 ssh_runner.go:195] Run: systemctl --version
	I0401 19:31:20.472949   70962 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:31:20.626069   70962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:31:20.633425   70962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:31:20.633497   70962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:31:20.658883   70962 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:31:20.658910   70962 start.go:494] detecting cgroup driver to use...
	I0401 19:31:20.658979   70962 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:31:20.686302   70962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:31:20.704507   70962 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:31:20.704583   70962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:31:20.725216   70962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:31:20.740635   70962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:31:20.864184   70962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:31:21.010752   70962 docker.go:233] disabling docker service ...
	I0401 19:31:21.010821   70962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:31:21.030718   70962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:31:21.047787   70962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:31:21.194455   70962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:31:21.337547   70962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
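Before switching to CRI-O, the log above stops and masks the cri-docker and docker units so only one runtime owns the node. A rough equivalent of that step, using os/exec in place of minikube's ssh_runner, is sketched below; the unit names are taken from the log, and the helper is illustrative:

// Stop, disable, and mask the Docker units (sketch of the step above).
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command("sudo", args...)
	if out, err := cmd.CombinedOutput(); err != nil {
		// Several of these steps are best-effort (the unit may not exist), so just report.
		fmt.Printf("%v: %v (%s)\n", args, err, out)
	}
}

func main() {
	run("systemctl", "stop", "-f", "docker.socket")
	run("systemctl", "stop", "-f", "docker.service")
	run("systemctl", "disable", "docker.socket")
	run("systemctl", "mask", "docker.service")
}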
	I0401 19:31:21.357144   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:31:21.381709   70962 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:31:21.381782   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.393160   70962 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:31:21.393229   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.405047   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.416810   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.428947   70962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:31:21.440886   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.452872   70962 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.473096   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.484427   70962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:31:21.494121   70962 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:31:21.494190   70962 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:31:21.509859   70962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:31:21.520329   70962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:21.671075   70962 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:31:21.818822   70962 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:31:21.818892   70962 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
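After restarting crio, the log waits up to 60s for /var/run/crio/crio.sock to appear before querying crictl. A simple sketch of that wait (path and timeout taken from the log, helper name made up):

// Poll until the CRI socket path exists or the timeout expires (sketch only).
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file is present
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}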
	I0401 19:31:21.825189   70962 start.go:562] Will wait 60s for crictl version
	I0401 19:31:21.825260   70962 ssh_runner.go:195] Run: which crictl
	I0401 19:31:21.830058   70962 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:31:21.869617   70962 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:31:21.869721   70962 ssh_runner.go:195] Run: crio --version
	I0401 19:31:21.906091   70962 ssh_runner.go:195] Run: crio --version
	I0401 19:31:21.946240   70962 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 19:31:21.947653   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetIP
	I0401 19:31:21.950691   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:21.951156   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:21.951201   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:21.951445   70962 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0401 19:31:21.959376   70962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:21.974226   70962 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-734648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.29.3 ClusterName:default-k8s-diff-port-734648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.145 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:31:21.974348   70962 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 19:31:21.974426   70962 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:22.011856   70962 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0401 19:31:22.011930   70962 ssh_runner.go:195] Run: which lz4
	I0401 19:31:22.016672   70962 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 19:31:22.021864   70962 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:31:22.021893   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0401 19:31:20.375755   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .Start
	I0401 19:31:20.375932   71168 main.go:141] libmachine: (old-k8s-version-163608) Ensuring networks are active...
	I0401 19:31:20.376713   71168 main.go:141] libmachine: (old-k8s-version-163608) Ensuring network default is active
	I0401 19:31:20.377858   71168 main.go:141] libmachine: (old-k8s-version-163608) Ensuring network mk-old-k8s-version-163608 is active
	I0401 19:31:20.378278   71168 main.go:141] libmachine: (old-k8s-version-163608) Getting domain xml...
	I0401 19:31:20.378972   71168 main.go:141] libmachine: (old-k8s-version-163608) Creating domain...
	I0401 19:31:21.643237   71168 main.go:141] libmachine: (old-k8s-version-163608) Waiting to get IP...
	I0401 19:31:21.644082   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:21.644468   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:21.644535   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:21.644446   71902 retry.go:31] will retry after 208.251344ms: waiting for machine to come up
	I0401 19:31:21.854070   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:21.854545   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:21.854593   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:21.854527   71902 retry.go:31] will retry after 240.466964ms: waiting for machine to come up
	I0401 19:31:22.096940   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:22.097447   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:22.097470   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:22.097405   71902 retry.go:31] will retry after 480.217755ms: waiting for machine to come up
	I0401 19:31:22.579111   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:22.579596   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:22.579628   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:22.579518   71902 retry.go:31] will retry after 581.713487ms: waiting for machine to come up
	I0401 19:31:22.826723   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:25.326165   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:23.813558   70962 crio.go:462] duration metric: took 1.796902191s to copy over tarball
	I0401 19:31:23.813619   70962 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 19:31:26.447802   70962 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.634145928s)
	I0401 19:31:26.447840   70962 crio.go:469] duration metric: took 2.634257029s to extract the tarball
	I0401 19:31:26.447849   70962 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 19:31:26.488228   70962 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:26.535741   70962 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:31:26.535770   70962 cache_images.go:84] Images are preloaded, skipping loading
	I0401 19:31:26.535780   70962 kubeadm.go:928] updating node { 192.168.61.145 8444 v1.29.3 crio true true} ...
	I0401 19:31:26.535931   70962 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-734648 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-734648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:31:26.536019   70962 ssh_runner.go:195] Run: crio config
	I0401 19:31:26.590211   70962 cni.go:84] Creating CNI manager for ""
	I0401 19:31:26.590239   70962 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:26.590254   70962 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:31:26.590282   70962 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.145 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-734648 NodeName:default-k8s-diff-port-734648 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:31:26.590459   70962 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.145
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-734648"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:31:26.590533   70962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 19:31:26.602186   70962 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:31:26.602264   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:31:26.616193   70962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0401 19:31:26.636634   70962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:31:26.660339   70962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0401 19:31:26.687935   70962 ssh_runner.go:195] Run: grep 192.168.61.145	control-plane.minikube.internal$ /etc/hosts
	I0401 19:31:26.693966   70962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.145	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:26.709876   70962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:26.854990   70962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:31:26.877303   70962 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648 for IP: 192.168.61.145
	I0401 19:31:26.877327   70962 certs.go:194] generating shared ca certs ...
	I0401 19:31:26.877350   70962 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:26.877578   70962 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:31:26.877621   70962 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:31:26.877637   70962 certs.go:256] generating profile certs ...
	I0401 19:31:26.877777   70962 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/client.key
	I0401 19:31:26.877864   70962 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/apiserver.key.e4671486
	I0401 19:31:26.877909   70962 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/proxy-client.key
	I0401 19:31:26.878007   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:31:26.878049   70962 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:31:26.878062   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:31:26.878094   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:31:26.878128   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:31:26.878153   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:31:26.878203   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:26.879101   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:31:26.917600   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:31:26.968606   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:31:27.012527   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:31:27.078525   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 19:31:27.125195   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:31:27.157190   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:31:27.185434   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:31:27.215215   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:31:27.246938   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:31:27.277210   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:31:27.307099   70962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:31:27.326664   70962 ssh_runner.go:195] Run: openssl version
	I0401 19:31:27.333292   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:31:27.344724   70962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:27.350096   70962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:27.350146   70962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:27.356421   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:31:27.368124   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:31:27.379331   70962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:31:27.384465   70962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:31:27.384518   70962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:31:27.391192   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:31:27.403898   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:31:27.418676   70962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:31:27.424254   70962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:31:27.424308   70962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:31:23.163331   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:23.163803   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:23.163838   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:23.163770   71902 retry.go:31] will retry after 737.12898ms: waiting for machine to come up
	I0401 19:31:23.902739   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:23.903192   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:23.903222   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:23.903139   71902 retry.go:31] will retry after 718.826495ms: waiting for machine to come up
	I0401 19:31:24.624169   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:24.624620   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:24.624648   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:24.624574   71902 retry.go:31] will retry after 1.020701715s: waiting for machine to come up
	I0401 19:31:25.647470   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:25.647957   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:25.647988   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:25.647921   71902 retry.go:31] will retry after 1.318891306s: waiting for machine to come up
	I0401 19:31:26.968134   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:26.968588   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:26.968613   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:26.968535   71902 retry.go:31] will retry after 1.465864517s: waiting for machine to come up
	I0401 19:31:27.752110   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:29.827324   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:27.431798   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
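The openssl/ln pairs above install each CA the way update-ca-certificates would: link the PEM under /etc/ssl/certs and add a <subject-hash>.0 symlink so OpenSSL's hashed-directory lookup can find it. A sketch of the hash-symlink step, shelling out to openssl for the subject hash (assumes openssl on PATH and write access to /etc/ssl/certs; not minikube's own code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash creates /etc/ssl/certs/<subject-hash>.0 -> certPath so that
// OpenSSL's hashed-directory lookup finds the CA, as in the logged ln -fs.
func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}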
	I0401 19:31:27.749367   70962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:31:27.757123   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:31:27.768626   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:31:27.778119   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:31:27.786893   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:31:27.797129   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:31:27.804804   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
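openssl x509 -checkend 86400 asks whether a certificate will still be valid 24 hours from now; a non-zero exit would force regeneration before the restart continues. The same check in pure Go against a PEM file (stdlib only; a sketch of the idea, not minikube's helper):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// will be expired d from now -- the same question -checkend answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}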
	I0401 19:31:27.813194   70962 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-734648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.3 ClusterName:default-k8s-diff-port-734648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.145 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:31:27.813274   70962 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:31:27.813325   70962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:27.864565   70962 cri.go:89] found id: ""
	I0401 19:31:27.864637   70962 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:31:27.876745   70962 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:31:27.876789   70962 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:31:27.876797   70962 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:31:27.876862   70962 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:31:27.887494   70962 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:31:27.888632   70962 kubeconfig.go:125] found "default-k8s-diff-port-734648" server: "https://192.168.61.145:8444"
	I0401 19:31:27.890729   70962 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:31:27.900847   70962 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.145
	I0401 19:31:27.900877   70962 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:31:27.900889   70962 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:31:27.900936   70962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:27.952874   70962 cri.go:89] found id: ""
	I0401 19:31:27.952954   70962 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:31:27.971647   70962 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:31:27.982541   70962 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:31:27.982576   70962 kubeadm.go:156] found existing configuration files:
	
	I0401 19:31:27.982612   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0401 19:31:27.992341   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:31:27.992414   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:31:28.002685   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0401 19:31:28.012599   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:31:28.012658   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:31:28.022731   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0401 19:31:28.033584   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:31:28.033661   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:31:28.044940   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0401 19:31:28.055832   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:31:28.055886   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:31:28.066919   70962 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:31:28.078715   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:28.212251   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:29.214190   70962 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.001904972s)
	I0401 19:31:29.214224   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:29.444484   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:29.536112   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:29.664087   70962 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:31:29.664201   70962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:30.165117   70962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:30.664872   70962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:30.707251   70962 api_server.go:72] duration metric: took 1.04316448s to wait for apiserver process to appear ...
	I0401 19:31:30.707280   70962 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:31:30.707297   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:30.707881   70962 api_server.go:269] stopped: https://192.168.61.145:8444/healthz: Get "https://192.168.61.145:8444/healthz": dial tcp 192.168.61.145:8444: connect: connection refused
	I0401 19:31:31.207434   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:28.435890   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:28.436304   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:28.436334   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:28.436255   71902 retry.go:31] will retry after 2.062597688s: waiting for machine to come up
	I0401 19:31:30.500523   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:30.500999   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:30.501027   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:30.500954   71902 retry.go:31] will retry after 2.068480339s: waiting for machine to come up
	I0401 19:31:32.571229   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:32.571603   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:32.571635   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:32.571550   71902 retry.go:31] will retry after 3.355965883s: waiting for machine to come up
	I0401 19:31:33.707613   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:31:33.707647   70962 api_server.go:103] status: https://192.168.61.145:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:31:33.707663   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:33.728509   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:31:33.728582   70962 api_server.go:103] status: https://192.168.61.145:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:31:34.208163   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:34.212754   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:34.212784   70962 api_server.go:103] status: https://192.168.61.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:34.708282   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:34.715268   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:34.715294   70962 api_server.go:103] status: https://192.168.61.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:35.207460   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:35.212542   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 200:
	ok
	I0401 19:31:35.219264   70962 api_server.go:141] control plane version: v1.29.3
	I0401 19:31:35.219287   70962 api_server.go:131] duration metric: took 4.512000334s to wait for apiserver health ...
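The healthz probe above tolerates the transient connection-refused, 403 and 500 responses and simply keeps polling until the endpoint answers 200. A minimal version of that loop (TLS verification is skipped only because this sketch does not load the cluster CA; host and port are taken from the log):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 or the deadline passes, tolerating connection-refused, 403 and 500.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.145:8444/healthz", 2*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver healthy")
}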
	I0401 19:31:35.219294   70962 cni.go:84] Creating CNI manager for ""
	I0401 19:31:35.219309   70962 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:35.221080   70962 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:31:31.828694   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:34.325740   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:35.222800   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:31:35.238787   70962 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
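The 457-byte file written to /etc/cni/net.d/1-k8s.conflist configures the bridge CNI for the 10.244.0.0/16 pod CIDR chosen earlier. The log does not show its contents; the snippet below writes a generic bridge + host-local conflist of the same shape, purely as an illustration of what such a file looks like (not necessarily the exact file minikube generates):

package main

import (
	"fmt"
	"os"
)

// A generic bridge CNI conflist for the 10.244.0.0/16 pod CIDR; the exact
// contents minikube writes may differ (this is an illustrative stand-in).
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}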
	I0401 19:31:35.286002   70962 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:31:35.302379   70962 system_pods.go:59] 8 kube-system pods found
	I0401 19:31:35.302420   70962 system_pods.go:61] "coredns-76f75df574-tdwrh" [c1d3b591-fa81-46dd-847c-ffdfc22937fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:31:35.302437   70962 system_pods.go:61] "etcd-default-k8s-diff-port-734648" [e977793d-ec92-40b8-a0fe-1b2400fb1af6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0401 19:31:35.302447   70962 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-734648" [2d0eae31-35c3-40aa-9d28-a2f51849c15d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0401 19:31:35.302469   70962 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-734648" [cded1171-2e1b-4d70-9f26-d1d3a6558da1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0401 19:31:35.302483   70962 system_pods.go:61] "kube-proxy-mn546" [f9b6366f-7095-418c-ba24-529c0555f438] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:31:35.302493   70962 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-734648" [c1518ece-8cbf-49fe-9091-15b38dc1bd62] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0401 19:31:35.302504   70962 system_pods.go:61] "metrics-server-57f55c9bc5-g7mg2" [d1ede79a-a7e6-42bd-a799-197ffc7c7939] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:31:35.302519   70962 system_pods.go:61] "storage-provisioner" [bd55f9c8-580c-4eb1-adbc-020d5bbedce9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:31:35.302532   70962 system_pods.go:74] duration metric: took 16.508651ms to wait for pod list to return data ...
	I0401 19:31:35.302545   70962 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:31:35.305826   70962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:31:35.305862   70962 node_conditions.go:123] node cpu capacity is 2
	I0401 19:31:35.305876   70962 node_conditions.go:105] duration metric: took 3.322577ms to run NodePressure ...
	I0401 19:31:35.305895   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:35.603225   70962 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0401 19:31:35.608584   70962 kubeadm.go:733] kubelet initialised
	I0401 19:31:35.608611   70962 kubeadm.go:734] duration metric: took 5.361549ms waiting for restarted kubelet to initialise ...
	I0401 19:31:35.608620   70962 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:31:35.615252   70962 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-tdwrh" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.620605   70962 pod_ready.go:97] node "default-k8s-diff-port-734648" hosting pod "coredns-76f75df574-tdwrh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.620627   70962 pod_ready.go:81] duration metric: took 5.353257ms for pod "coredns-76f75df574-tdwrh" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:35.620634   70962 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-734648" hosting pod "coredns-76f75df574-tdwrh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.620641   70962 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.625280   70962 pod_ready.go:97] node "default-k8s-diff-port-734648" hosting pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.625297   70962 pod_ready.go:81] duration metric: took 4.646748ms for pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:35.625311   70962 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-734648" hosting pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.625325   70962 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.630150   70962 pod_ready.go:97] node "default-k8s-diff-port-734648" hosting pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.630170   70962 pod_ready.go:81] duration metric: took 4.83409ms for pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:35.630178   70962 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-734648" hosting pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.630184   70962 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.693865   70962 pod_ready.go:97] node "default-k8s-diff-port-734648" hosting pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.693890   70962 pod_ready.go:81] duration metric: took 63.697397ms for pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:35.693901   70962 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-734648" hosting pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.693908   70962 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mn546" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:36.090904   70962 pod_ready.go:92] pod "kube-proxy-mn546" in "kube-system" namespace has status "Ready":"True"
	I0401 19:31:36.090928   70962 pod_ready.go:81] duration metric: took 397.013717ms for pod "kube-proxy-mn546" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:36.090938   70962 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
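Each pod_ready.go line polls one system pod's Ready condition through the API server until it flips to True or the 4m0s budget runs out. An equivalent stand-alone check with client-go (kubeconfig path and pod name are placeholders taken from the environment and from the log above):

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Poll a single kube-system pod until it is Ready, like pod_ready.go does.
	for i := 0; i < 240; i++ {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-mn546", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for pod to be Ready")
	os.Exit(1)
}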
	I0401 19:31:35.929498   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:35.930010   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:35.930042   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:35.929963   71902 retry.go:31] will retry after 3.806123644s: waiting for machine to come up
	I0401 19:31:41.203538   70284 start.go:364] duration metric: took 56.718693538s to acquireMachinesLock for "no-preload-472858"
	I0401 19:31:41.203592   70284 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:31:41.203607   70284 fix.go:54] fixHost starting: 
	I0401 19:31:41.204096   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:31:41.204143   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:31:41.221574   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42471
	I0401 19:31:41.222045   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:31:41.222527   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:31:41.222547   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:31:41.222856   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:31:41.223051   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:31:41.223209   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:31:41.224801   70284 fix.go:112] recreateIfNeeded on no-preload-472858: state=Stopped err=<nil>
	I0401 19:31:41.224827   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	W0401 19:31:41.224979   70284 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:31:41.226937   70284 out.go:177] * Restarting existing kvm2 VM for "no-preload-472858" ...
	I0401 19:31:36.824790   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:38.824976   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:40.827269   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:41.228315   70284 main.go:141] libmachine: (no-preload-472858) Calling .Start
	I0401 19:31:41.228509   70284 main.go:141] libmachine: (no-preload-472858) Ensuring networks are active...
	I0401 19:31:41.229206   70284 main.go:141] libmachine: (no-preload-472858) Ensuring network default is active
	I0401 19:31:41.229603   70284 main.go:141] libmachine: (no-preload-472858) Ensuring network mk-no-preload-472858 is active
	I0401 19:31:41.229999   70284 main.go:141] libmachine: (no-preload-472858) Getting domain xml...
	I0401 19:31:41.230682   70284 main.go:141] libmachine: (no-preload-472858) Creating domain...
	I0401 19:31:38.097417   70962 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:40.098187   70962 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:42.099891   70962 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:39.739700   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.740313   71168 main.go:141] libmachine: (old-k8s-version-163608) Found IP for machine: 192.168.50.106
	I0401 19:31:39.740369   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has current primary IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.740386   71168 main.go:141] libmachine: (old-k8s-version-163608) Reserving static IP address...
	I0401 19:31:39.740767   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "old-k8s-version-163608", mac: "52:54:00:fe:1b:e7", ip: "192.168.50.106"} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.740798   71168 main.go:141] libmachine: (old-k8s-version-163608) Reserved static IP address: 192.168.50.106
	I0401 19:31:39.740818   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | skip adding static IP to network mk-old-k8s-version-163608 - found existing host DHCP lease matching {name: "old-k8s-version-163608", mac: "52:54:00:fe:1b:e7", ip: "192.168.50.106"}
	I0401 19:31:39.740839   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | Getting to WaitForSSH function...
	I0401 19:31:39.740857   71168 main.go:141] libmachine: (old-k8s-version-163608) Waiting for SSH to be available...
	I0401 19:31:39.743023   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.743417   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.743447   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.743589   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | Using SSH client type: external
	I0401 19:31:39.743614   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa (-rw-------)
	I0401 19:31:39.743648   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:31:39.743662   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | About to run SSH command:
	I0401 19:31:39.743676   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | exit 0
	I0401 19:31:39.877699   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | SSH cmd err, output: <nil>: 
	I0401 19:31:39.878044   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetConfigRaw
	I0401 19:31:39.878611   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:39.880733   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.881074   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.881107   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.881352   71168 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/config.json ...
	I0401 19:31:39.881510   71168 machine.go:94] provisionDockerMachine start ...
	I0401 19:31:39.881529   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:39.881766   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:39.883980   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.884318   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.884360   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.884483   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:39.884675   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.884877   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.885029   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:39.885175   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:39.885339   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:39.885349   71168 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:31:39.994935   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:31:39.994971   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:31:39.995213   71168 buildroot.go:166] provisioning hostname "old-k8s-version-163608"
	I0401 19:31:39.995241   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:31:39.995472   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:39.998179   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.998490   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.998525   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.998656   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:39.998805   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.998949   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.999054   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:39.999183   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:39.999372   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:39.999390   71168 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-163608 && echo "old-k8s-version-163608" | sudo tee /etc/hostname
	I0401 19:31:40.128852   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-163608
	
	I0401 19:31:40.128880   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.131508   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.131817   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.131874   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.131987   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.132188   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.132365   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.132503   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.132693   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:40.132890   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:40.132908   71168 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-163608' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-163608/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-163608' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:31:40.252693   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:31:40.252727   71168 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:31:40.252749   71168 buildroot.go:174] setting up certificates
	I0401 19:31:40.252759   71168 provision.go:84] configureAuth start
	I0401 19:31:40.252767   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:31:40.253030   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:40.255827   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.256183   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.256210   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.256418   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.259041   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.259388   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.259418   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.259540   71168 provision.go:143] copyHostCerts
	I0401 19:31:40.259592   71168 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:31:40.259602   71168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:31:40.259654   71168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:31:40.259745   71168 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:31:40.259754   71168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:31:40.259773   71168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:31:40.259822   71168 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:31:40.259830   71168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:31:40.259846   71168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:31:40.259891   71168 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-163608 san=[127.0.0.1 192.168.50.106 localhost minikube old-k8s-version-163608]
	I0401 19:31:40.465177   71168 provision.go:177] copyRemoteCerts
	I0401 19:31:40.465241   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:31:40.465265   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.467676   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.468040   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.468070   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.468272   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.468456   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.468622   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.468767   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:40.557764   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:31:40.585326   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0401 19:31:40.611671   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 19:31:40.639265   71168 provision.go:87] duration metric: took 386.497023ms to configureAuth
	I0401 19:31:40.639296   71168 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:31:40.639521   71168 config.go:182] Loaded profile config "old-k8s-version-163608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 19:31:40.639590   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.642321   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.642733   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.642762   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.642921   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.643122   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.643294   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.643442   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.643647   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:40.643802   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:40.643819   71168 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:31:40.940619   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:31:40.940647   71168 machine.go:97] duration metric: took 1.059122816s to provisionDockerMachine
	I0401 19:31:40.940661   71168 start.go:293] postStartSetup for "old-k8s-version-163608" (driver="kvm2")
	I0401 19:31:40.940672   71168 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:31:40.940687   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:40.940955   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:31:40.940981   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.943787   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.944159   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.944197   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.944347   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.944556   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.944700   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.944834   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:41.035824   71168 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:31:41.040975   71168 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:31:41.041007   71168 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:31:41.041085   71168 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:31:41.041165   71168 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:31:41.041255   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:31:41.052356   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:41.080699   71168 start.go:296] duration metric: took 140.024653ms for postStartSetup
	I0401 19:31:41.080737   71168 fix.go:56] duration metric: took 20.729726297s for fixHost
	I0401 19:31:41.080759   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:41.083664   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.084045   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.084075   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.084202   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:41.084405   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.084599   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.084796   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:41.084971   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:41.085169   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:41.085180   71168 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 19:31:41.203392   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999901.182365994
	
	I0401 19:31:41.203412   71168 fix.go:216] guest clock: 1711999901.182365994
	I0401 19:31:41.203419   71168 fix.go:229] Guest: 2024-04-01 19:31:41.182365994 +0000 UTC Remote: 2024-04-01 19:31:41.080741553 +0000 UTC m=+228.159955492 (delta=101.624441ms)
	I0401 19:31:41.203437   71168 fix.go:200] guest clock delta is within tolerance: 101.624441ms
	I0401 19:31:41.203442   71168 start.go:83] releasing machines lock for "old-k8s-version-163608", held for 20.852486097s
	I0401 19:31:41.203462   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.203744   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:41.206582   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.206952   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.206973   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.207151   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.207701   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.207891   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.207954   71168 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:31:41.207996   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:41.208096   71168 ssh_runner.go:195] Run: cat /version.json
	I0401 19:31:41.208127   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:41.210731   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.210928   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.211107   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.211132   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.211317   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:41.211446   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.211488   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.211491   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.211636   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:41.211692   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:41.211783   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:41.211891   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.212031   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:41.212187   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:41.296330   71168 ssh_runner.go:195] Run: systemctl --version
	I0401 19:31:41.326247   71168 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:31:41.479411   71168 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:31:41.486996   71168 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:31:41.487063   71168 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:31:41.507840   71168 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:31:41.507870   71168 start.go:494] detecting cgroup driver to use...
	I0401 19:31:41.507942   71168 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:31:41.533063   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:31:41.551699   71168 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:31:41.551754   71168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:31:41.568078   71168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:31:41.584278   71168 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:31:41.726884   71168 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:31:41.882514   71168 docker.go:233] disabling docker service ...
	I0401 19:31:41.882587   71168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:31:41.901235   71168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:31:41.919787   71168 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:31:42.082420   71168 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:31:42.248527   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:31:42.266610   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:31:42.295677   71168 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0401 19:31:42.295740   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.313855   71168 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:31:42.313920   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.327176   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.339527   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.351220   71168 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:31:42.363716   71168 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:31:42.379911   71168 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:31:42.379971   71168 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:31:42.395282   71168 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:31:42.407713   71168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:42.579648   71168 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:31:42.764748   71168 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:31:42.764858   71168 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:31:42.771038   71168 start.go:562] Will wait 60s for crictl version
	I0401 19:31:42.771125   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:42.775871   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:31:42.823135   71168 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:31:42.823218   71168 ssh_runner.go:195] Run: crio --version
	I0401 19:31:42.863748   71168 ssh_runner.go:195] Run: crio --version
	I0401 19:31:42.900263   71168 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0401 19:31:42.901631   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:42.904464   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:42.904773   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:42.904812   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:42.905048   71168 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0401 19:31:42.910117   71168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:42.925313   71168 kubeadm.go:877] updating cluster {Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:31:42.925475   71168 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 19:31:42.925542   71168 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:42.828772   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:44.829527   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:42.553437   70284 main.go:141] libmachine: (no-preload-472858) Waiting to get IP...
	I0401 19:31:42.554422   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:42.554810   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:42.554907   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:42.554806   72041 retry.go:31] will retry after 237.823736ms: waiting for machine to come up
	I0401 19:31:42.794546   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:42.795159   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:42.795205   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:42.795117   72041 retry.go:31] will retry after 326.387674ms: waiting for machine to come up
	I0401 19:31:43.123632   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:43.124306   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:43.124342   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:43.124244   72041 retry.go:31] will retry after 455.262949ms: waiting for machine to come up
	I0401 19:31:43.580752   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:43.581420   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:43.581440   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:43.581375   72041 retry.go:31] will retry after 520.307316ms: waiting for machine to come up
	I0401 19:31:44.103924   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:44.104407   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:44.104431   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:44.104361   72041 retry.go:31] will retry after 491.638031ms: waiting for machine to come up
	I0401 19:31:44.598440   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:44.598990   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:44.599015   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:44.598901   72041 retry.go:31] will retry after 652.234963ms: waiting for machine to come up
	I0401 19:31:45.252362   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:45.252901   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:45.252933   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:45.252853   72041 retry.go:31] will retry after 1.047335678s: waiting for machine to come up
	I0401 19:31:46.301894   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:46.302324   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:46.302349   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:46.302281   72041 retry.go:31] will retry after 1.303326069s: waiting for machine to come up
	I0401 19:31:44.101042   70962 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:46.099803   70962 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:31:46.099828   70962 pod_ready.go:81] duration metric: took 10.008882274s for pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:46.099843   70962 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:42.974220   71168 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 19:31:42.974307   71168 ssh_runner.go:195] Run: which lz4
	I0401 19:31:42.979179   71168 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0401 19:31:42.984204   71168 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:31:42.984236   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0401 19:31:45.108131   71168 crio.go:462] duration metric: took 2.128988098s to copy over tarball
	I0401 19:31:45.108232   71168 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 19:31:47.328534   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:49.827306   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:47.606907   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:47.607392   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:47.607419   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:47.607356   72041 retry.go:31] will retry after 1.729010443s: waiting for machine to come up
	I0401 19:31:49.338200   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:49.338722   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:49.338751   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:49.338667   72041 retry.go:31] will retry after 2.069036941s: waiting for machine to come up
	I0401 19:31:51.409458   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:51.409945   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:51.409976   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:51.409894   72041 retry.go:31] will retry after 2.405834741s: waiting for machine to come up
	I0401 19:31:48.108234   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:50.607720   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:48.581824   71168 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.473552916s)
	I0401 19:31:48.581871   71168 crio.go:469] duration metric: took 3.473700991s to extract the tarball
	I0401 19:31:48.581881   71168 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 19:31:48.630609   71168 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:48.673027   71168 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 19:31:48.673048   71168 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 19:31:48.673085   71168 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:31:48.673129   71168 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:48.673155   71168 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:48.673190   71168 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:48.673133   71168 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.673273   71168 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0401 19:31:48.673143   71168 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0401 19:31:48.673336   71168 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:48.675068   71168 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:31:48.675073   71168 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:48.675068   71168 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:48.675093   71168 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0401 19:31:48.675072   71168 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0401 19:31:48.675073   71168 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:48.675115   71168 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:48.675096   71168 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.827947   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.846025   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:48.848769   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:48.858366   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0401 19:31:48.858613   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0401 19:31:48.859241   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:48.862047   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:48.912299   71168 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0401 19:31:48.912346   71168 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.912399   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.030117   71168 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0401 19:31:49.030357   71168 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:49.030122   71168 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0401 19:31:49.030433   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.030460   71168 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:49.030526   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.062211   71168 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0401 19:31:49.062327   71168 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0401 19:31:49.062234   71168 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0401 19:31:49.062415   71168 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0401 19:31:49.062396   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.062461   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.078249   71168 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0401 19:31:49.078308   71168 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:49.078323   71168 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0401 19:31:49.078358   71168 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:49.078379   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:49.078398   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.078426   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:49.078440   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:49.078362   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.078466   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 19:31:49.078494   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 19:31:49.225060   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:49.225137   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0401 19:31:49.225160   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0401 19:31:49.225199   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0401 19:31:49.225250   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0401 19:31:49.225252   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:49.225326   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0401 19:31:49.280782   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0401 19:31:49.281709   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0401 19:31:49.299218   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:31:49.465497   71168 cache_images.go:92] duration metric: took 792.432136ms to LoadCachedImages
	W0401 19:31:49.465595   71168 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0401 19:31:49.465613   71168 kubeadm.go:928] updating node { 192.168.50.106 8443 v1.20.0 crio true true} ...
	I0401 19:31:49.465768   71168 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-163608 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:31:49.465862   71168 ssh_runner.go:195] Run: crio config
	I0401 19:31:49.529730   71168 cni.go:84] Creating CNI manager for ""
	I0401 19:31:49.529757   71168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:49.529771   71168 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:31:49.529799   71168 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.106 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-163608 NodeName:old-k8s-version-163608 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0401 19:31:49.529969   71168 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-163608"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:31:49.530037   71168 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0401 19:31:49.542642   71168 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:31:49.542724   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:31:49.557001   71168 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0401 19:31:49.579568   71168 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:31:49.599692   71168 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0401 19:31:49.619780   71168 ssh_runner.go:195] Run: grep 192.168.50.106	control-plane.minikube.internal$ /etc/hosts
	I0401 19:31:49.625597   71168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:49.643862   71168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:49.791391   71168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:31:49.814470   71168 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608 for IP: 192.168.50.106
	I0401 19:31:49.814497   71168 certs.go:194] generating shared ca certs ...
	I0401 19:31:49.814516   71168 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:49.814680   71168 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:31:49.814736   71168 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:31:49.814745   71168 certs.go:256] generating profile certs ...
	I0401 19:31:49.814852   71168 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/client.key
	I0401 19:31:49.814916   71168 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.key.f2de0982
	I0401 19:31:49.814964   71168 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.key
	I0401 19:31:49.815119   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:31:49.815178   71168 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:31:49.815195   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:31:49.815224   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:31:49.815266   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:31:49.815299   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:31:49.815362   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:49.816196   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:31:49.866842   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:31:49.913788   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:31:49.953223   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:31:50.004313   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 19:31:50.046972   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:31:50.086990   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:31:50.134907   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 19:31:50.163395   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:31:50.191901   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:31:50.221196   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:31:50.253024   71168 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:31:50.275781   71168 ssh_runner.go:195] Run: openssl version
	I0401 19:31:50.282795   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:31:50.296952   71168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:31:50.303868   71168 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:31:50.303950   71168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:31:50.312249   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:31:50.328985   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:31:50.345917   71168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:50.352041   71168 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:50.352103   71168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:50.358752   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:31:50.371702   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:31:50.384633   71168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:31:50.391229   71168 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:31:50.391277   71168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:31:50.397980   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:31:50.412674   71168 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:31:50.418084   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:31:50.425102   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:31:50.431949   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:31:50.438665   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:31:50.446633   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:31:50.454688   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 19:31:50.462805   71168 kubeadm.go:391] StartCluster: {Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:31:50.462922   71168 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:31:50.462956   71168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:50.505702   71168 cri.go:89] found id: ""
	I0401 19:31:50.505788   71168 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:31:50.517916   71168 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:31:50.517934   71168 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:31:50.517940   71168 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:31:50.517995   71168 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:31:50.529459   71168 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:31:50.530408   71168 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-163608" does not appear in /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:31:50.531055   71168 kubeconfig.go:62] /home/jenkins/minikube-integration/18233-10493/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-163608" cluster setting kubeconfig missing "old-k8s-version-163608" context setting]
	I0401 19:31:50.532369   71168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:50.534578   71168 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:31:50.546275   71168 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.106
	I0401 19:31:50.546309   71168 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:31:50.546328   71168 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:31:50.546371   71168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:50.588826   71168 cri.go:89] found id: ""
	I0401 19:31:50.588881   71168 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:31:50.610933   71168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:31:50.622201   71168 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:31:50.622221   71168 kubeadm.go:156] found existing configuration files:
	
	I0401 19:31:50.622266   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:31:50.634006   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:31:50.634071   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:31:50.647891   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:31:50.662548   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:31:50.662596   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:31:50.674627   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:31:50.686739   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:31:50.686825   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:31:50.700400   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:31:50.712952   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:31:50.713014   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:31:50.725616   71168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:31:50.739130   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:50.874552   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:51.568640   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:51.850288   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:52.009607   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:52.122887   71168 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:31:52.122962   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:52.623084   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:51.827968   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:54.325686   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:56.325892   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:53.817748   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:53.818158   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:53.818184   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:53.818122   72041 retry.go:31] will retry after 2.747390243s: waiting for machine to come up
	I0401 19:31:56.567288   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:56.567711   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:56.567742   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:56.567657   72041 retry.go:31] will retry after 3.904473051s: waiting for machine to come up
	I0401 19:31:53.107786   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:55.108974   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:53.123783   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:53.623248   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:54.124004   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:54.623873   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:55.123458   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:55.623923   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:56.123441   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:56.623192   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:57.123012   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:57.624010   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:58.325934   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:00.825343   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:00.476692   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.477192   70284 main.go:141] libmachine: (no-preload-472858) Found IP for machine: 192.168.72.119
	I0401 19:32:00.477217   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has current primary IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.477223   70284 main.go:141] libmachine: (no-preload-472858) Reserving static IP address...
	I0401 19:32:00.477672   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "no-preload-472858", mac: "52:54:00:0a:2e:03", ip: "192.168.72.119"} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.477708   70284 main.go:141] libmachine: (no-preload-472858) DBG | skip adding static IP to network mk-no-preload-472858 - found existing host DHCP lease matching {name: "no-preload-472858", mac: "52:54:00:0a:2e:03", ip: "192.168.72.119"}
	I0401 19:32:00.477726   70284 main.go:141] libmachine: (no-preload-472858) Reserved static IP address: 192.168.72.119
	I0401 19:32:00.477742   70284 main.go:141] libmachine: (no-preload-472858) Waiting for SSH to be available...
	I0401 19:32:00.477770   70284 main.go:141] libmachine: (no-preload-472858) DBG | Getting to WaitForSSH function...
	I0401 19:32:00.479949   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.480306   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.480334   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.480475   70284 main.go:141] libmachine: (no-preload-472858) DBG | Using SSH client type: external
	I0401 19:32:00.480508   70284 main.go:141] libmachine: (no-preload-472858) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa (-rw-------)
	I0401 19:32:00.480538   70284 main.go:141] libmachine: (no-preload-472858) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:32:00.480554   70284 main.go:141] libmachine: (no-preload-472858) DBG | About to run SSH command:
	I0401 19:32:00.480566   70284 main.go:141] libmachine: (no-preload-472858) DBG | exit 0
	I0401 19:32:00.610108   70284 main.go:141] libmachine: (no-preload-472858) DBG | SSH cmd err, output: <nil>: 
	I0401 19:32:00.610458   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetConfigRaw
	I0401 19:32:00.611059   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetIP
	I0401 19:32:00.613496   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.613872   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.613906   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.614179   70284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/config.json ...
	I0401 19:32:00.614363   70284 machine.go:94] provisionDockerMachine start ...
	I0401 19:32:00.614382   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:00.614593   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:00.617019   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.617404   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.617430   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.617585   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:00.617780   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.617953   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.618098   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:00.618260   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:00.618451   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:00.618462   70284 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:32:00.730438   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:32:00.730473   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:32:00.730725   70284 buildroot.go:166] provisioning hostname "no-preload-472858"
	I0401 19:32:00.730754   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:32:00.730994   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:00.733932   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.734274   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.734308   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.734419   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:00.734591   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.734752   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.734918   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:00.735092   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:00.735296   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:00.735313   70284 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-472858 && echo "no-preload-472858" | sudo tee /etc/hostname
	I0401 19:32:00.865664   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-472858
	
	I0401 19:32:00.865702   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:00.868247   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.868619   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.868649   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.868845   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:00.869037   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.869244   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.869420   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:00.869671   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:00.869840   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:00.869859   70284 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-472858' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-472858/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-472858' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:32:00.991430   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:32:00.991460   70284 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:32:00.991484   70284 buildroot.go:174] setting up certificates
	I0401 19:32:00.991493   70284 provision.go:84] configureAuth start
	I0401 19:32:00.991504   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:32:00.991748   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetIP
	I0401 19:32:00.994239   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.994566   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.994596   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.994722   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:00.996735   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.997064   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.997090   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.997212   70284 provision.go:143] copyHostCerts
	I0401 19:32:00.997265   70284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:32:00.997281   70284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:32:00.997346   70284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:32:00.997493   70284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:32:00.997507   70284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:32:00.997533   70284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:32:00.997619   70284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:32:00.997629   70284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:32:00.997667   70284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:32:00.997733   70284 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.no-preload-472858 san=[127.0.0.1 192.168.72.119 localhost minikube no-preload-472858]
	I0401 19:32:01.212397   70284 provision.go:177] copyRemoteCerts
	I0401 19:32:01.212453   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:32:01.212473   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.214810   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.215170   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.215198   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.215398   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.215603   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.215761   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.215903   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:32:01.303113   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 19:32:01.331807   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 19:32:01.358429   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:32:01.384521   70284 provision.go:87] duration metric: took 393.005717ms to configureAuth
	I0401 19:32:01.384559   70284 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:32:01.384748   70284 config.go:182] Loaded profile config "no-preload-472858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0401 19:32:01.384862   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.387446   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.387828   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.387866   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.387966   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.388168   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.388356   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.388509   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.388663   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:01.388847   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:01.388867   70284 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:32:01.692586   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:32:01.692615   70284 machine.go:97] duration metric: took 1.078237975s to provisionDockerMachine
	I0401 19:32:01.692628   70284 start.go:293] postStartSetup for "no-preload-472858" (driver="kvm2")
	I0401 19:32:01.692644   70284 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:32:01.692668   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.692988   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:32:01.693012   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.696033   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.696405   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.696450   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.696603   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.696763   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.696901   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.697089   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:32:01.786626   70284 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:32:01.791703   70284 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:32:01.791726   70284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:32:01.791802   70284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:32:01.791901   70284 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:32:01.791991   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:32:01.803733   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:32:01.831768   70284 start.go:296] duration metric: took 139.126077ms for postStartSetup
	I0401 19:32:01.831804   70284 fix.go:56] duration metric: took 20.628199635s for fixHost
	I0401 19:32:01.831823   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.834218   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.834548   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.834574   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.834725   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.834901   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.835066   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.835188   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.835327   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:01.835544   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:01.835558   70284 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 19:31:57.607923   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:59.608857   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:02.106942   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:58.123200   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:58.624028   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:59.123026   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:59.623993   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:00.123039   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:00.623632   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:01.123204   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:01.623162   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:02.123264   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:02.623788   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:01.947198   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999921.892647753
	
	I0401 19:32:01.947267   70284 fix.go:216] guest clock: 1711999921.892647753
	I0401 19:32:01.947279   70284 fix.go:229] Guest: 2024-04-01 19:32:01.892647753 +0000 UTC Remote: 2024-04-01 19:32:01.831808507 +0000 UTC m=+359.938807685 (delta=60.839246ms)
	I0401 19:32:01.947305   70284 fix.go:200] guest clock delta is within tolerance: 60.839246ms
	I0401 19:32:01.947317   70284 start.go:83] releasing machines lock for "no-preload-472858", held for 20.743748352s
	I0401 19:32:01.947347   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.947621   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetIP
	I0401 19:32:01.950387   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.950719   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.950750   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.950940   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.951438   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.951631   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.951681   70284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:32:01.951737   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.951854   70284 ssh_runner.go:195] Run: cat /version.json
	I0401 19:32:01.951881   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.954468   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.954603   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.954780   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.954815   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.954932   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.954960   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.954984   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.955193   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.955230   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.955341   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.955388   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.955510   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.955501   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:32:01.955670   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:32:02.035332   70284 ssh_runner.go:195] Run: systemctl --version
	I0401 19:32:02.061178   70284 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:32:02.220309   70284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:32:02.227811   70284 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:32:02.227885   70284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:32:02.247605   70284 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:32:02.247634   70284 start.go:494] detecting cgroup driver to use...
	I0401 19:32:02.247690   70284 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:32:02.265463   70284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:32:02.280175   70284 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:32:02.280246   70284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:32:02.295003   70284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:32:02.315072   70284 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:32:02.449108   70284 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:32:02.627772   70284 docker.go:233] disabling docker service ...
	I0401 19:32:02.627850   70284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:32:02.642924   70284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:32:02.657038   70284 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:32:02.787085   70284 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:32:02.918355   70284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:32:02.934828   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:32:02.955495   70284 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:32:02.955548   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:02.966690   70284 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:32:02.966754   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:02.977812   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:02.989329   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:03.000727   70284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:32:03.012341   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:03.023305   70284 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:03.044213   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:03.055614   70284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:32:03.065880   70284 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:32:03.065927   70284 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:32:03.080514   70284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:32:03.090798   70284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:32:03.224199   70284 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:32:03.389414   70284 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:32:03.389482   70284 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:32:03.395493   70284 start.go:562] Will wait 60s for crictl version
	I0401 19:32:03.395539   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.399739   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:32:03.441020   70284 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:32:03.441114   70284 ssh_runner.go:195] Run: crio --version
	I0401 19:32:03.474572   70284 ssh_runner.go:195] Run: crio --version
	I0401 19:32:03.511681   70284 out.go:177] * Preparing Kubernetes v1.30.0-rc.0 on CRI-O 1.29.1 ...
	I0401 19:32:02.825628   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:04.825973   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:03.513067   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetIP
	I0401 19:32:03.515901   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:03.516281   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:03.516315   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:03.516523   70284 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0401 19:32:03.521197   70284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:32:03.536333   70284 kubeadm.go:877] updating cluster {Name:no-preload-472858 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0-rc.0 ClusterName:no-preload-472858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:32:03.536459   70284 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0401 19:32:03.536507   70284 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:32:03.582858   70284 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.0". assuming images are not preloaded.
	I0401 19:32:03.582887   70284 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.0 registry.k8s.io/kube-controller-manager:v1.30.0-rc.0 registry.k8s.io/kube-scheduler:v1.30.0-rc.0 registry.k8s.io/kube-proxy:v1.30.0-rc.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 19:32:03.582970   70284 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:03.583026   70284 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:03.583032   70284 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.583071   70284 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.583161   70284 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.582997   70284 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:03.583238   70284 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0401 19:32:03.583388   70284 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:03.584618   70284 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.584626   70284 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:03.584630   70284 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:03.584619   70284 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:03.584640   70284 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0401 19:32:03.584626   70284 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:03.584701   70284 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.584856   70284 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.730086   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.752217   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.765621   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.766526   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:03.770748   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0401 19:32:03.777614   70284 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0401 19:32:03.777672   70284 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.777699   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.840814   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:03.852416   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:03.869889   70284 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" does not exist at hash "e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3" in container runtime
	I0401 19:32:03.869929   70284 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.869979   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.874654   70284 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" does not exist at hash "ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a" in container runtime
	I0401 19:32:03.874693   70284 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.874737   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.899207   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:03.906139   70284 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" does not exist at hash "fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5" in container runtime
	I0401 19:32:03.906182   70284 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:03.906227   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.996916   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.996987   70284 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.0" does not exist at hash "33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652" in container runtime
	I0401 19:32:03.997022   70284 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0401 19:32:03.997045   70284 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:03.997053   70284 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:03.997054   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.997089   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.997128   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.997142   70284 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0401 19:32:03.997090   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.997164   70284 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:03.997194   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.997211   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:04.090272   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:04.090548   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0401 19:32:04.090639   70284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0401 19:32:04.102041   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:04.102130   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0
	I0401 19:32:04.102168   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0
	I0401 19:32:04.102226   70284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0401 19:32:04.102241   70284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0401 19:32:04.102278   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:04.108100   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0
	I0401 19:32:04.108192   70284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0401 19:32:04.182707   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0401 19:32:04.182747   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0401 19:32:04.182759   70284 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0401 19:32:04.182815   70284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0401 19:32:04.182820   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0401 19:32:04.182883   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0
	I0401 19:32:04.182988   70284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0401 19:32:04.186135   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0 (exists)
	I0401 19:32:04.186175   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0 (exists)
	I0401 19:32:04.186221   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0 (exists)
	I0401 19:32:04.186242   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0401 19:32:04.186324   70284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0401 19:32:06.352362   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.169442796s)
	I0401 19:32:06.352398   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0401 19:32:06.352419   70284 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0401 19:32:06.352416   70284 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0: (2.16957379s)
	I0401 19:32:06.352443   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0401 19:32:06.352465   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0401 19:32:06.352465   70284 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0: (2.16945688s)
	I0401 19:32:06.352479   70284 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.166139431s)
	I0401 19:32:06.352490   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0401 19:32:06.352491   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0 (exists)
	I0401 19:32:04.109989   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:06.294038   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:03.123452   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:03.623784   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:04.123649   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:04.623076   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:05.123822   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:05.623487   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:06.123635   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:06.623689   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:07.123919   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:07.623237   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:06.826244   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:09.326937   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:09.261547   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0: (2.909056315s)
	I0401 19:32:09.261572   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0 from cache
	I0401 19:32:09.261600   70284 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0401 19:32:09.261668   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0401 19:32:11.739636   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0: (2.477945807s)
	I0401 19:32:11.739667   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0 from cache
	I0401 19:32:11.739702   70284 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0401 19:32:11.739761   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0401 19:32:08.609901   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:11.114752   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:08.123689   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:08.623160   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:09.124002   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:09.623090   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:10.123049   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:10.623111   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:11.123042   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:11.623980   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:12.123074   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:12.623530   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:11.826409   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:13.828437   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:16.326097   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:13.195232   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0: (1.455440816s)
	I0401 19:32:13.195267   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0 from cache
	I0401 19:32:13.195299   70284 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0401 19:32:13.195350   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0401 19:32:13.607042   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:16.107993   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:13.123428   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:13.623899   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:14.123324   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:14.623889   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:15.123496   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:15.623779   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:16.124012   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:16.623620   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:17.123867   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:17.623014   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:18.326127   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:20.326575   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:17.202247   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.006869591s)
	I0401 19:32:17.202284   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0401 19:32:17.202315   70284 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0401 19:32:17.202364   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0401 19:32:17.962735   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0401 19:32:17.962785   70284 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0401 19:32:17.962850   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0401 19:32:20.235136   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0: (2.272262595s)
	I0401 19:32:20.235161   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0 from cache
	I0401 19:32:20.235193   70284 cache_images.go:123] Successfully loaded all cached images
	I0401 19:32:20.235197   70284 cache_images.go:92] duration metric: took 16.652290938s to LoadCachedImages
	I0401 19:32:20.235205   70284 kubeadm.go:928] updating node { 192.168.72.119 8443 v1.30.0-rc.0 crio true true} ...
	I0401 19:32:20.235332   70284 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-472858 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-472858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:32:20.235402   70284 ssh_runner.go:195] Run: crio config
	I0401 19:32:20.296015   70284 cni.go:84] Creating CNI manager for ""
	I0401 19:32:20.296039   70284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:32:20.296050   70284 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:32:20.296074   70284 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.119 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-472858 NodeName:no-preload-472858 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:32:20.296217   70284 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-472858"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:32:20.296275   70284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.0
	I0401 19:32:20.307937   70284 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:32:20.308009   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:32:20.318571   70284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0401 19:32:20.339284   70284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0401 19:32:20.358601   70284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0401 19:32:20.379394   70284 ssh_runner.go:195] Run: grep 192.168.72.119	control-plane.minikube.internal$ /etc/hosts
	I0401 19:32:20.383948   70284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:32:20.397559   70284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:32:20.549147   70284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:32:20.568027   70284 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858 for IP: 192.168.72.119
	I0401 19:32:20.568051   70284 certs.go:194] generating shared ca certs ...
	I0401 19:32:20.568070   70284 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:32:20.568273   70284 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:32:20.568337   70284 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:32:20.568352   70284 certs.go:256] generating profile certs ...
	I0401 19:32:20.568453   70284 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/client.key
	I0401 19:32:20.568534   70284 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/apiserver.key.bfc8ff8f
	I0401 19:32:20.568586   70284 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/proxy-client.key
	I0401 19:32:20.568691   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:32:20.568718   70284 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:32:20.568728   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:32:20.568747   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:32:20.568773   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:32:20.568795   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:32:20.568830   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:32:20.569519   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:32:20.605218   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:32:20.650321   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:32:20.676884   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:32:20.705378   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 19:32:20.733068   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:32:20.767387   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:32:20.793543   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:32:20.820843   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:32:20.848364   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:32:20.877551   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:32:20.904650   70284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:32:20.922876   70284 ssh_runner.go:195] Run: openssl version
	I0401 19:32:20.929441   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:32:20.942496   70284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:32:20.948011   70284 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:32:20.948080   70284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:32:20.954320   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:32:20.968060   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:32:20.981591   70284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:32:20.986660   70284 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:32:20.986706   70284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:32:20.993394   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:32:21.006530   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:32:21.020014   70284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:32:21.025507   70284 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:32:21.025560   70284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:32:21.032433   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:32:21.047002   70284 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:32:21.052551   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:32:21.059875   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:32:21.067243   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:32:21.074304   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:32:21.080978   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:32:21.088051   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 19:32:21.095219   70284 kubeadm.go:391] StartCluster: {Name:no-preload-472858 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0-rc.0 ClusterName:no-preload-472858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:32:21.095325   70284 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:32:21.095403   70284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:32:21.144103   70284 cri.go:89] found id: ""
	I0401 19:32:21.144187   70284 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:32:21.157222   70284 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:32:21.157241   70284 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:32:21.157246   70284 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:32:21.157290   70284 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:32:21.169027   70284 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:32:21.170123   70284 kubeconfig.go:125] found "no-preload-472858" server: "https://192.168.72.119:8443"
	I0401 19:32:21.172523   70284 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:32:21.183801   70284 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.119
	I0401 19:32:21.183838   70284 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:32:21.183847   70284 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:32:21.183892   70284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:32:21.229279   70284 cri.go:89] found id: ""
	I0401 19:32:21.229357   70284 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:32:21.249719   70284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:32:21.261894   70284 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:32:21.261929   70284 kubeadm.go:156] found existing configuration files:
	
	I0401 19:32:21.261984   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:32:21.273961   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:32:21.274026   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:32:21.286746   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:32:21.297920   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:32:21.297986   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:32:21.308793   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:32:21.319612   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:32:21.319658   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:32:21.332730   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:32:21.344752   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:32:21.344810   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:32:21.355821   70284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:32:21.366649   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:21.482208   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:18.607685   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:20.607824   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:18.123795   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:18.623529   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:19.123446   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:19.623223   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:20.123133   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:20.623058   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:21.123302   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:21.623115   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:22.123810   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:22.623878   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:22.826056   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:24.826357   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:22.312148   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:22.533156   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:22.620390   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:22.704948   70284 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:32:22.705039   70284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.205114   70284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.706000   70284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.725209   70284 api_server.go:72] duration metric: took 1.020261742s to wait for apiserver process to appear ...
	I0401 19:32:23.725243   70284 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:32:23.725264   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:23.725749   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": dial tcp 192.168.72.119:8443: connect: connection refused
	I0401 19:32:24.226383   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:23.107450   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:25.109899   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:23.123507   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.623244   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:24.123444   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:24.623346   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:25.123834   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:25.623814   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:26.124028   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:26.623428   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:27.123592   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:27.623451   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:27.327961   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:29.826272   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:29.226831   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:29.226876   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:27.607575   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:29.608427   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:32.106668   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:28.123454   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:28.623502   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:29.123265   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:29.623449   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:30.123525   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:30.623634   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:31.123972   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:31.623023   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:32.123346   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:32.623839   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:32.325638   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:34.325777   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:36.326510   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:34.227668   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:34.227723   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:34.606929   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:36.607515   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:33.123673   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:33.623088   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:34.123230   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:34.623967   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:35.123420   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:35.623499   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:36.123152   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:36.623963   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:37.123682   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:37.623536   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:38.829585   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:41.325607   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:39.228117   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:39.228164   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:39.107473   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:41.607043   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:38.123238   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:38.623831   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:39.123180   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:39.623801   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:40.123478   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:40.623651   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:41.123687   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:41.624016   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:42.123891   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:42.623493   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:43.326457   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:45.827310   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:44.228934   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:44.228982   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:44.259601   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": read tcp 192.168.72.1:37026->192.168.72.119:8443: read: connection reset by peer
	I0401 19:32:44.726186   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:44.726759   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": dial tcp 192.168.72.119:8443: connect: connection refused
	I0401 19:32:45.226347   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:43.607936   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:46.106775   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:43.123504   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:43.623527   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:44.124016   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:44.623931   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:45.123188   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:45.623649   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:46.123570   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:46.623179   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:47.123273   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:47.623842   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:48.325252   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:50.327365   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:50.226859   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:50.226907   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:48.109152   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:50.607327   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:48.123759   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:48.623092   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:49.123174   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:49.623986   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:50.123301   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:50.623694   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:51.123466   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:51.623618   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
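	[editor's note] The long run of ssh_runner.go lines above is the same "wait for the apiserver process" step, polling pgrep every ~500ms inside the guest; here it never succeeds, so minikube falls through to the CRI inspection that starts on the next line. A small local sketch of that wait loop (minikube runs the identical command over SSH in the VM, not locally):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess polls `pgrep -xnf kube-apiserver.*minikube.*`
	// until the process appears or the timeout expires.
	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil && len(out) > 0 {
				fmt.Printf("kube-apiserver pid: %s", out)
				return nil
			}
			time.Sleep(500 * time.Millisecond) // same ~500ms cadence as the log
		}
		return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
	}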
	I0401 19:32:52.123073   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:32:52.123172   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:32:52.164635   71168 cri.go:89] found id: ""
	I0401 19:32:52.164656   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.164663   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:32:52.164669   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:32:52.164738   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:32:52.202531   71168 cri.go:89] found id: ""
	I0401 19:32:52.202560   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.202572   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:32:52.202580   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:32:52.202653   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:32:52.247667   71168 cri.go:89] found id: ""
	I0401 19:32:52.247693   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.247703   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:32:52.247714   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:32:52.247774   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:32:52.289029   71168 cri.go:89] found id: ""
	I0401 19:32:52.289054   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.289062   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:32:52.289068   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:32:52.289114   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:32:52.326820   71168 cri.go:89] found id: ""
	I0401 19:32:52.326864   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.326875   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:32:52.326882   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:32:52.326944   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:32:52.362793   71168 cri.go:89] found id: ""
	I0401 19:32:52.362827   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.362838   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:32:52.362845   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:32:52.362950   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:32:52.400174   71168 cri.go:89] found id: ""
	I0401 19:32:52.400204   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.400215   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:32:52.400222   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:32:52.400282   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:32:52.436027   71168 cri.go:89] found id: ""
	I0401 19:32:52.436056   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.436066   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
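	[editor's note] The cri.go/logs.go sweep above asks crictl for containers of each control-plane component in turn and reports when none exist. A compact sketch of the same sweep; as with the pgrep example, minikube executes these commands over SSH inside the guest rather than locally.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeContainers queries crictl for each expected component (all states)
	// and reports whether any container IDs were found.
	func listKubeContainers() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
		}
	}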
	I0401 19:32:52.436085   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:32:52.436099   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:32:52.477246   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:32:52.477272   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:32:52.529215   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:32:52.529247   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:32:52.544695   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:32:52.544724   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:32:52.677816   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:32:52.677849   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:32:52.677877   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
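	[editor's note] When no control-plane containers are found, the "Gathering logs for ..." steps above fall back to host-level sources: kubelet and CRI-O journals, dmesg, `kubectl describe nodes`, and raw container status. The describe-nodes step fails here because nothing is listening on localhost:8443. A rough sketch of that fallback sequence, run locally for illustration instead of over SSH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gatherDiagnostics runs the same host-level commands the log shows,
	// in order, and prints whatever each one returns.
	func gatherDiagnostics() {
		sources := []struct{ name, cmd string }{
			{"container status", "sudo crictl ps -a"},
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
			{"CRI-O", "sudo journalctl -u crio -n 400"},
		}
		for _, s := range sources {
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("gathering %s failed: %v\n%s\n", s.name, err, out)
				continue
			}
			fmt.Printf("=== %s ===\n%s\n", s.name, out)
		}
	}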
	I0401 19:32:52.825288   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:54.826043   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:55.228105   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:55.228139   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:53.106774   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:55.107668   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:55.241224   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:55.256975   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:32:55.257045   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:32:55.298280   71168 cri.go:89] found id: ""
	I0401 19:32:55.298307   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.298319   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:32:55.298326   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:32:55.298397   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:32:55.337707   71168 cri.go:89] found id: ""
	I0401 19:32:55.337732   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.337739   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:32:55.337745   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:32:55.337791   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:32:55.381455   71168 cri.go:89] found id: ""
	I0401 19:32:55.381479   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.381490   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:32:55.381496   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:32:55.381557   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:32:55.420715   71168 cri.go:89] found id: ""
	I0401 19:32:55.420739   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.420749   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:32:55.420756   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:32:55.420820   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:32:55.459546   71168 cri.go:89] found id: ""
	I0401 19:32:55.459575   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.459583   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:32:55.459588   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:32:55.459634   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:32:55.504240   71168 cri.go:89] found id: ""
	I0401 19:32:55.504267   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.504277   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:32:55.504285   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:32:55.504368   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:32:55.539399   71168 cri.go:89] found id: ""
	I0401 19:32:55.539426   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.539437   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:32:55.539443   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:32:55.539509   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:32:55.583823   71168 cri.go:89] found id: ""
	I0401 19:32:55.583861   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.583872   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:32:55.583881   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:32:55.583895   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:32:55.645489   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:32:55.645523   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:32:55.712883   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:32:55.712920   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:32:55.734890   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:32:55.734923   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:32:55.853068   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:32:55.853089   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:32:55.853102   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:32:57.325965   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:59.827753   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:00.228533   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:33:00.228582   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:57.607203   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:59.610732   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:02.108676   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:58.435925   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:58.450910   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:32:58.450980   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:32:58.487470   71168 cri.go:89] found id: ""
	I0401 19:32:58.487495   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.487506   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:32:58.487514   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:32:58.487562   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:32:58.529513   71168 cri.go:89] found id: ""
	I0401 19:32:58.529534   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.529543   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:32:58.529547   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:32:58.529592   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:32:58.574170   71168 cri.go:89] found id: ""
	I0401 19:32:58.574197   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.574205   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:32:58.574211   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:32:58.574258   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:32:58.615379   71168 cri.go:89] found id: ""
	I0401 19:32:58.615405   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.615414   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:32:58.615419   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:32:58.615468   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:32:58.655496   71168 cri.go:89] found id: ""
	I0401 19:32:58.655523   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.655534   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:32:58.655542   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:32:58.655593   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:32:58.697199   71168 cri.go:89] found id: ""
	I0401 19:32:58.697229   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.697238   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:32:58.697246   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:32:58.697312   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:32:58.735618   71168 cri.go:89] found id: ""
	I0401 19:32:58.735643   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.735651   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:32:58.735656   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:32:58.735701   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:32:58.780583   71168 cri.go:89] found id: ""
	I0401 19:32:58.780613   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.780624   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:32:58.780635   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:32:58.780649   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:32:58.829717   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:32:58.829743   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:32:58.844836   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:32:58.844866   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:32:58.923138   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:32:58.923157   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:32:58.923172   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:32:58.993680   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:32:58.993713   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:01.538920   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:01.556943   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:01.557017   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:01.608397   71168 cri.go:89] found id: ""
	I0401 19:33:01.608417   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.608425   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:01.608430   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:01.608490   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:01.666573   71168 cri.go:89] found id: ""
	I0401 19:33:01.666599   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.666609   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:01.666615   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:01.666674   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:01.726308   71168 cri.go:89] found id: ""
	I0401 19:33:01.726331   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.726341   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:01.726347   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:01.726412   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:01.773095   71168 cri.go:89] found id: ""
	I0401 19:33:01.773118   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.773125   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:01.773131   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:01.773189   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:01.813011   71168 cri.go:89] found id: ""
	I0401 19:33:01.813034   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.813042   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:01.813048   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:01.813096   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:01.859124   71168 cri.go:89] found id: ""
	I0401 19:33:01.859151   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.859161   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:01.859169   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:01.859228   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:01.904491   71168 cri.go:89] found id: ""
	I0401 19:33:01.904519   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.904530   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:01.904537   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:01.904596   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:01.946768   71168 cri.go:89] found id: ""
	I0401 19:33:01.946794   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.946804   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:01.946815   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:01.946829   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:02.026315   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:02.026362   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:02.072861   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:02.072893   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:02.132064   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:02.132105   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:02.151545   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:02.151575   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:02.234059   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:02.325806   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:04.327258   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:03.215901   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:33:03.215933   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:33:03.215947   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:03.264913   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:33:03.264946   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:33:03.264961   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:03.272548   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:33:03.272580   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:33:03.726254   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:03.731022   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:03.731050   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
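	[editor's note] The progression above is the apiserver coming up: connection refused while the process starts, then 403 because the anonymous probe is rejected until the RBAC bootstrap post-start hook runs, then 500 with a per-check breakdown while the remaining poststarthooks finish, and finally 200 once every check reports ok. A small helper like the one below can pull the still-failing checks ("[-]" lines) out of such a verbose /healthz body; it is illustrative only, not part of minikube.

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// failedChecks extracts the "[-]..." entries from a verbose /healthz body,
	// i.e. the checks that are still blocking apiserver readiness.
	func failedChecks(body string) []string {
		var failed []string
		scanner := bufio.NewScanner(strings.NewReader(body))
		for scanner.Scan() {
			line := strings.TrimSpace(scanner.Text())
			if strings.HasPrefix(line, "[-]") {
				failed = append(failed, strings.TrimPrefix(line, "[-]"))
			}
		}
		return failed
	}

	func main() {
		body := "[+]ping ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]etcd ok\n"
		fmt.Println(failedChecks(body)) // [poststarthook/rbac/bootstrap-roles failed: reason withheld]
	}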
	I0401 19:33:04.225595   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:04.237757   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:04.237783   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:04.725330   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:04.734019   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:04.734047   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:05.225303   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:05.242774   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:05.242811   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:05.726350   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:05.730775   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:05.730838   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:06.225345   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:06.229749   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:06.229793   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:06.725687   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:06.730607   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:06.730640   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:04.112109   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:06.606160   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:04.734559   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:04.755071   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:04.755130   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:04.798316   71168 cri.go:89] found id: ""
	I0401 19:33:04.798345   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.798358   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:04.798366   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:04.798426   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:04.840011   71168 cri.go:89] found id: ""
	I0401 19:33:04.840032   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.840043   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:04.840050   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:04.840106   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:04.883686   71168 cri.go:89] found id: ""
	I0401 19:33:04.883713   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.883725   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:04.883733   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:04.883795   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:04.933810   71168 cri.go:89] found id: ""
	I0401 19:33:04.933844   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.933855   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:04.933863   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:04.933925   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:04.983118   71168 cri.go:89] found id: ""
	I0401 19:33:04.983139   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.983146   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:04.983151   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:04.983207   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:05.036146   71168 cri.go:89] found id: ""
	I0401 19:33:05.036169   71168 logs.go:276] 0 containers: []
	W0401 19:33:05.036179   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:05.036186   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:05.036242   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:05.086269   71168 cri.go:89] found id: ""
	I0401 19:33:05.086296   71168 logs.go:276] 0 containers: []
	W0401 19:33:05.086308   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:05.086315   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:05.086378   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:05.140893   71168 cri.go:89] found id: ""
	I0401 19:33:05.140914   71168 logs.go:276] 0 containers: []
	W0401 19:33:05.140922   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:05.140931   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:05.140946   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:05.161222   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:05.161249   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:05.262254   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:05.262276   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:05.262289   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:05.352880   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:05.352908   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:05.400720   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:05.400748   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:07.954227   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:07.225774   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:07.230656   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:07.230684   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:07.726299   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:07.731793   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:07.731830   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:08.225362   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:08.229716   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:08.229755   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:08.725315   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:08.733428   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 200:
	ok
	I0401 19:33:08.739761   70284 api_server.go:141] control plane version: v1.30.0-rc.0
	I0401 19:33:08.739788   70284 api_server.go:131] duration metric: took 45.014537527s to wait for apiserver health ...
	I0401 19:33:08.739796   70284 cni.go:84] Creating CNI manager for ""
	I0401 19:33:08.739802   70284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:33:08.741701   70284 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:33:06.825165   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:08.829987   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:11.327172   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:08.743011   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:33:08.758184   70284 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0401 19:33:08.778975   70284 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:33:08.789725   70284 system_pods.go:59] 8 kube-system pods found
	I0401 19:33:08.789763   70284 system_pods.go:61] "coredns-7db6d8ff4d-gdml5" [039c8887-dff0-40e5-b8b5-00ef2f4a21cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:33:08.789771   70284 system_pods.go:61] "etcd-no-preload-472858" [09086659-e20f-40da-b01f-3690e110ffeb] Running
	I0401 19:33:08.789781   70284 system_pods.go:61] "kube-apiserver-no-preload-472858" [5139434c-3d23-4736-86ad-28253c89f7da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0401 19:33:08.789794   70284 system_pods.go:61] "kube-controller-manager-no-preload-472858" [965d600a-612e-4625-b883-7105f9166503] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0401 19:33:08.789806   70284 system_pods.go:61] "kube-proxy-7c22p" [903412f5-252c-41f3-81ac-1ae47522b403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:33:08.789820   70284 system_pods.go:61] "kube-scheduler-no-preload-472858" [936981be-fc5e-4865-811c-936fab59f37b] Running
	I0401 19:33:08.789832   70284 system_pods.go:61] "metrics-server-569cc877fc-wlr7k" [14010e9a-9662-46c9-bc46-cc6d19c0cddf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:33:08.789839   70284 system_pods.go:61] "storage-provisioner" [2e5d9f78-e74c-4b3b-8878-e4bd8ce34108] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:33:08.789861   70284 system_pods.go:74] duration metric: took 10.868458ms to wait for pod list to return data ...
	I0401 19:33:08.789874   70284 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:33:08.793853   70284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:33:08.793883   70284 node_conditions.go:123] node cpu capacity is 2
	I0401 19:33:08.793897   70284 node_conditions.go:105] duration metric: took 4.016996ms to run NodePressure ...
	I0401 19:33:08.793916   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:33:09.081698   70284 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0401 19:33:09.085681   70284 kubeadm.go:733] kubelet initialised
	I0401 19:33:09.085699   70284 kubeadm.go:734] duration metric: took 3.976973ms waiting for restarted kubelet to initialise ...
	I0401 19:33:09.085705   70284 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:33:09.090647   70284 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:11.102738   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:08.608194   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:11.109659   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:07.970794   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:07.970850   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:08.013694   71168 cri.go:89] found id: ""
	I0401 19:33:08.013719   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.013729   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:08.013737   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:08.013810   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:08.050810   71168 cri.go:89] found id: ""
	I0401 19:33:08.050849   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.050861   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:08.050868   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:08.050932   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:08.092056   71168 cri.go:89] found id: ""
	I0401 19:33:08.092086   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.092096   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:08.092102   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:08.092157   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:08.133171   71168 cri.go:89] found id: ""
	I0401 19:33:08.133195   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.133205   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:08.133212   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:08.133271   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:08.173997   71168 cri.go:89] found id: ""
	I0401 19:33:08.174023   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.174034   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:08.174041   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:08.174102   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:08.212740   71168 cri.go:89] found id: ""
	I0401 19:33:08.212768   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.212778   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:08.212785   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:08.212831   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:08.254815   71168 cri.go:89] found id: ""
	I0401 19:33:08.254837   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.254847   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:08.254854   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:08.254909   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:08.295347   71168 cri.go:89] found id: ""
	I0401 19:33:08.295375   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.295382   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:08.295390   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:08.295402   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:08.311574   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:08.311600   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:08.405437   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:08.405455   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:08.405470   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:08.483687   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:08.483722   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:08.526132   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:08.526158   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:11.076590   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:11.093846   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:11.093983   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:11.146046   71168 cri.go:89] found id: ""
	I0401 19:33:11.146073   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.146083   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:11.146088   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:11.146146   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:11.193751   71168 cri.go:89] found id: ""
	I0401 19:33:11.193782   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.193793   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:11.193801   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:11.193873   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:11.242150   71168 cri.go:89] found id: ""
	I0401 19:33:11.242178   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.242189   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:11.242197   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:11.242271   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:11.294063   71168 cri.go:89] found id: ""
	I0401 19:33:11.294092   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.294103   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:11.294110   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:11.294175   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:11.334764   71168 cri.go:89] found id: ""
	I0401 19:33:11.334784   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.334791   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:11.334797   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:11.334846   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:11.372770   71168 cri.go:89] found id: ""
	I0401 19:33:11.372789   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.372795   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:11.372806   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:11.372871   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:11.413233   71168 cri.go:89] found id: ""
	I0401 19:33:11.413261   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.413271   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:11.413278   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:11.413337   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:11.456044   71168 cri.go:89] found id: ""
	I0401 19:33:11.456073   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.456084   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:11.456093   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:11.456103   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:11.471157   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:11.471183   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:11.550489   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:11.550508   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:11.550523   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:11.635360   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:11.635389   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:11.680683   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:11.680713   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:13.827425   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:16.325563   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:13.104812   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:15.602114   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:13.607926   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:16.107219   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:14.235295   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:14.251513   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:14.251590   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:14.291688   71168 cri.go:89] found id: ""
	I0401 19:33:14.291715   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.291725   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:14.291732   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:14.291792   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:14.332030   71168 cri.go:89] found id: ""
	I0401 19:33:14.332051   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.332060   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:14.332068   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:14.332132   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:14.372098   71168 cri.go:89] found id: ""
	I0401 19:33:14.372122   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.372130   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:14.372137   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:14.372183   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:14.410529   71168 cri.go:89] found id: ""
	I0401 19:33:14.410554   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.410563   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:14.410570   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:14.410624   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:14.451198   71168 cri.go:89] found id: ""
	I0401 19:33:14.451226   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.451238   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:14.451246   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:14.451306   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:14.494588   71168 cri.go:89] found id: ""
	I0401 19:33:14.494616   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.494627   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:14.494635   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:14.494689   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:14.537561   71168 cri.go:89] found id: ""
	I0401 19:33:14.537583   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.537590   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:14.537597   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:14.537674   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:14.580624   71168 cri.go:89] found id: ""
	I0401 19:33:14.580651   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.580662   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:14.580672   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:14.580688   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:14.635769   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:14.635798   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:14.650275   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:14.650304   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:14.742355   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:14.742378   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:14.742394   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:14.827839   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:14.827869   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:17.373408   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:17.390110   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:17.390185   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:17.432355   71168 cri.go:89] found id: ""
	I0401 19:33:17.432384   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.432396   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:17.432409   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:17.432471   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:17.476458   71168 cri.go:89] found id: ""
	I0401 19:33:17.476484   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.476495   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:17.476502   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:17.476587   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:17.519657   71168 cri.go:89] found id: ""
	I0401 19:33:17.519686   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.519694   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:17.519699   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:17.519751   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:17.559962   71168 cri.go:89] found id: ""
	I0401 19:33:17.559985   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.559992   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:17.559997   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:17.560054   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:17.608924   71168 cri.go:89] found id: ""
	I0401 19:33:17.608995   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.609009   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:17.609016   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:17.609075   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:17.648371   71168 cri.go:89] found id: ""
	I0401 19:33:17.648394   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.648401   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:17.648406   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:17.648462   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:17.689217   71168 cri.go:89] found id: ""
	I0401 19:33:17.689239   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.689246   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:17.689252   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:17.689312   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:17.741738   71168 cri.go:89] found id: ""
	I0401 19:33:17.741768   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.741779   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:17.741790   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:17.741805   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:17.839857   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:17.839887   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:17.888684   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:17.888716   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:17.944268   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:17.944298   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:17.959305   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:17.959334   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 19:33:18.327388   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:20.826627   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:18.100065   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:20.100714   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:18.107770   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:20.108880   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	W0401 19:33:18.040820   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:20.541980   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:20.558198   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:20.558270   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:20.596329   71168 cri.go:89] found id: ""
	I0401 19:33:20.596357   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.596366   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:20.596373   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:20.596431   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:20.638611   71168 cri.go:89] found id: ""
	I0401 19:33:20.638639   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.638664   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:20.638672   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:20.638729   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:20.677984   71168 cri.go:89] found id: ""
	I0401 19:33:20.678014   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.678024   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:20.678032   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:20.678080   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:20.718491   71168 cri.go:89] found id: ""
	I0401 19:33:20.718520   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.718530   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:20.718537   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:20.718597   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:20.772147   71168 cri.go:89] found id: ""
	I0401 19:33:20.772174   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.772185   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:20.772199   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:20.772258   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:20.823339   71168 cri.go:89] found id: ""
	I0401 19:33:20.823361   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.823372   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:20.823380   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:20.823463   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:20.884081   71168 cri.go:89] found id: ""
	I0401 19:33:20.884106   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.884117   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:20.884124   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:20.884185   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:20.931679   71168 cri.go:89] found id: ""
	I0401 19:33:20.931703   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.931713   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:20.931722   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:20.931736   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:21.016766   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:21.016797   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:21.067600   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:21.067632   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:21.136989   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:21.137045   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:21.152673   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:21.152706   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:21.250186   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:23.325222   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:25.326919   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:22.597922   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:24.602701   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:22.606659   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:24.606811   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:26.608185   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:23.750565   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:23.768458   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:23.768534   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:23.814489   71168 cri.go:89] found id: ""
	I0401 19:33:23.814534   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.814555   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:23.814565   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:23.814632   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:23.854954   71168 cri.go:89] found id: ""
	I0401 19:33:23.854981   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.854989   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:23.854995   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:23.855060   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:23.896115   71168 cri.go:89] found id: ""
	I0401 19:33:23.896148   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.896159   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:23.896169   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:23.896231   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:23.941300   71168 cri.go:89] found id: ""
	I0401 19:33:23.941324   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.941337   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:23.941344   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:23.941390   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:23.983955   71168 cri.go:89] found id: ""
	I0401 19:33:23.983982   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.983991   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:23.983997   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:23.984056   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:24.020756   71168 cri.go:89] found id: ""
	I0401 19:33:24.020777   71168 logs.go:276] 0 containers: []
	W0401 19:33:24.020784   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:24.020789   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:24.020835   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:24.063426   71168 cri.go:89] found id: ""
	I0401 19:33:24.063454   71168 logs.go:276] 0 containers: []
	W0401 19:33:24.063462   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:24.063467   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:24.063529   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:24.110924   71168 cri.go:89] found id: ""
	I0401 19:33:24.110945   71168 logs.go:276] 0 containers: []
	W0401 19:33:24.110952   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:24.110960   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:24.110969   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:24.179200   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:24.179240   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:24.194880   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:24.194909   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:24.280555   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:24.280588   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:24.280603   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:24.359502   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:24.359534   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
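	The loop above is minikube's diagnostic pass: for each control-plane component it runs `sudo crictl ps -a --quiet --name=<component>` over SSH, finds no containers, and then gathers kubelet, dmesg, CRI-O and container-status output. A minimal sketch of repeating the same checks by hand from a shell on the node (for example via `minikube ssh` into the affected profile; the profile name is not shown in this log and is left as a placeholder):

	# check whether any kube-apiserver container exists (same command the log runs)
	sudo crictl ps -a --quiet --name=kube-apiserver
	# recent kubelet and CRI-O logs, as gathered above
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	# overall container status
	sudo crictl ps -a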
	I0401 19:33:26.909147   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:26.925961   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:26.926028   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:26.969502   71168 cri.go:89] found id: ""
	I0401 19:33:26.969525   71168 logs.go:276] 0 containers: []
	W0401 19:33:26.969536   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:26.969543   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:26.969604   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:27.015205   71168 cri.go:89] found id: ""
	I0401 19:33:27.015232   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.015241   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:27.015246   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:27.015296   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:27.055943   71168 cri.go:89] found id: ""
	I0401 19:33:27.055968   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.055977   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:27.055983   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:27.056039   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:27.095447   71168 cri.go:89] found id: ""
	I0401 19:33:27.095474   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.095485   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:27.095497   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:27.095558   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:27.137912   71168 cri.go:89] found id: ""
	I0401 19:33:27.137941   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.137948   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:27.137954   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:27.138008   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:27.183303   71168 cri.go:89] found id: ""
	I0401 19:33:27.183325   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.183335   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:27.183344   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:27.183403   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:27.225780   71168 cri.go:89] found id: ""
	I0401 19:33:27.225804   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.225814   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:27.225822   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:27.225880   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:27.268136   71168 cri.go:89] found id: ""
	I0401 19:33:27.268159   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.268168   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:27.268191   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:27.268215   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:27.325527   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:27.325557   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:27.341727   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:27.341763   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:27.432369   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:27.432389   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:27.432403   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:27.523104   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:27.523135   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:27.826804   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:30.326279   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:27.099509   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:29.597830   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:31.598325   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:29.107400   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:31.107514   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:30.066147   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:30.079999   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:30.080062   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:30.121887   71168 cri.go:89] found id: ""
	I0401 19:33:30.121911   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.121920   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:30.121929   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:30.121986   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:30.163939   71168 cri.go:89] found id: ""
	I0401 19:33:30.163967   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.163978   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:30.163986   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:30.164051   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:30.203924   71168 cri.go:89] found id: ""
	I0401 19:33:30.203965   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.203977   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:30.203985   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:30.204048   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:30.243771   71168 cri.go:89] found id: ""
	I0401 19:33:30.243798   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.243809   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:30.243816   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:30.243888   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:30.284039   71168 cri.go:89] found id: ""
	I0401 19:33:30.284066   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.284074   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:30.284079   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:30.284127   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:30.327549   71168 cri.go:89] found id: ""
	I0401 19:33:30.327570   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.327577   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:30.327583   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:30.327630   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:30.365258   71168 cri.go:89] found id: ""
	I0401 19:33:30.365281   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.365291   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:30.365297   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:30.365352   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:30.405959   71168 cri.go:89] found id: ""
	I0401 19:33:30.405984   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.405992   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:30.405999   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:30.406011   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:30.480668   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:30.480692   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:30.480706   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:30.566042   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:30.566077   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:30.629250   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:30.629285   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:30.682185   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:30.682213   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:32.824844   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:34.826598   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:33.600555   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:36.100194   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:33.608315   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:36.106573   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:33.199466   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:33.213557   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:33.213630   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:33.255038   71168 cri.go:89] found id: ""
	I0401 19:33:33.255062   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.255072   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:33.255079   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:33.255143   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:33.297724   71168 cri.go:89] found id: ""
	I0401 19:33:33.297751   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.297761   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:33.297767   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:33.297836   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:33.340694   71168 cri.go:89] found id: ""
	I0401 19:33:33.340718   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.340727   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:33.340735   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:33.340794   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:33.388857   71168 cri.go:89] found id: ""
	I0401 19:33:33.388883   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.388891   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:33.388896   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:33.388940   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:33.430875   71168 cri.go:89] found id: ""
	I0401 19:33:33.430899   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.430906   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:33.430911   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:33.430966   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:33.479877   71168 cri.go:89] found id: ""
	I0401 19:33:33.479905   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.479917   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:33.479923   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:33.479968   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:33.522635   71168 cri.go:89] found id: ""
	I0401 19:33:33.522662   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.522672   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:33.522680   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:33.522737   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:33.560497   71168 cri.go:89] found id: ""
	I0401 19:33:33.560519   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.560527   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:33.560534   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:33.560549   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:33.612141   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:33.612170   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:33.665142   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:33.665170   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:33.681076   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:33.681100   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:33.755938   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:33.755966   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:33.755983   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:36.341957   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:36.359519   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:36.359586   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:36.416339   71168 cri.go:89] found id: ""
	I0401 19:33:36.416362   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.416373   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:36.416381   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:36.416442   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:36.473883   71168 cri.go:89] found id: ""
	I0401 19:33:36.473906   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.473918   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:36.473925   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:36.473988   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:36.521532   71168 cri.go:89] found id: ""
	I0401 19:33:36.521558   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.521568   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:36.521575   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:36.521639   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:36.563420   71168 cri.go:89] found id: ""
	I0401 19:33:36.563446   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.563454   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:36.563459   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:36.563520   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:36.605658   71168 cri.go:89] found id: ""
	I0401 19:33:36.605678   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.605689   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:36.605697   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:36.605759   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:36.645611   71168 cri.go:89] found id: ""
	I0401 19:33:36.645631   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.645638   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:36.645656   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:36.645715   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:36.685994   71168 cri.go:89] found id: ""
	I0401 19:33:36.686022   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.686033   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:36.686041   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:36.686099   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:36.725573   71168 cri.go:89] found id: ""
	I0401 19:33:36.725598   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.725608   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:36.725618   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:36.725630   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:36.778854   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:36.778885   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:36.795003   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:36.795036   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:36.872648   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:36.872666   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:36.872678   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:36.956648   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:36.956683   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
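	Every "describe nodes" attempt above fails the same way: the bundled v1.20.0 kubectl cannot reach the API server on localhost:8443, which is consistent with no kube-apiserver container ever being listed. A hedged sketch of confirming that on the node, using only the checks the log itself performs:

	# is an apiserver process running at all?
	sudo pgrep -xnf kube-apiserver.*minikube.*
	# has CRI-O created (or exited) an apiserver container?
	sudo crictl ps -a --quiet --name=kube-apiserver
	# if both are empty, the connection refused on localhost:8443 is expected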
	I0401 19:33:36.827745   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:38.830544   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:41.326012   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:38.597991   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:41.097044   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:38.107961   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:40.606475   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:39.502868   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:39.519090   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:39.519161   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:39.562347   71168 cri.go:89] found id: ""
	I0401 19:33:39.562371   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.562379   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:39.562384   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:39.562442   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:39.607250   71168 cri.go:89] found id: ""
	I0401 19:33:39.607276   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.607286   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:39.607293   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:39.607343   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:39.650683   71168 cri.go:89] found id: ""
	I0401 19:33:39.650704   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.650712   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:39.650717   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:39.650764   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:39.694676   71168 cri.go:89] found id: ""
	I0401 19:33:39.694706   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.694718   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:39.694724   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:39.694783   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:39.733873   71168 cri.go:89] found id: ""
	I0401 19:33:39.733901   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.733911   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:39.733919   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:39.733980   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:39.773625   71168 cri.go:89] found id: ""
	I0401 19:33:39.773668   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.773679   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:39.773686   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:39.773735   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:39.815020   71168 cri.go:89] found id: ""
	I0401 19:33:39.815053   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.815064   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:39.815071   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:39.815134   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:39.855575   71168 cri.go:89] found id: ""
	I0401 19:33:39.855606   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.855615   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:39.855626   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:39.855641   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:39.873827   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:39.873857   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:39.948487   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:39.948507   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:39.948521   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:40.034026   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:40.034062   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:40.077798   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:40.077828   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:42.637999   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:42.654991   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:42.655063   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:42.695920   71168 cri.go:89] found id: ""
	I0401 19:33:42.695953   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.695964   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:42.695971   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:42.696030   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:42.737303   71168 cri.go:89] found id: ""
	I0401 19:33:42.737325   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.737333   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:42.737341   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:42.737393   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:42.777922   71168 cri.go:89] found id: ""
	I0401 19:33:42.777953   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.777965   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:42.777972   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:42.778036   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:42.818339   71168 cri.go:89] found id: ""
	I0401 19:33:42.818364   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.818372   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:42.818379   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:42.818435   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:42.859470   71168 cri.go:89] found id: ""
	I0401 19:33:42.859494   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.859502   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:42.859507   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:42.859556   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:42.901950   71168 cri.go:89] found id: ""
	I0401 19:33:42.901980   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.901989   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:42.901996   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:42.902063   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:42.947230   71168 cri.go:89] found id: ""
	I0401 19:33:42.947258   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.947268   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:42.947275   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:42.947351   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:43.827204   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:46.325749   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:43.098252   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:45.098316   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:42.607590   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:44.607666   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:47.107837   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:42.988997   71168 cri.go:89] found id: ""
	I0401 19:33:42.989022   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.989032   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:42.989049   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:42.989066   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:43.075323   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:43.075352   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:43.075363   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:43.164445   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:43.164479   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:43.215852   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:43.215885   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:43.271301   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:43.271334   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:45.786705   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:45.804389   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:45.804445   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:45.849838   71168 cri.go:89] found id: ""
	I0401 19:33:45.849872   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.849883   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:45.849891   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:45.849950   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:45.890603   71168 cri.go:89] found id: ""
	I0401 19:33:45.890625   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.890635   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:45.890642   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:45.890703   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:45.929189   71168 cri.go:89] found id: ""
	I0401 19:33:45.929210   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.929218   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:45.929223   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:45.929268   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:45.968266   71168 cri.go:89] found id: ""
	I0401 19:33:45.968292   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.968303   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:45.968310   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:45.968365   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:46.007114   71168 cri.go:89] found id: ""
	I0401 19:33:46.007135   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.007143   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:46.007148   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:46.007195   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:46.046067   71168 cri.go:89] found id: ""
	I0401 19:33:46.046088   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.046095   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:46.046101   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:46.046186   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:46.083604   71168 cri.go:89] found id: ""
	I0401 19:33:46.083630   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.083644   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:46.083651   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:46.083709   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:46.125435   71168 cri.go:89] found id: ""
	I0401 19:33:46.125457   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.125464   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:46.125472   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:46.125483   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:46.179060   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:46.179092   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:46.195139   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:46.195179   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:46.275876   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:46.275903   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:46.275914   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:46.365430   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:46.365465   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:48.825540   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:50.827204   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:47.099197   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:49.105260   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:51.597808   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:49.108344   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:51.607079   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:48.908390   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:48.924357   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:48.924416   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:48.969325   71168 cri.go:89] found id: ""
	I0401 19:33:48.969351   71168 logs.go:276] 0 containers: []
	W0401 19:33:48.969359   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:48.969364   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:48.969421   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:49.006702   71168 cri.go:89] found id: ""
	I0401 19:33:49.006724   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.006731   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:49.006736   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:49.006785   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:49.051196   71168 cri.go:89] found id: ""
	I0401 19:33:49.051229   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.051241   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:49.051260   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:49.051336   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:49.098123   71168 cri.go:89] found id: ""
	I0401 19:33:49.098150   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.098159   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:49.098166   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:49.098225   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:49.138203   71168 cri.go:89] found id: ""
	I0401 19:33:49.138232   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.138239   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:49.138244   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:49.138290   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:49.185441   71168 cri.go:89] found id: ""
	I0401 19:33:49.185465   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.185473   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:49.185478   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:49.185537   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:49.235649   71168 cri.go:89] found id: ""
	I0401 19:33:49.235670   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.235678   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:49.235683   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:49.235762   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:49.279638   71168 cri.go:89] found id: ""
	I0401 19:33:49.279662   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.279673   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:49.279683   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:49.279699   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:49.340761   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:49.340798   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:49.356552   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:49.356581   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:49.441110   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:49.441129   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:49.441140   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:49.523159   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:49.523189   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:52.067710   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:52.082986   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:52.083046   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:52.128510   71168 cri.go:89] found id: ""
	I0401 19:33:52.128531   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.128538   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:52.128543   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:52.128590   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:52.167767   71168 cri.go:89] found id: ""
	I0401 19:33:52.167792   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.167803   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:52.167810   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:52.167871   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:52.206384   71168 cri.go:89] found id: ""
	I0401 19:33:52.206416   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.206426   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:52.206433   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:52.206493   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:52.245277   71168 cri.go:89] found id: ""
	I0401 19:33:52.245301   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.245309   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:52.245318   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:52.245388   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:52.283925   71168 cri.go:89] found id: ""
	I0401 19:33:52.283954   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.283964   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:52.283971   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:52.284032   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:52.323944   71168 cri.go:89] found id: ""
	I0401 19:33:52.323970   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.323981   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:52.323988   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:52.324045   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:52.364853   71168 cri.go:89] found id: ""
	I0401 19:33:52.364882   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.364893   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:52.364901   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:52.364958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:52.404136   71168 cri.go:89] found id: ""
	I0401 19:33:52.404158   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.404165   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:52.404173   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:52.404184   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:52.459097   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:52.459129   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:52.474392   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:52.474417   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:52.551817   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:52.551843   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:52.551860   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:52.650710   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:52.650750   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:53.326050   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:55.327326   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:52.607062   70284 pod_ready.go:92] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.607082   70284 pod_ready.go:81] duration metric: took 43.516413537s for pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.607091   70284 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.628695   70284 pod_ready.go:92] pod "etcd-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.628725   70284 pod_ready.go:81] duration metric: took 21.625468ms for pod "etcd-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.628739   70284 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.643017   70284 pod_ready.go:92] pod "kube-apiserver-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.643044   70284 pod_ready.go:81] duration metric: took 14.296056ms for pod "kube-apiserver-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.643058   70284 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.649063   70284 pod_ready.go:92] pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.649091   70284 pod_ready.go:81] duration metric: took 6.024238ms for pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.649105   70284 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7c22p" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.654806   70284 pod_ready.go:92] pod "kube-proxy-7c22p" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.654829   70284 pod_ready.go:81] duration metric: took 5.709865ms for pod "kube-proxy-7c22p" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.654840   70284 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.997116   70284 pod_ready.go:92] pod "kube-scheduler-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.997139   70284 pod_ready.go:81] duration metric: took 342.291727ms for pod "kube-scheduler-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.997148   70284 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:55.004130   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:53.608064   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:56.106148   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
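	The pod_ready lines interleaved above come from other concurrently running test processes (different PIDs) polling pod status until the Ready condition turns True, as it does for coredns-7db6d8ff4d-gdml5 just above. An equivalent manual check, shown as a sketch that assumes the matching kubeconfig context is selected (kubectl wait is not what pod_ready.go uses, it is just a stand-in):

	# read a specific pod's Ready condition, roughly what pod_ready.go polls
	kubectl -n kube-system get pod metrics-server-57f55c9bc5-g7mg2 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# or block until it becomes Ready (4m timeout, matching the 4m0s waits in the log)
	kubectl -n kube-system wait --for=condition=Ready \
	  pod/metrics-server-57f55c9bc5-g7mg2 --timeout=4m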
	I0401 19:33:55.205689   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:55.222840   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:55.222901   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:55.263783   71168 cri.go:89] found id: ""
	I0401 19:33:55.263813   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.263820   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:55.263828   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:55.263883   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:55.300788   71168 cri.go:89] found id: ""
	I0401 19:33:55.300818   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.300826   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:55.300834   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:55.300888   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:55.343189   71168 cri.go:89] found id: ""
	I0401 19:33:55.343215   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.343223   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:55.343229   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:55.343286   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:55.387560   71168 cri.go:89] found id: ""
	I0401 19:33:55.387587   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.387597   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:55.387604   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:55.387663   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:55.428078   71168 cri.go:89] found id: ""
	I0401 19:33:55.428103   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.428112   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:55.428119   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:55.428181   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:55.472696   71168 cri.go:89] found id: ""
	I0401 19:33:55.472722   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.472734   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:55.472741   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:55.472797   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:55.518071   71168 cri.go:89] found id: ""
	I0401 19:33:55.518115   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.518126   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:55.518136   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:55.518201   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:55.555697   71168 cri.go:89] found id: ""
	I0401 19:33:55.555717   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.555724   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:55.555732   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:55.555747   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:55.637462   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:55.637492   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:55.682353   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:55.682380   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:55.735451   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:55.735484   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:55.750928   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:55.750954   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:55.824610   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:57.328228   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:59.826213   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:57.005395   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:59.505575   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:01.506107   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:58.106643   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:00.606864   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:58.325742   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:58.341022   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:58.341092   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:58.380910   71168 cri.go:89] found id: ""
	I0401 19:33:58.380932   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.380940   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:58.380946   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:58.380990   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:58.420387   71168 cri.go:89] found id: ""
	I0401 19:33:58.420413   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.420425   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:58.420431   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:58.420479   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:58.460470   71168 cri.go:89] found id: ""
	I0401 19:33:58.460501   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.460511   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:58.460520   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:58.460580   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:58.496844   71168 cri.go:89] found id: ""
	I0401 19:33:58.496867   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.496875   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:58.496881   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:58.496930   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:58.535883   71168 cri.go:89] found id: ""
	I0401 19:33:58.535905   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.535915   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:58.535922   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:58.535979   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:58.576833   71168 cri.go:89] found id: ""
	I0401 19:33:58.576855   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.576863   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:58.576869   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:58.576913   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:58.615057   71168 cri.go:89] found id: ""
	I0401 19:33:58.615081   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.615091   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:58.615098   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:58.615156   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:58.657982   71168 cri.go:89] found id: ""
	I0401 19:33:58.658008   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.658018   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:58.658028   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:58.658045   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:58.734579   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:58.734601   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:58.734616   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:58.821779   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:58.821819   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:58.894470   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:58.894506   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:58.949854   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:58.949884   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:01.465820   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:01.481929   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:01.481984   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:01.525371   71168 cri.go:89] found id: ""
	I0401 19:34:01.525397   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.525407   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:01.525415   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:01.525473   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:01.571106   71168 cri.go:89] found id: ""
	I0401 19:34:01.571136   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.571146   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:01.571153   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:01.571214   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:01.617666   71168 cri.go:89] found id: ""
	I0401 19:34:01.617705   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.617717   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:01.617725   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:01.617787   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:01.655286   71168 cri.go:89] found id: ""
	I0401 19:34:01.655311   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.655321   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:01.655328   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:01.655396   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:01.694911   71168 cri.go:89] found id: ""
	I0401 19:34:01.694940   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.694950   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:01.694957   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:01.695040   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:01.734970   71168 cri.go:89] found id: ""
	I0401 19:34:01.734996   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.735007   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:01.735014   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:01.735071   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:01.778846   71168 cri.go:89] found id: ""
	I0401 19:34:01.778871   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.778879   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:01.778885   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:01.778958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:01.821934   71168 cri.go:89] found id: ""
	I0401 19:34:01.821964   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.821975   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:01.821986   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:01.822002   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:01.880123   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:01.880155   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:01.895178   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:01.895200   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:01.972248   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:01.972275   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:01.972290   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:02.056663   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:02.056694   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:02.325323   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:04.326474   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:06.327583   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:04.004061   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:06.004176   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:02.608516   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:05.108477   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:04.603745   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:04.619269   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:04.619344   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:04.658089   71168 cri.go:89] found id: ""
	I0401 19:34:04.658111   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.658118   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:04.658123   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:04.658168   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:04.700596   71168 cri.go:89] found id: ""
	I0401 19:34:04.700622   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.700634   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:04.700641   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:04.700708   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:04.744960   71168 cri.go:89] found id: ""
	I0401 19:34:04.744990   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.744999   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:04.745004   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:04.745052   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:04.788239   71168 cri.go:89] found id: ""
	I0401 19:34:04.788264   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.788272   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:04.788278   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:04.788343   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:04.830788   71168 cri.go:89] found id: ""
	I0401 19:34:04.830812   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.830850   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:04.830859   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:04.830917   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:04.889784   71168 cri.go:89] found id: ""
	I0401 19:34:04.889815   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.889826   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:04.889834   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:04.889902   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:04.931969   71168 cri.go:89] found id: ""
	I0401 19:34:04.931996   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.932004   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:04.932010   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:04.932058   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:04.975668   71168 cri.go:89] found id: ""
	I0401 19:34:04.975689   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.975696   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:04.975704   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:04.975715   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:05.032212   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:05.032246   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:05.047900   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:05.047924   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:05.132371   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:05.132394   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:05.132408   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:05.222591   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:05.222623   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:07.767686   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:07.784473   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:07.784542   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:07.828460   71168 cri.go:89] found id: ""
	I0401 19:34:07.828487   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.828498   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:07.828505   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:07.828564   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:07.872760   71168 cri.go:89] found id: ""
	I0401 19:34:07.872786   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.872797   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:07.872804   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:07.872862   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:07.914241   71168 cri.go:89] found id: ""
	I0401 19:34:07.914263   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.914271   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:07.914276   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:07.914340   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:07.953757   71168 cri.go:89] found id: ""
	I0401 19:34:07.953784   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.953795   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:07.953803   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:07.953869   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:08.825113   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:10.827081   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:08.504038   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:10.508973   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:07.608037   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:10.110321   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:07.994382   71168 cri.go:89] found id: ""
	I0401 19:34:07.994401   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.994409   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:07.994414   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:07.994459   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:08.038178   71168 cri.go:89] found id: ""
	I0401 19:34:08.038202   71168 logs.go:276] 0 containers: []
	W0401 19:34:08.038213   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:08.038220   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:08.038282   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:08.077532   71168 cri.go:89] found id: ""
	I0401 19:34:08.077562   71168 logs.go:276] 0 containers: []
	W0401 19:34:08.077573   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:08.077580   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:08.077657   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:08.119825   71168 cri.go:89] found id: ""
	I0401 19:34:08.119845   71168 logs.go:276] 0 containers: []
	W0401 19:34:08.119855   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:08.119865   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:08.119878   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:08.207688   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:08.207724   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:08.253050   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:08.253085   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:08.309119   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:08.309152   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:08.325675   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:08.325704   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:08.410877   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:10.911211   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:10.925590   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:10.925657   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:10.964180   71168 cri.go:89] found id: ""
	I0401 19:34:10.964205   71168 logs.go:276] 0 containers: []
	W0401 19:34:10.964216   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:10.964224   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:10.964273   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:11.004492   71168 cri.go:89] found id: ""
	I0401 19:34:11.004515   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.004526   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:11.004533   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:11.004588   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:11.048771   71168 cri.go:89] found id: ""
	I0401 19:34:11.048792   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.048804   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:11.048810   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:11.048861   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:11.084956   71168 cri.go:89] found id: ""
	I0401 19:34:11.084982   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.084992   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:11.084999   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:11.085043   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:11.128194   71168 cri.go:89] found id: ""
	I0401 19:34:11.128218   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.128225   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:11.128230   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:11.128274   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:11.169884   71168 cri.go:89] found id: ""
	I0401 19:34:11.169908   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.169918   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:11.169925   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:11.169988   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:11.213032   71168 cri.go:89] found id: ""
	I0401 19:34:11.213066   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.213077   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:11.213084   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:11.213149   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:11.258391   71168 cri.go:89] found id: ""
	I0401 19:34:11.258414   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.258422   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:11.258429   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:11.258445   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:11.341297   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:11.341328   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:11.388628   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:11.388659   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:11.442300   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:11.442326   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:11.457531   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:11.457561   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:11.561556   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:13.324598   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:15.325464   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:13.005005   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:15.505216   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:12.607201   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:14.607580   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:17.107659   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:14.062670   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:14.077384   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:14.077449   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:14.119421   71168 cri.go:89] found id: ""
	I0401 19:34:14.119444   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.119455   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:14.119462   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:14.119518   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:14.158762   71168 cri.go:89] found id: ""
	I0401 19:34:14.158783   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.158798   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:14.158805   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:14.158867   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:14.197024   71168 cri.go:89] found id: ""
	I0401 19:34:14.197052   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.197060   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:14.197065   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:14.197115   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:14.235976   71168 cri.go:89] found id: ""
	I0401 19:34:14.236004   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.236015   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:14.236021   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:14.236085   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:14.280596   71168 cri.go:89] found id: ""
	I0401 19:34:14.280623   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.280635   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:14.280642   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:14.280703   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:14.322196   71168 cri.go:89] found id: ""
	I0401 19:34:14.322219   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.322230   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:14.322239   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:14.322298   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:14.364572   71168 cri.go:89] found id: ""
	I0401 19:34:14.364596   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.364607   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:14.364615   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:14.364662   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:14.406043   71168 cri.go:89] found id: ""
	I0401 19:34:14.406066   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.406072   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:14.406082   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:14.406097   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:14.461841   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:14.461870   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:14.479960   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:14.479990   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:14.557039   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:14.557058   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:14.557070   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:14.641945   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:14.641975   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:17.192681   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:17.207913   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:17.207964   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:17.245596   71168 cri.go:89] found id: ""
	I0401 19:34:17.245618   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.245625   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:17.245630   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:17.245701   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:17.310845   71168 cri.go:89] found id: ""
	I0401 19:34:17.310875   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.310887   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:17.310894   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:17.310958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:17.367726   71168 cri.go:89] found id: ""
	I0401 19:34:17.367753   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.367764   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:17.367770   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:17.367833   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:17.410807   71168 cri.go:89] found id: ""
	I0401 19:34:17.410834   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.410842   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:17.410847   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:17.410892   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:17.448242   71168 cri.go:89] found id: ""
	I0401 19:34:17.448268   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.448278   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:17.448285   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:17.448337   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:17.486552   71168 cri.go:89] found id: ""
	I0401 19:34:17.486580   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.486590   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:17.486595   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:17.486644   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:17.529947   71168 cri.go:89] found id: ""
	I0401 19:34:17.529975   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.529986   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:17.529993   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:17.530052   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:17.571617   71168 cri.go:89] found id: ""
	I0401 19:34:17.571640   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.571648   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:17.571656   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:17.571673   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:17.627326   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:17.627354   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:17.643409   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:17.643431   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:17.723772   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:17.723798   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:17.723811   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:17.803383   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:17.803414   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:17.325836   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:19.328447   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:17.509486   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:20.004341   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:19.606840   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:21.607646   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:20.348949   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:20.363311   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:20.363385   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:20.401558   71168 cri.go:89] found id: ""
	I0401 19:34:20.401585   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.401595   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:20.401603   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:20.401686   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:20.445979   71168 cri.go:89] found id: ""
	I0401 19:34:20.446004   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.446011   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:20.446016   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:20.446060   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:20.487819   71168 cri.go:89] found id: ""
	I0401 19:34:20.487844   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.487854   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:20.487862   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:20.487921   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:20.532107   71168 cri.go:89] found id: ""
	I0401 19:34:20.532131   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.532154   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:20.532186   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:20.532247   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:20.577727   71168 cri.go:89] found id: ""
	I0401 19:34:20.577749   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.577756   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:20.577762   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:20.577841   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:20.616774   71168 cri.go:89] found id: ""
	I0401 19:34:20.616805   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.616816   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:20.616824   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:20.616887   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:20.656122   71168 cri.go:89] found id: ""
	I0401 19:34:20.656150   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.656160   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:20.656167   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:20.656226   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:20.701249   71168 cri.go:89] found id: ""
	I0401 19:34:20.701274   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.701285   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:20.701295   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:20.701310   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:20.746979   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:20.747003   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:20.799197   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:20.799226   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:20.815771   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:20.815808   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:20.895179   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:20.895202   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:20.895218   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:21.826671   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:24.325896   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:26.326569   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:22.503727   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:24.503877   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:26.506643   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:24.107702   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:26.607285   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:23.481911   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:23.496820   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:23.496889   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:23.538292   71168 cri.go:89] found id: ""
	I0401 19:34:23.538314   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.538322   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:23.538327   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:23.538372   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:23.579171   71168 cri.go:89] found id: ""
	I0401 19:34:23.579200   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.579209   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:23.579214   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:23.579269   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:23.620377   71168 cri.go:89] found id: ""
	I0401 19:34:23.620399   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.620410   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:23.620417   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:23.620477   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:23.663309   71168 cri.go:89] found id: ""
	I0401 19:34:23.663329   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.663337   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:23.663342   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:23.663392   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:23.702724   71168 cri.go:89] found id: ""
	I0401 19:34:23.702755   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.702772   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:23.702778   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:23.702836   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:23.742797   71168 cri.go:89] found id: ""
	I0401 19:34:23.742827   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.742837   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:23.742845   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:23.742913   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:23.781299   71168 cri.go:89] found id: ""
	I0401 19:34:23.781350   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.781367   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:23.781375   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:23.781440   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:23.828244   71168 cri.go:89] found id: ""
	I0401 19:34:23.828270   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.828277   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:23.828284   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:23.828298   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:23.914758   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:23.914782   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:23.914797   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:23.993300   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:23.993332   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:24.037388   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:24.037424   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:24.090157   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:24.090198   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:26.609062   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:26.624241   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:26.624309   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:26.665813   71168 cri.go:89] found id: ""
	I0401 19:34:26.665840   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.665848   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:26.665857   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:26.665917   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:26.709571   71168 cri.go:89] found id: ""
	I0401 19:34:26.709593   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.709600   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:26.709606   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:26.709680   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:26.757286   71168 cri.go:89] found id: ""
	I0401 19:34:26.757309   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.757319   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:26.757325   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:26.757386   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:26.795715   71168 cri.go:89] found id: ""
	I0401 19:34:26.795768   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.795781   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:26.795788   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:26.795839   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:26.835985   71168 cri.go:89] found id: ""
	I0401 19:34:26.836011   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.836022   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:26.836029   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:26.836094   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:26.878890   71168 cri.go:89] found id: ""
	I0401 19:34:26.878918   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.878929   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:26.878936   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:26.878991   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:26.920161   71168 cri.go:89] found id: ""
	I0401 19:34:26.920189   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.920199   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:26.920206   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:26.920262   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:26.961597   71168 cri.go:89] found id: ""
	I0401 19:34:26.961626   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.961637   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:26.961663   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:26.961679   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:27.019814   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:27.019847   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:27.035535   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:27.035564   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:27.111755   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:27.111776   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:27.111790   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:27.194932   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:27.194964   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:28.827702   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:31.325488   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:29.005830   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:31.007294   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:29.107097   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:31.109807   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:29.738592   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:29.752851   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:29.752913   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:29.791808   71168 cri.go:89] found id: ""
	I0401 19:34:29.791863   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.791875   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:29.791883   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:29.791944   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:29.836113   71168 cri.go:89] found id: ""
	I0401 19:34:29.836132   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.836139   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:29.836144   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:29.836200   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:29.879005   71168 cri.go:89] found id: ""
	I0401 19:34:29.879039   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.879050   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:29.879059   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:29.879122   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:29.919349   71168 cri.go:89] found id: ""
	I0401 19:34:29.919383   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.919394   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:29.919400   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:29.919454   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:29.957252   71168 cri.go:89] found id: ""
	I0401 19:34:29.957275   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.957287   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:29.957294   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:29.957354   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:30.003220   71168 cri.go:89] found id: ""
	I0401 19:34:30.003245   71168 logs.go:276] 0 containers: []
	W0401 19:34:30.003256   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:30.003263   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:30.003311   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:30.043873   71168 cri.go:89] found id: ""
	I0401 19:34:30.043900   71168 logs.go:276] 0 containers: []
	W0401 19:34:30.043921   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:30.043928   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:30.043989   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:30.082215   71168 cri.go:89] found id: ""
	I0401 19:34:30.082242   71168 logs.go:276] 0 containers: []
	W0401 19:34:30.082253   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:30.082263   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:30.082277   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:30.098676   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:30.098701   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:30.180857   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:30.180879   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:30.180897   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:30.269982   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:30.270016   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:30.317933   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:30.317967   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:32.874312   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:32.888687   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:32.888742   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:32.926222   71168 cri.go:89] found id: ""
	I0401 19:34:32.926244   71168 logs.go:276] 0 containers: []
	W0401 19:34:32.926252   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:32.926257   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:32.926307   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:32.964838   71168 cri.go:89] found id: ""
	I0401 19:34:32.964858   71168 logs.go:276] 0 containers: []
	W0401 19:34:32.964865   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:32.964870   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:32.964914   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:33.327670   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:35.826387   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:33.504338   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:36.005240   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:33.606596   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:35.607014   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:33.006903   71168 cri.go:89] found id: ""
	I0401 19:34:33.006920   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.006927   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:33.006933   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:33.006983   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:33.045663   71168 cri.go:89] found id: ""
	I0401 19:34:33.045691   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.045701   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:33.045709   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:33.045770   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:33.086262   71168 cri.go:89] found id: ""
	I0401 19:34:33.086290   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.086298   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:33.086303   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:33.086368   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:33.128302   71168 cri.go:89] found id: ""
	I0401 19:34:33.128327   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.128335   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:33.128341   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:33.128402   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:33.171155   71168 cri.go:89] found id: ""
	I0401 19:34:33.171189   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.171200   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:33.171207   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:33.171270   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:33.210793   71168 cri.go:89] found id: ""
	I0401 19:34:33.210820   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.210838   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:33.210848   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:33.210870   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:33.295035   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:33.295072   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:33.345381   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:33.345417   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:33.401082   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:33.401120   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:33.417029   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:33.417055   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:33.497027   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:35.997632   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:36.013106   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:36.013161   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:36.053013   71168 cri.go:89] found id: ""
	I0401 19:34:36.053040   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.053050   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:36.053059   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:36.053116   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:36.092268   71168 cri.go:89] found id: ""
	I0401 19:34:36.092297   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.092308   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:36.092315   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:36.092389   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:36.131347   71168 cri.go:89] found id: ""
	I0401 19:34:36.131391   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.131402   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:36.131409   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:36.131468   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:36.171402   71168 cri.go:89] found id: ""
	I0401 19:34:36.171432   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.171443   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:36.171449   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:36.171511   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:36.211239   71168 cri.go:89] found id: ""
	I0401 19:34:36.211272   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.211283   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:36.211290   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:36.211354   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:36.251246   71168 cri.go:89] found id: ""
	I0401 19:34:36.251275   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.251287   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:36.251294   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:36.251354   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:36.293140   71168 cri.go:89] found id: ""
	I0401 19:34:36.293162   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.293169   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:36.293174   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:36.293231   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:36.330281   71168 cri.go:89] found id: ""
	I0401 19:34:36.330308   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.330318   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:36.330328   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:36.330342   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:36.421753   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:36.421790   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:36.467555   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:36.467581   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:36.524747   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:36.524778   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:36.540946   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:36.540976   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:36.622452   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:38.326341   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:40.327267   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:38.503641   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:40.504555   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:38.107732   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:40.608535   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:39.122969   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:39.139092   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:39.139157   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:39.177337   71168 cri.go:89] found id: ""
	I0401 19:34:39.177368   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.177379   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:39.177387   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:39.177449   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:39.216471   71168 cri.go:89] found id: ""
	I0401 19:34:39.216498   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.216507   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:39.216512   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:39.216558   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:39.255526   71168 cri.go:89] found id: ""
	I0401 19:34:39.255550   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.255557   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:39.255563   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:39.255623   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:39.294682   71168 cri.go:89] found id: ""
	I0401 19:34:39.294711   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.294723   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:39.294735   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:39.294798   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:39.337416   71168 cri.go:89] found id: ""
	I0401 19:34:39.337437   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.337444   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:39.337449   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:39.337510   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:39.384560   71168 cri.go:89] found id: ""
	I0401 19:34:39.384586   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.384598   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:39.384608   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:39.384671   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:39.421459   71168 cri.go:89] found id: ""
	I0401 19:34:39.421480   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.421488   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:39.421493   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:39.421540   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:39.460221   71168 cri.go:89] found id: ""
	I0401 19:34:39.460246   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.460256   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:39.460264   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:39.460275   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:39.543800   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:39.543835   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:39.591012   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:39.591038   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:39.645994   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:39.646025   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:39.662223   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:39.662250   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:39.741574   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:42.242541   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:42.256933   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:42.257006   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:42.294268   71168 cri.go:89] found id: ""
	I0401 19:34:42.294297   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.294308   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:42.294315   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:42.294370   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:42.331978   71168 cri.go:89] found id: ""
	I0401 19:34:42.331999   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.332005   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:42.332013   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:42.332078   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:42.369858   71168 cri.go:89] found id: ""
	I0401 19:34:42.369885   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.369895   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:42.369903   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:42.369989   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:42.412688   71168 cri.go:89] found id: ""
	I0401 19:34:42.412708   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.412715   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:42.412720   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:42.412776   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:42.449180   71168 cri.go:89] found id: ""
	I0401 19:34:42.449209   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.449217   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:42.449225   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:42.449283   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:42.488582   71168 cri.go:89] found id: ""
	I0401 19:34:42.488606   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.488613   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:42.488618   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:42.488665   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:42.527883   71168 cri.go:89] found id: ""
	I0401 19:34:42.527915   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.527924   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:42.527931   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:42.527993   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:42.564372   71168 cri.go:89] found id: ""
	I0401 19:34:42.564394   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.564401   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:42.564408   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:42.564419   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:42.646940   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:42.646974   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:42.689323   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:42.689354   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:42.744996   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:42.745024   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:42.761404   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:42.761429   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:42.836643   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:42.825895   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:45.325856   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:42.504642   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:45.004315   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:43.110114   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:45.607093   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:45.337809   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:45.352936   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:45.353029   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:45.395073   71168 cri.go:89] found id: ""
	I0401 19:34:45.395098   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.395106   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:45.395112   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:45.395160   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:45.433537   71168 cri.go:89] found id: ""
	I0401 19:34:45.433567   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.433578   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:45.433586   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:45.433658   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:45.477108   71168 cri.go:89] found id: ""
	I0401 19:34:45.477138   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.477150   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:45.477157   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:45.477217   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:45.520350   71168 cri.go:89] found id: ""
	I0401 19:34:45.520389   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.520401   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:45.520408   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:45.520466   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:45.562871   71168 cri.go:89] found id: ""
	I0401 19:34:45.562901   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.562911   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:45.562918   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:45.562988   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:45.619214   71168 cri.go:89] found id: ""
	I0401 19:34:45.619237   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.619248   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:45.619255   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:45.619317   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:45.664361   71168 cri.go:89] found id: ""
	I0401 19:34:45.664387   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.664398   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:45.664405   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:45.664463   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:45.701087   71168 cri.go:89] found id: ""
	I0401 19:34:45.701110   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.701120   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:45.701128   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:45.701139   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:45.716839   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:45.716863   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:45.794609   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:45.794630   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:45.794642   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:45.883428   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:45.883464   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:45.934342   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:45.934374   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:47.825597   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:50.326528   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:47.505036   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:49.505287   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:51.505884   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:47.609038   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:50.106705   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:52.107802   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:48.492128   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:48.508674   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:48.508746   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:48.549522   71168 cri.go:89] found id: ""
	I0401 19:34:48.549545   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.549555   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:48.549561   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:48.549619   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:48.587014   71168 cri.go:89] found id: ""
	I0401 19:34:48.587037   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.587045   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:48.587051   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:48.587108   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:48.629591   71168 cri.go:89] found id: ""
	I0401 19:34:48.629620   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.629630   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:48.629636   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:48.629707   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:48.669335   71168 cri.go:89] found id: ""
	I0401 19:34:48.669363   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.669383   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:48.669400   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:48.669455   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:48.708322   71168 cri.go:89] found id: ""
	I0401 19:34:48.708350   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.708356   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:48.708362   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:48.708407   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:48.750680   71168 cri.go:89] found id: ""
	I0401 19:34:48.750708   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.750718   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:48.750726   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:48.750791   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:48.790946   71168 cri.go:89] found id: ""
	I0401 19:34:48.790974   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.790984   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:48.790998   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:48.791055   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:48.828849   71168 cri.go:89] found id: ""
	I0401 19:34:48.828871   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.828880   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:48.828889   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:48.828904   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:48.909182   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:48.909212   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:48.954285   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:48.954315   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:49.010340   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:49.010372   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:49.026493   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:49.026516   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:49.099662   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:51.599905   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:51.618094   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:51.618168   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:51.657003   71168 cri.go:89] found id: ""
	I0401 19:34:51.657028   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.657038   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:51.657046   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:51.657104   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:51.696415   71168 cri.go:89] found id: ""
	I0401 19:34:51.696441   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.696451   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:51.696456   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:51.696515   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:51.734416   71168 cri.go:89] found id: ""
	I0401 19:34:51.734445   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.734457   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:51.734465   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:51.734523   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:51.774895   71168 cri.go:89] found id: ""
	I0401 19:34:51.774918   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.774925   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:51.774931   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:51.774980   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:51.814602   71168 cri.go:89] found id: ""
	I0401 19:34:51.814623   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.814631   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:51.814637   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:51.814687   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:51.856035   71168 cri.go:89] found id: ""
	I0401 19:34:51.856061   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.856071   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:51.856078   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:51.856132   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:51.897415   71168 cri.go:89] found id: ""
	I0401 19:34:51.897440   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.897451   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:51.897457   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:51.897516   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:51.937406   71168 cri.go:89] found id: ""
	I0401 19:34:51.937428   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.937436   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:51.937443   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:51.937456   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:51.981508   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:51.981535   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:52.039956   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:52.039995   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:52.066403   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:52.066429   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:52.172509   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:52.172530   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:52.172541   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:52.827950   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:55.331369   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:54.004625   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:56.503197   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:54.607359   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:57.108257   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:54.761459   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:54.776972   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:54.777030   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:54.822945   71168 cri.go:89] found id: ""
	I0401 19:34:54.822983   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.822996   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:54.823004   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:54.823066   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:54.861602   71168 cri.go:89] found id: ""
	I0401 19:34:54.861629   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.861639   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:54.861662   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:54.861727   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:54.901283   71168 cri.go:89] found id: ""
	I0401 19:34:54.901309   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.901319   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:54.901327   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:54.901385   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:54.940071   71168 cri.go:89] found id: ""
	I0401 19:34:54.940103   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.940114   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:54.940121   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:54.940179   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:54.978447   71168 cri.go:89] found id: ""
	I0401 19:34:54.978474   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.978485   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:54.978493   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:54.978563   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:55.021786   71168 cri.go:89] found id: ""
	I0401 19:34:55.021810   71168 logs.go:276] 0 containers: []
	W0401 19:34:55.021819   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:55.021827   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:55.021886   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:55.059861   71168 cri.go:89] found id: ""
	I0401 19:34:55.059889   71168 logs.go:276] 0 containers: []
	W0401 19:34:55.059899   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:55.059907   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:55.059963   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:55.104484   71168 cri.go:89] found id: ""
	I0401 19:34:55.104516   71168 logs.go:276] 0 containers: []
	W0401 19:34:55.104527   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:55.104537   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:55.104551   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:55.152197   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:55.152221   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:55.203900   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:55.203942   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:55.221553   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:55.221580   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:55.299651   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:55.299668   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:55.299680   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:57.877382   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:57.899186   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:57.899260   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:57.948146   71168 cri.go:89] found id: ""
	I0401 19:34:57.948182   71168 logs.go:276] 0 containers: []
	W0401 19:34:57.948192   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:57.948203   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:57.948270   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:57.826282   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:59.826598   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:58.504492   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:01.003480   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:59.607646   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:02.107162   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:58.017121   71168 cri.go:89] found id: ""
	I0401 19:34:58.017150   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.017161   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:58.017168   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:58.017230   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:58.073881   71168 cri.go:89] found id: ""
	I0401 19:34:58.073905   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.073916   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:58.073923   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:58.073979   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:58.115410   71168 cri.go:89] found id: ""
	I0401 19:34:58.115435   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.115445   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:58.115452   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:58.115512   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:58.155452   71168 cri.go:89] found id: ""
	I0401 19:34:58.155481   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.155492   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:58.155500   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:58.155562   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:58.197335   71168 cri.go:89] found id: ""
	I0401 19:34:58.197376   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.197397   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:58.197407   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:58.197469   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:58.239782   71168 cri.go:89] found id: ""
	I0401 19:34:58.239808   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.239815   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:58.239820   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:58.239870   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:58.280936   71168 cri.go:89] found id: ""
	I0401 19:34:58.280961   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.280971   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:58.280982   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:58.280998   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:58.368357   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:58.368401   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:58.415104   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:58.415132   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:58.474719   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:58.474749   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:58.491004   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:58.491031   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:58.573999   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:01.074865   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:01.091751   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:01.091822   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:01.140053   71168 cri.go:89] found id: ""
	I0401 19:35:01.140079   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.140089   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:01.140096   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:01.140154   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:01.184046   71168 cri.go:89] found id: ""
	I0401 19:35:01.184078   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.184089   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:01.184096   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:01.184161   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:01.225962   71168 cri.go:89] found id: ""
	I0401 19:35:01.225989   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.225999   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:01.226006   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:01.226072   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:01.267212   71168 cri.go:89] found id: ""
	I0401 19:35:01.267234   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.267242   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:01.267247   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:01.267308   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:01.307039   71168 cri.go:89] found id: ""
	I0401 19:35:01.307066   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.307074   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:01.307080   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:01.307132   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:01.347856   71168 cri.go:89] found id: ""
	I0401 19:35:01.347886   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.347898   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:01.347905   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:01.347962   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:01.385893   71168 cri.go:89] found id: ""
	I0401 19:35:01.385923   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.385933   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:01.385940   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:01.385999   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:01.422983   71168 cri.go:89] found id: ""
	I0401 19:35:01.423012   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.423022   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:01.423033   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:01.423048   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:01.469842   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:01.469875   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:01.527536   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:01.527566   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:01.542332   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:01.542357   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:01.617252   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:01.617270   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:01.617284   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:02.325502   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:04.326603   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:06.328115   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:03.005979   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:05.504470   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:04.107681   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:06.607619   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:04.195171   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:04.211963   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:04.212015   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:04.252298   71168 cri.go:89] found id: ""
	I0401 19:35:04.252324   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.252334   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:04.252342   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:04.252396   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:04.299619   71168 cri.go:89] found id: ""
	I0401 19:35:04.299649   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.299659   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:04.299667   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:04.299725   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:04.347386   71168 cri.go:89] found id: ""
	I0401 19:35:04.347409   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.347416   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:04.347426   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:04.347473   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:04.385902   71168 cri.go:89] found id: ""
	I0401 19:35:04.385929   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.385937   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:04.385943   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:04.385993   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:04.425235   71168 cri.go:89] found id: ""
	I0401 19:35:04.425258   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.425266   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:04.425271   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:04.425325   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:04.463849   71168 cri.go:89] found id: ""
	I0401 19:35:04.463881   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.463891   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:04.463899   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:04.463974   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:04.501983   71168 cri.go:89] found id: ""
	I0401 19:35:04.502003   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.502010   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:04.502016   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:04.502072   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:04.544082   71168 cri.go:89] found id: ""
	I0401 19:35:04.544103   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.544113   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:04.544124   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:04.544141   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:04.600545   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:04.600578   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:04.617049   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:04.617075   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:04.696927   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:04.696945   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:04.696957   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:04.780024   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:04.780056   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:07.323161   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:07.339368   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:07.339432   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:07.379407   71168 cri.go:89] found id: ""
	I0401 19:35:07.379429   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.379440   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:07.379452   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:07.379497   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:07.418700   71168 cri.go:89] found id: ""
	I0401 19:35:07.418728   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.418737   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:07.418743   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:07.418788   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:07.457580   71168 cri.go:89] found id: ""
	I0401 19:35:07.457606   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.457617   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:07.457624   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:07.457696   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:07.498211   71168 cri.go:89] found id: ""
	I0401 19:35:07.498240   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.498249   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:07.498256   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:07.498318   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:07.539659   71168 cri.go:89] found id: ""
	I0401 19:35:07.539681   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.539692   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:07.539699   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:07.539759   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:07.577414   71168 cri.go:89] found id: ""
	I0401 19:35:07.577440   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.577450   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:07.577456   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:07.577520   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:07.623318   71168 cri.go:89] found id: ""
	I0401 19:35:07.623340   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.623352   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:07.623358   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:07.623416   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:07.664791   71168 cri.go:89] found id: ""
	I0401 19:35:07.664823   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.664834   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:07.664842   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:07.664854   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:07.722158   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:07.722186   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:07.737838   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:07.737876   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:07.813694   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:07.813717   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:07.813728   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:07.899698   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:07.899740   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:08.825778   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:10.825935   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:07.505933   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:10.003529   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:09.107076   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:11.108917   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:10.446184   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:10.460860   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:10.460927   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:10.505656   71168 cri.go:89] found id: ""
	I0401 19:35:10.505685   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.505692   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:10.505698   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:10.505742   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:10.547771   71168 cri.go:89] found id: ""
	I0401 19:35:10.547796   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.547814   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:10.547820   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:10.547876   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:10.584625   71168 cri.go:89] found id: ""
	I0401 19:35:10.584652   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.584664   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:10.584671   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:10.584737   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:10.625512   71168 cri.go:89] found id: ""
	I0401 19:35:10.625541   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.625552   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:10.625559   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:10.625618   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:10.664905   71168 cri.go:89] found id: ""
	I0401 19:35:10.664936   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.664949   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:10.664955   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:10.665015   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:10.703043   71168 cri.go:89] found id: ""
	I0401 19:35:10.703071   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.703082   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:10.703090   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:10.703149   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:10.747750   71168 cri.go:89] found id: ""
	I0401 19:35:10.747777   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.747790   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:10.747796   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:10.747841   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:10.792944   71168 cri.go:89] found id: ""
	I0401 19:35:10.792970   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.792980   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:10.792989   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:10.793004   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:10.854029   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:10.854058   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:10.868968   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:10.868991   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:10.940537   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:10.940564   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:10.940579   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:11.018201   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:11.018231   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:12.826117   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:14.826387   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:12.003995   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:14.503258   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:16.504686   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:13.608777   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:16.108992   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:13.562139   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:13.579370   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:13.579435   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:13.620811   71168 cri.go:89] found id: ""
	I0401 19:35:13.620838   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.620847   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:13.620859   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:13.620919   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:13.661377   71168 cri.go:89] found id: ""
	I0401 19:35:13.661408   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.661419   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:13.661427   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:13.661489   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:13.702413   71168 cri.go:89] found id: ""
	I0401 19:35:13.702436   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.702445   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:13.702453   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:13.702519   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:13.748760   71168 cri.go:89] found id: ""
	I0401 19:35:13.748788   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.748796   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:13.748803   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:13.748874   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:13.795438   71168 cri.go:89] found id: ""
	I0401 19:35:13.795460   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.795472   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:13.795479   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:13.795537   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:13.835572   71168 cri.go:89] found id: ""
	I0401 19:35:13.835601   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.835612   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:13.835619   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:13.835677   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:13.874301   71168 cri.go:89] found id: ""
	I0401 19:35:13.874327   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.874336   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:13.874342   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:13.874387   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:13.914847   71168 cri.go:89] found id: ""
	I0401 19:35:13.914876   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.914883   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:13.914891   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:13.914904   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:13.929329   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:13.929355   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:14.004332   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:14.004358   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:14.004373   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:14.084901   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:14.084935   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:14.134471   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:14.134500   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:16.693432   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:16.710258   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:16.710332   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:16.757213   71168 cri.go:89] found id: ""
	I0401 19:35:16.757243   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.757254   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:16.757261   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:16.757320   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:16.797134   71168 cri.go:89] found id: ""
	I0401 19:35:16.797174   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.797182   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:16.797188   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:16.797233   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:16.839502   71168 cri.go:89] found id: ""
	I0401 19:35:16.839530   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.839541   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:16.839549   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:16.839609   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:16.881380   71168 cri.go:89] found id: ""
	I0401 19:35:16.881406   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.881413   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:16.881419   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:16.881472   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:16.922968   71168 cri.go:89] found id: ""
	I0401 19:35:16.922991   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.923002   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:16.923009   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:16.923069   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:16.961262   71168 cri.go:89] found id: ""
	I0401 19:35:16.961290   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.961301   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:16.961310   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:16.961369   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:16.996901   71168 cri.go:89] found id: ""
	I0401 19:35:16.996929   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.996940   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:16.996947   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:16.997004   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:17.038447   71168 cri.go:89] found id: ""
	I0401 19:35:17.038473   71168 logs.go:276] 0 containers: []
	W0401 19:35:17.038481   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:17.038489   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:17.038500   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:17.079979   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:17.080013   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:17.136973   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:17.137010   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:17.153083   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:17.153108   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:17.232055   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:17.232078   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:17.232096   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:17.326246   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:19.326903   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:20.818889   70687 pod_ready.go:81] duration metric: took 4m0.000381983s for pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace to be "Ready" ...
	E0401 19:35:20.818918   70687 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace to be "Ready" (will not retry!)
	I0401 19:35:20.818938   70687 pod_ready.go:38] duration metric: took 4m5.525170808s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:35:20.818967   70687 kubeadm.go:591] duration metric: took 4m13.404699267s to restartPrimaryControlPlane
	W0401 19:35:20.819026   70687 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:35:20.819059   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:35:19.004932   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:21.504514   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:18.607067   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:20.609619   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:19.813327   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:19.830168   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:19.830229   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:19.875502   71168 cri.go:89] found id: ""
	I0401 19:35:19.875524   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.875532   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:19.875537   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:19.875591   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:19.916084   71168 cri.go:89] found id: ""
	I0401 19:35:19.916107   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.916117   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:19.916125   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:19.916188   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:19.960673   71168 cri.go:89] found id: ""
	I0401 19:35:19.960699   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.960710   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:19.960717   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:19.960796   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:19.998736   71168 cri.go:89] found id: ""
	I0401 19:35:19.998760   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.998768   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:19.998776   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:19.998840   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:20.043382   71168 cri.go:89] found id: ""
	I0401 19:35:20.043408   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.043418   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:20.043425   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:20.043492   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:20.086132   71168 cri.go:89] found id: ""
	I0401 19:35:20.086158   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.086171   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:20.086178   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:20.086239   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:20.131052   71168 cri.go:89] found id: ""
	I0401 19:35:20.131074   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.131081   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:20.131091   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:20.131151   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:20.174668   71168 cri.go:89] found id: ""
	I0401 19:35:20.174693   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.174699   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:20.174707   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:20.174718   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:20.266503   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:20.266521   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:20.266534   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:20.351555   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:20.351586   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:20.400261   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:20.400289   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:20.455149   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:20.455183   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:23.510048   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:26.005267   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:23.109720   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:25.608633   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:22.972675   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:22.987481   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:22.987555   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:23.032429   71168 cri.go:89] found id: ""
	I0401 19:35:23.032453   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.032461   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:23.032467   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:23.032522   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:23.073286   71168 cri.go:89] found id: ""
	I0401 19:35:23.073313   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.073322   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:23.073330   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:23.073397   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:23.115424   71168 cri.go:89] found id: ""
	I0401 19:35:23.115447   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.115454   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:23.115459   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:23.115506   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:23.164883   71168 cri.go:89] found id: ""
	I0401 19:35:23.164908   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.164918   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:23.164925   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:23.164985   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:23.213617   71168 cri.go:89] found id: ""
	I0401 19:35:23.213656   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.213668   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:23.213675   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:23.213787   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:23.264846   71168 cri.go:89] found id: ""
	I0401 19:35:23.264874   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.264886   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:23.264893   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:23.264958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:23.306467   71168 cri.go:89] found id: ""
	I0401 19:35:23.306495   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.306506   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:23.306514   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:23.306566   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:23.358574   71168 cri.go:89] found id: ""
	I0401 19:35:23.358597   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.358608   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:23.358619   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:23.358634   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:23.437486   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:23.437510   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:23.437525   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:23.555307   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:23.555350   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:23.601776   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:23.601808   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:23.666654   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:23.666688   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:26.184503   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:26.199924   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:26.199997   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:26.252151   71168 cri.go:89] found id: ""
	I0401 19:35:26.252181   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.252192   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:26.252199   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:26.252266   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:26.299094   71168 cri.go:89] found id: ""
	I0401 19:35:26.299126   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.299134   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:26.299139   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:26.299194   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:26.340483   71168 cri.go:89] found id: ""
	I0401 19:35:26.340516   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.340533   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:26.340540   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:26.340599   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:26.387153   71168 cri.go:89] found id: ""
	I0401 19:35:26.387180   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.387188   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:26.387194   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:26.387261   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:26.430746   71168 cri.go:89] found id: ""
	I0401 19:35:26.430773   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.430781   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:26.430787   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:26.430854   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:26.478412   71168 cri.go:89] found id: ""
	I0401 19:35:26.478440   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.478451   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:26.478458   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:26.478523   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:26.521120   71168 cri.go:89] found id: ""
	I0401 19:35:26.521150   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.521161   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:26.521168   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:26.521229   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:26.564678   71168 cri.go:89] found id: ""
	I0401 19:35:26.564721   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.564731   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:26.564742   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:26.564757   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:26.625271   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:26.625308   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:26.640505   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:26.640529   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:26.722753   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:26.722777   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:26.722795   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:26.830507   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:26.830551   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:28.505100   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:31.004387   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:28.107396   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:30.108080   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:29.386655   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:29.401232   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:29.401308   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:29.440479   71168 cri.go:89] found id: ""
	I0401 19:35:29.440511   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.440522   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:29.440530   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:29.440590   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:29.479022   71168 cri.go:89] found id: ""
	I0401 19:35:29.479049   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.479057   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:29.479062   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:29.479119   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:29.518179   71168 cri.go:89] found id: ""
	I0401 19:35:29.518208   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.518216   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:29.518222   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:29.518281   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:29.556654   71168 cri.go:89] found id: ""
	I0401 19:35:29.556682   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.556692   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:29.556712   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:29.556772   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:29.593258   71168 cri.go:89] found id: ""
	I0401 19:35:29.593287   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.593295   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:29.593301   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:29.593349   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:29.637215   71168 cri.go:89] found id: ""
	I0401 19:35:29.637243   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.637253   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:29.637261   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:29.637321   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:29.683052   71168 cri.go:89] found id: ""
	I0401 19:35:29.683090   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.683100   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:29.683108   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:29.683164   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:29.730948   71168 cri.go:89] found id: ""
	I0401 19:35:29.730979   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.730991   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:29.731001   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:29.731014   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:29.781969   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:29.782001   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:29.800700   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:29.800729   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:29.877200   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:29.877225   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:29.877244   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:29.958110   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:29.958144   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:32.501060   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:32.519551   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:32.519619   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:32.579776   71168 cri.go:89] found id: ""
	I0401 19:35:32.579802   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.579813   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:32.579824   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:32.579886   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:32.643271   71168 cri.go:89] found id: ""
	I0401 19:35:32.643300   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.643312   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:32.643322   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:32.643387   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:32.688576   71168 cri.go:89] found id: ""
	I0401 19:35:32.688605   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.688614   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:32.688619   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:32.688678   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:32.729867   71168 cri.go:89] found id: ""
	I0401 19:35:32.729890   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.729898   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:32.729906   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:32.729962   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:32.771485   71168 cri.go:89] found id: ""
	I0401 19:35:32.771508   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.771515   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:32.771521   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:32.771574   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:32.809362   71168 cri.go:89] found id: ""
	I0401 19:35:32.809385   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.809393   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:32.809398   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:32.809458   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:32.844916   71168 cri.go:89] found id: ""
	I0401 19:35:32.844941   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.844950   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:32.844955   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:32.845000   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:32.884638   71168 cri.go:89] found id: ""
	I0401 19:35:32.884660   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.884670   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:32.884680   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:32.884695   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:32.937462   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:32.937489   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:32.952842   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:32.952871   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 19:35:33.005516   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:35.504755   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:32.608051   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:35.106708   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:37.108135   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	W0401 19:35:33.035254   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:33.035278   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:33.035294   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:33.114963   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:33.114994   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:35.662190   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:35.675960   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:35.676016   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:35.717300   71168 cri.go:89] found id: ""
	I0401 19:35:35.717329   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.717340   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:35.717347   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:35.717409   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:35.756687   71168 cri.go:89] found id: ""
	I0401 19:35:35.756713   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.756723   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:35.756730   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:35.756788   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:35.796995   71168 cri.go:89] found id: ""
	I0401 19:35:35.797017   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.797025   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:35.797030   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:35.797083   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:35.840419   71168 cri.go:89] found id: ""
	I0401 19:35:35.840444   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.840455   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:35.840462   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:35.840523   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:35.880059   71168 cri.go:89] found id: ""
	I0401 19:35:35.880093   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.880107   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:35.880113   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:35.880171   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:35.929491   71168 cri.go:89] found id: ""
	I0401 19:35:35.929515   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.929523   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:35.929530   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:35.929584   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:35.968745   71168 cri.go:89] found id: ""
	I0401 19:35:35.968771   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.968778   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:35.968784   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:35.968833   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:36.014294   71168 cri.go:89] found id: ""
	I0401 19:35:36.014318   71168 logs.go:276] 0 containers: []
	W0401 19:35:36.014328   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:36.014338   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:36.014359   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:36.068418   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:36.068450   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:36.086343   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:36.086367   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:36.172027   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:36.172053   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:36.172067   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:36.250046   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:36.250080   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:38.004007   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:40.004138   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:39.607714   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:42.107775   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:38.794261   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:38.809535   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:38.809597   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:38.849139   71168 cri.go:89] found id: ""
	I0401 19:35:38.849167   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.849176   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:38.849181   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:38.849238   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:38.886787   71168 cri.go:89] found id: ""
	I0401 19:35:38.886811   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.886821   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:38.886828   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:38.886891   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:38.923388   71168 cri.go:89] found id: ""
	I0401 19:35:38.923419   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.923431   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:38.923438   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:38.923497   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:38.959583   71168 cri.go:89] found id: ""
	I0401 19:35:38.959608   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.959619   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:38.959626   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:38.959682   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:38.998201   71168 cri.go:89] found id: ""
	I0401 19:35:38.998226   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.998233   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:38.998238   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:38.998294   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:39.039669   71168 cri.go:89] found id: ""
	I0401 19:35:39.039692   71168 logs.go:276] 0 containers: []
	W0401 19:35:39.039703   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:39.039710   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:39.039767   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:39.077331   71168 cri.go:89] found id: ""
	I0401 19:35:39.077358   71168 logs.go:276] 0 containers: []
	W0401 19:35:39.077366   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:39.077371   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:39.077423   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:39.125999   71168 cri.go:89] found id: ""
	I0401 19:35:39.126021   71168 logs.go:276] 0 containers: []
	W0401 19:35:39.126031   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:39.126041   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:39.126054   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:39.183579   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:39.183612   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:39.201200   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:39.201227   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:39.282262   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:39.282280   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:39.282291   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:39.365340   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:39.365370   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:41.914909   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:41.929243   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:41.929317   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:41.975594   71168 cri.go:89] found id: ""
	I0401 19:35:41.975622   71168 logs.go:276] 0 containers: []
	W0401 19:35:41.975632   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:41.975639   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:41.975701   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:42.023558   71168 cri.go:89] found id: ""
	I0401 19:35:42.023585   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.023596   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:42.023602   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:42.023662   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:42.074242   71168 cri.go:89] found id: ""
	I0401 19:35:42.074266   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.074276   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:42.074283   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:42.074340   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:42.123327   71168 cri.go:89] found id: ""
	I0401 19:35:42.123358   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.123370   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:42.123378   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:42.123452   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:42.168931   71168 cri.go:89] found id: ""
	I0401 19:35:42.168961   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.168972   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:42.168980   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:42.169037   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:42.211747   71168 cri.go:89] found id: ""
	I0401 19:35:42.211774   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.211784   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:42.211793   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:42.211849   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:42.251809   71168 cri.go:89] found id: ""
	I0401 19:35:42.251830   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.251841   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:42.251849   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:42.251908   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:42.293266   71168 cri.go:89] found id: ""
	I0401 19:35:42.293361   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.293377   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:42.293388   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:42.293405   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:42.364502   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:42.364553   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:42.381147   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:42.381180   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:42.464219   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:42.464238   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:42.464249   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:42.544564   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:42.544594   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:42.006061   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:44.504700   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:46.505615   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:44.606915   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:46.100004   70962 pod_ready.go:81] duration metric: took 4m0.000146584s for pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace to be "Ready" ...
	E0401 19:35:46.100029   70962 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0401 19:35:46.100044   70962 pod_ready.go:38] duration metric: took 4m10.491414096s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:35:46.100088   70962 kubeadm.go:591] duration metric: took 4m18.223285856s to restartPrimaryControlPlane
	W0401 19:35:46.100141   70962 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:35:46.100164   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:35:45.105777   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:45.119911   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:45.119976   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:45.161871   71168 cri.go:89] found id: ""
	I0401 19:35:45.161890   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.161897   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:45.161902   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:45.161949   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:45.198677   71168 cri.go:89] found id: ""
	I0401 19:35:45.198702   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.198710   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:45.198715   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:45.198776   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:45.236938   71168 cri.go:89] found id: ""
	I0401 19:35:45.236972   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.236983   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:45.236990   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:45.237052   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:45.280621   71168 cri.go:89] found id: ""
	I0401 19:35:45.280650   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.280661   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:45.280668   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:45.280727   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:45.326794   71168 cri.go:89] found id: ""
	I0401 19:35:45.326818   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.326827   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:45.326834   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:45.326892   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:45.369405   71168 cri.go:89] found id: ""
	I0401 19:35:45.369431   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.369441   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:45.369446   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:45.369501   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:45.407609   71168 cri.go:89] found id: ""
	I0401 19:35:45.407635   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.407643   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:45.407648   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:45.407720   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:45.444848   71168 cri.go:89] found id: ""
	I0401 19:35:45.444871   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.444881   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:45.444891   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:45.444911   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:45.531938   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:45.531957   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:45.531972   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:45.617109   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:45.617141   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:45.663559   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:45.663591   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:45.717622   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:45.717670   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:49.004037   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:51.004650   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:48.234834   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:48.250543   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:48.250606   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:48.294396   71168 cri.go:89] found id: ""
	I0401 19:35:48.294423   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.294432   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:48.294439   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:48.294504   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:48.336866   71168 cri.go:89] found id: ""
	I0401 19:35:48.336892   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.336902   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:48.336908   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:48.336965   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:48.376031   71168 cri.go:89] found id: ""
	I0401 19:35:48.376065   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.376076   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:48.376084   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:48.376142   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:48.414975   71168 cri.go:89] found id: ""
	I0401 19:35:48.414995   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.415003   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:48.415008   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:48.415058   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:48.453484   71168 cri.go:89] found id: ""
	I0401 19:35:48.453513   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.453524   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:48.453532   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:48.453593   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:48.487712   71168 cri.go:89] found id: ""
	I0401 19:35:48.487739   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.487749   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:48.487757   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:48.487815   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:48.533331   71168 cri.go:89] found id: ""
	I0401 19:35:48.533364   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.533375   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:48.533383   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:48.533442   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:48.574103   71168 cri.go:89] found id: ""
	I0401 19:35:48.574131   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.574139   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:48.574147   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:48.574160   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:48.632068   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:48.632098   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:48.649342   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:48.649369   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:48.721799   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:48.721822   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:48.721836   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:48.821549   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:48.821584   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:51.364852   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:51.380281   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:51.380362   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:51.423383   71168 cri.go:89] found id: ""
	I0401 19:35:51.423412   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.423422   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:51.423430   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:51.423490   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:51.470331   71168 cri.go:89] found id: ""
	I0401 19:35:51.470359   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.470370   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:51.470378   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:51.470441   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:51.520310   71168 cri.go:89] found id: ""
	I0401 19:35:51.520339   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.520350   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:51.520358   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:51.520414   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:51.568681   71168 cri.go:89] found id: ""
	I0401 19:35:51.568706   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.568716   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:51.568724   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:51.568843   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:51.615146   71168 cri.go:89] found id: ""
	I0401 19:35:51.615174   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.615185   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:51.615193   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:51.615256   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:51.658678   71168 cri.go:89] found id: ""
	I0401 19:35:51.658703   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.658712   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:51.658720   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:51.658791   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:51.700071   71168 cri.go:89] found id: ""
	I0401 19:35:51.700097   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.700108   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:51.700114   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:51.700177   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:51.746772   71168 cri.go:89] found id: ""
	I0401 19:35:51.746798   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.746809   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:51.746826   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:51.746849   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:51.762321   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:51.762350   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:51.843300   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:51.843322   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:51.843337   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:51.919059   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:51.919090   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:51.965899   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:51.965925   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:53.564613   70687 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.745530657s)
	I0401 19:35:53.564696   70687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:35:53.582161   70687 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:35:53.593313   70687 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:35:53.604441   70687 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:35:53.604460   70687 kubeadm.go:156] found existing configuration files:
	
	I0401 19:35:53.604502   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:35:53.615367   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:35:53.615426   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:35:53.626375   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:35:53.636924   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:35:53.636975   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:35:53.647493   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:35:53.657319   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:35:53.657373   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:35:53.667422   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:35:53.677235   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:35:53.677308   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:35:53.688043   70687 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:35:53.894204   70687 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:35:53.504486   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:55.505966   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:54.523484   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:54.542004   71168 kubeadm.go:591] duration metric: took 4m4.024054342s to restartPrimaryControlPlane
	W0401 19:35:54.542067   71168 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:35:54.542088   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:35:55.179619   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:35:55.196424   71168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:35:55.209517   71168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:35:55.222643   71168 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:35:55.222664   71168 kubeadm.go:156] found existing configuration files:
	
	I0401 19:35:55.222714   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:35:55.234756   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:35:55.234813   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:35:55.246725   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:35:55.258440   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:35:55.258499   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:35:55.270106   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:35:55.280724   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:35:55.280776   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:35:55.293630   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:35:55.305588   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:35:55.305660   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:35:55.318308   71168 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:35:55.574896   71168 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:35:58.004494   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:00.505168   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:02.622337   70687 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 19:36:02.622433   70687 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:36:02.622548   70687 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:36:02.622659   70687 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:36:02.622794   70687 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:36:02.622883   70687 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:36:02.624550   70687 out.go:204]   - Generating certificates and keys ...
	I0401 19:36:02.624640   70687 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:36:02.624734   70687 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:36:02.624861   70687 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:36:02.624952   70687 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:36:02.625042   70687 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:36:02.625114   70687 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:36:02.625206   70687 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:36:02.625271   70687 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:36:02.625337   70687 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:36:02.625398   70687 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:36:02.625430   70687 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:36:02.625475   70687 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:36:02.625519   70687 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:36:02.625567   70687 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 19:36:02.625630   70687 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:36:02.625744   70687 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:36:02.625825   70687 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:36:02.625938   70687 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:36:02.626041   70687 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:36:02.627616   70687 out.go:204]   - Booting up control plane ...
	I0401 19:36:02.627744   70687 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:36:02.627812   70687 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:36:02.627878   70687 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:36:02.627976   70687 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:36:02.628046   70687 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:36:02.628098   70687 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:36:02.628273   70687 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:36:02.628354   70687 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.502318 seconds
	I0401 19:36:02.628467   70687 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 19:36:02.628587   70687 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 19:36:02.628642   70687 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 19:36:02.628800   70687 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-882095 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 19:36:02.628849   70687 kubeadm.go:309] [bootstrap-token] Using token: 821cxx.fac41nwqi8u5mwgu
	I0401 19:36:02.630202   70687 out.go:204]   - Configuring RBAC rules ...
	I0401 19:36:02.630328   70687 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 19:36:02.630413   70687 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 19:36:02.630593   70687 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 19:36:02.630794   70687 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 19:36:02.630941   70687 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 19:36:02.631049   70687 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 19:36:02.631205   70687 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 19:36:02.631255   70687 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 19:36:02.631318   70687 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 19:36:02.631326   70687 kubeadm.go:309] 
	I0401 19:36:02.631412   70687 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 19:36:02.631421   70687 kubeadm.go:309] 
	I0401 19:36:02.631527   70687 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 19:36:02.631534   70687 kubeadm.go:309] 
	I0401 19:36:02.631560   70687 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 19:36:02.631649   70687 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 19:36:02.631721   70687 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 19:36:02.631731   70687 kubeadm.go:309] 
	I0401 19:36:02.631810   70687 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 19:36:02.631822   70687 kubeadm.go:309] 
	I0401 19:36:02.631896   70687 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 19:36:02.631910   70687 kubeadm.go:309] 
	I0401 19:36:02.631986   70687 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 19:36:02.632088   70687 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 19:36:02.632181   70687 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 19:36:02.632190   70687 kubeadm.go:309] 
	I0401 19:36:02.632319   70687 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 19:36:02.632427   70687 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 19:36:02.632437   70687 kubeadm.go:309] 
	I0401 19:36:02.632532   70687 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 821cxx.fac41nwqi8u5mwgu \
	I0401 19:36:02.632695   70687 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 \
	I0401 19:36:02.632726   70687 kubeadm.go:309] 	--control-plane 
	I0401 19:36:02.632736   70687 kubeadm.go:309] 
	I0401 19:36:02.632860   70687 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 19:36:02.632875   70687 kubeadm.go:309] 
	I0401 19:36:02.632983   70687 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 821cxx.fac41nwqi8u5mwgu \
	I0401 19:36:02.633118   70687 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 
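The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA's public key. A minimal sketch for recomputing it on the node, assuming minikube's default certificate path /var/lib/minikube/certs/ca.crt (the path is an assumption, not something shown in this log):

    # sketch: recompute the CA public-key hash used by "kubeadm join"
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'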
	I0401 19:36:02.633132   70687 cni.go:84] Creating CNI manager for ""
	I0401 19:36:02.633138   70687 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:36:02.634595   70687 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:36:02.635812   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:36:02.671750   70687 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
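The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration minikube generates for this step. Its exact contents are not shown in the log; a typical bridge + host-local conflist of this kind looks roughly like the sketch below (all field values, including the 10.244.0.0/16 pod subnet, are illustrative assumptions):

    # illustrative only -- not the exact file minikube wrote
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
          "hairpinMode": true, "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF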
	I0401 19:36:02.705562   70687 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 19:36:02.705657   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:02.705671   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-882095 minikube.k8s.io/updated_at=2024_04_01T19_36_02_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=embed-certs-882095 minikube.k8s.io/primary=true
	I0401 19:36:02.762626   70687 ops.go:34] apiserver oom_adj: -16
	I0401 19:36:03.065957   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:03.566513   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:04.066178   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:04.566321   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:05.066798   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:05.566877   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:06.066520   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:03.004878   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:05.505057   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:06.566982   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:07.066931   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:07.566107   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:08.066843   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:08.566186   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:09.066550   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:09.566205   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:10.066287   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:10.566902   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:11.066656   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:08.005380   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:10.504026   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:11.566894   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:12.066235   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:12.566599   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:13.066132   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:13.566865   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:14.066759   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:14.566435   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:15.066907   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:15.566851   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:16.066880   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:16.158125   70687 kubeadm.go:1107] duration metric: took 13.452541301s to wait for elevateKubeSystemPrivileges
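The repeated "kubectl get sa default" calls above (one roughly every 500ms) are minikube polling until the "default" service account exists, the precondition for the minikube-rbac clusterrolebinding created a few lines earlier. A rough shell equivalent of that wait loop (a sketch, not minikube's actual Go implementation):

    # sketch: wait for the default service account before granting RBAC
    until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done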
	W0401 19:36:16.158168   70687 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 19:36:16.158176   70687 kubeadm.go:393] duration metric: took 5m8.800288084s to StartCluster
	I0401 19:36:16.158195   70687 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:36:16.158268   70687 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:36:16.159976   70687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:36:16.160254   70687 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:36:16.162239   70687 out.go:177] * Verifying Kubernetes components...
	I0401 19:36:16.160346   70687 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 19:36:16.162276   70687 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-882095"
	I0401 19:36:16.162311   70687 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-882095"
	W0401 19:36:16.162320   70687 addons.go:243] addon storage-provisioner should already be in state true
	I0401 19:36:16.162339   70687 addons.go:69] Setting default-storageclass=true in profile "embed-certs-882095"
	I0401 19:36:16.162348   70687 addons.go:69] Setting metrics-server=true in profile "embed-certs-882095"
	I0401 19:36:16.162363   70687 addons.go:234] Setting addon metrics-server=true in "embed-certs-882095"
	W0401 19:36:16.162371   70687 addons.go:243] addon metrics-server should already be in state true
	I0401 19:36:16.162377   70687 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-882095"
	I0401 19:36:16.162384   70687 host.go:66] Checking if "embed-certs-882095" exists ...
	I0401 19:36:16.162345   70687 host.go:66] Checking if "embed-certs-882095" exists ...
	I0401 19:36:16.163767   70687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:36:16.160484   70687 config.go:182] Loaded profile config "embed-certs-882095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:36:16.162673   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.162687   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.163886   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.163900   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.162704   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.163963   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.180743   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41647
	I0401 19:36:16.180759   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46707
	I0401 19:36:16.180746   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44419
	I0401 19:36:16.181334   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.181342   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.181369   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.181830   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.181848   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.181973   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.181991   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.182001   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.182007   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.182187   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.182360   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.182393   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.182592   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:36:16.182726   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.182753   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.182829   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.182871   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.186198   70687 addons.go:234] Setting addon default-storageclass=true in "embed-certs-882095"
	W0401 19:36:16.186226   70687 addons.go:243] addon default-storageclass should already be in state true
	I0401 19:36:16.186258   70687 host.go:66] Checking if "embed-certs-882095" exists ...
	I0401 19:36:16.186603   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.186636   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.198494   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I0401 19:36:16.198862   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.199298   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.199315   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.199777   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.200056   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:36:16.201955   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39769
	I0401 19:36:16.202167   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:36:16.202416   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.204728   70687 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:36:16.202891   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.205309   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35751
	I0401 19:36:16.207964   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.208022   70687 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:36:16.208038   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 19:36:16.208057   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:36:16.208345   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.208482   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.208550   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:36:16.209106   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.209121   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.209764   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.210220   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.210258   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.211015   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:36:16.213549   70687 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 19:36:16.212105   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.215606   70687 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 19:36:16.213577   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:36:16.215625   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 19:36:16.215632   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.212867   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:36:16.215647   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:36:16.215791   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:36:16.215913   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:36:16.216028   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:36:16.218302   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.218924   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:36:16.218948   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.219174   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:36:16.219340   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:36:16.219496   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:36:16.219818   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:36:16.227813   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0401 19:36:16.228198   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.228612   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.228635   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.228989   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.229159   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:36:16.230712   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:36:16.230969   70687 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 19:36:16.230987   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 19:36:16.231003   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:36:16.233712   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.234102   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:36:16.234126   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.234273   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:36:16.234435   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:36:16.234593   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:36:16.234753   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:36:16.332504   70687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:36:16.354423   70687 node_ready.go:35] waiting up to 6m0s for node "embed-certs-882095" to be "Ready" ...
	I0401 19:36:16.363527   70687 node_ready.go:49] node "embed-certs-882095" has status "Ready":"True"
	I0401 19:36:16.363555   70687 node_ready.go:38] duration metric: took 9.10669ms for node "embed-certs-882095" to be "Ready" ...
	I0401 19:36:16.363567   70687 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:36:16.369606   70687 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-fx6hf" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:16.435769   70687 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 19:36:16.435793   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 19:36:16.450934   70687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:36:16.468137   70687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 19:36:16.474209   70687 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 19:36:16.474233   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 19:36:13.003028   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:15.004924   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:16.530201   70687 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:36:16.530222   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 19:36:16.607557   70687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:36:17.044156   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.044183   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.044165   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.044244   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.044569   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.044606   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.044617   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.044624   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.044630   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.044639   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.044656   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.044657   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Closing plugin on server side
	I0401 19:36:17.044670   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.044616   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Closing plugin on server side
	I0401 19:36:17.044947   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.044963   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.044964   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.044973   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.045019   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Closing plugin on server side
	I0401 19:36:17.058441   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.058469   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.058718   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.058735   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.276263   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.276283   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.276548   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.276562   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.276571   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.276584   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.276823   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.276837   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.276852   70687 addons.go:470] Verifying addon metrics-server=true in "embed-certs-882095"
	I0401 19:36:17.278536   70687 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0401 19:36:17.279740   70687 addons.go:505] duration metric: took 1.119396s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0401 19:36:18.412746   70687 pod_ready.go:102] pod "coredns-76f75df574-fx6hf" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:19.378799   70687 pod_ready.go:92] pod "coredns-76f75df574-fx6hf" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.378819   70687 pod_ready.go:81] duration metric: took 3.009189982s for pod "coredns-76f75df574-fx6hf" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.378828   70687 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hwbw6" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.384482   70687 pod_ready.go:92] pod "coredns-76f75df574-hwbw6" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.384498   70687 pod_ready.go:81] duration metric: took 5.664781ms for pod "coredns-76f75df574-hwbw6" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.384507   70687 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.390258   70687 pod_ready.go:92] pod "etcd-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.390274   70687 pod_ready.go:81] duration metric: took 5.761319ms for pod "etcd-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.390281   70687 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.395592   70687 pod_ready.go:92] pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.395611   70687 pod_ready.go:81] duration metric: took 5.323181ms for pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.395622   70687 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.400979   70687 pod_ready.go:92] pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.400994   70687 pod_ready.go:81] duration metric: took 5.365282ms for pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.401002   70687 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mbs4m" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.775009   70687 pod_ready.go:92] pod "kube-proxy-mbs4m" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.775036   70687 pod_ready.go:81] duration metric: took 374.027521ms for pod "kube-proxy-mbs4m" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.775047   70687 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:20.174962   70687 pod_ready.go:92] pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:20.174986   70687 pod_ready.go:81] duration metric: took 399.930828ms for pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:20.174994   70687 pod_ready.go:38] duration metric: took 3.811414774s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:36:20.175006   70687 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:36:20.175064   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:36:20.191452   70687 api_server.go:72] duration metric: took 4.031156406s to wait for apiserver process to appear ...
	I0401 19:36:20.191477   70687 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:36:20.191498   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:36:20.196706   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I0401 19:36:20.197772   70687 api_server.go:141] control plane version: v1.29.3
	I0401 19:36:20.197791   70687 api_server.go:131] duration metric: took 6.308074ms to wait for apiserver health ...
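The healthz probe logged just above can be reproduced by hand against the same endpoint and port from the log; a minimal check (skipping TLS verification for brevity is an assumption about how one would query it manually, not something the test does):

    # sketch: manual apiserver health check
    curl -sk https://192.168.39.190:8443/healthz
    # expected response body: ok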
	I0401 19:36:20.197799   70687 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:36:20.380616   70687 system_pods.go:59] 9 kube-system pods found
	I0401 19:36:20.380645   70687 system_pods.go:61] "coredns-76f75df574-fx6hf" [1c07b740-3374-4a54-a786-784b23ec6b83] Running
	I0401 19:36:20.380651   70687 system_pods.go:61] "coredns-76f75df574-hwbw6" [7b12145a-2689-47e9-9724-d80790ed079c] Running
	I0401 19:36:20.380657   70687 system_pods.go:61] "etcd-embed-certs-882095" [3848d128-2fde-42f5-9543-b8d0343ba15b] Running
	I0401 19:36:20.380663   70687 system_pods.go:61] "kube-apiserver-embed-certs-882095" [116c5cd1-2d04-4a85-96e9-bd1e6af4cba4] Running
	I0401 19:36:20.380668   70687 system_pods.go:61] "kube-controller-manager-embed-certs-882095" [8a2282cf-2a87-4cee-a482-355e92048642] Running
	I0401 19:36:20.380672   70687 system_pods.go:61] "kube-proxy-mbs4m" [ffccbae0-7538-4a75-a6ce-afce49865f07] Running
	I0401 19:36:20.380676   70687 system_pods.go:61] "kube-scheduler-embed-certs-882095" [d2554007-1c9c-4238-809a-72aae1fb7de3] Running
	I0401 19:36:20.380684   70687 system_pods.go:61] "metrics-server-57f55c9bc5-dktr6" [c6adfcab-c746-4ad8-abe2-8b300389a4f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:20.380689   70687 system_pods.go:61] "storage-provisioner" [bcff0d1d-a555-4b25-9aa5-7ab1188c21fd] Running
	I0401 19:36:20.380700   70687 system_pods.go:74] duration metric: took 182.895079ms to wait for pod list to return data ...
	I0401 19:36:20.380711   70687 default_sa.go:34] waiting for default service account to be created ...
	I0401 19:36:20.574739   70687 default_sa.go:45] found service account: "default"
	I0401 19:36:20.574771   70687 default_sa.go:55] duration metric: took 194.049249ms for default service account to be created ...
	I0401 19:36:20.574785   70687 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 19:36:20.781600   70687 system_pods.go:86] 9 kube-system pods found
	I0401 19:36:20.781630   70687 system_pods.go:89] "coredns-76f75df574-fx6hf" [1c07b740-3374-4a54-a786-784b23ec6b83] Running
	I0401 19:36:20.781638   70687 system_pods.go:89] "coredns-76f75df574-hwbw6" [7b12145a-2689-47e9-9724-d80790ed079c] Running
	I0401 19:36:20.781658   70687 system_pods.go:89] "etcd-embed-certs-882095" [3848d128-2fde-42f5-9543-b8d0343ba15b] Running
	I0401 19:36:20.781664   70687 system_pods.go:89] "kube-apiserver-embed-certs-882095" [116c5cd1-2d04-4a85-96e9-bd1e6af4cba4] Running
	I0401 19:36:20.781672   70687 system_pods.go:89] "kube-controller-manager-embed-certs-882095" [8a2282cf-2a87-4cee-a482-355e92048642] Running
	I0401 19:36:20.781678   70687 system_pods.go:89] "kube-proxy-mbs4m" [ffccbae0-7538-4a75-a6ce-afce49865f07] Running
	I0401 19:36:20.781686   70687 system_pods.go:89] "kube-scheduler-embed-certs-882095" [d2554007-1c9c-4238-809a-72aae1fb7de3] Running
	I0401 19:36:20.781695   70687 system_pods.go:89] "metrics-server-57f55c9bc5-dktr6" [c6adfcab-c746-4ad8-abe2-8b300389a4f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:20.781705   70687 system_pods.go:89] "storage-provisioner" [bcff0d1d-a555-4b25-9aa5-7ab1188c21fd] Running
	I0401 19:36:20.781722   70687 system_pods.go:126] duration metric: took 206.928658ms to wait for k8s-apps to be running ...
	I0401 19:36:20.781738   70687 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 19:36:20.781789   70687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:36:20.798910   70687 system_svc.go:56] duration metric: took 17.163227ms WaitForService to wait for kubelet
	I0401 19:36:20.798940   70687 kubeadm.go:576] duration metric: took 4.638649198s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:36:20.798962   70687 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:36:20.975011   70687 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:36:20.975034   70687 node_conditions.go:123] node cpu capacity is 2
	I0401 19:36:20.975045   70687 node_conditions.go:105] duration metric: took 176.077669ms to run NodePressure ...
	I0401 19:36:20.975055   70687 start.go:240] waiting for startup goroutines ...
	I0401 19:36:20.975061   70687 start.go:245] waiting for cluster config update ...
	I0401 19:36:20.975070   70687 start.go:254] writing updated cluster config ...
	I0401 19:36:20.975313   70687 ssh_runner.go:195] Run: rm -f paused
	I0401 19:36:21.024261   70687 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0401 19:36:21.026583   70687 out.go:177] * Done! kubectl is now configured to use "embed-certs-882095" cluster and "default" namespace by default
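With the kubeconfig context written, the new cluster can be exercised directly from the host. A minimal smoke test against the context named in the log (these commands are illustrative and not part of the test run):

    kubectl config use-context embed-certs-882095
    kubectl get nodes -o wide
    kubectl -n kube-system get pods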
	I0401 19:36:17.504621   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:20.003964   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:18.623277   70962 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.523094705s)
	I0401 19:36:18.623344   70962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:36:18.640939   70962 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:36:18.653983   70962 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:36:18.666162   70962 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:36:18.666182   70962 kubeadm.go:156] found existing configuration files:
	
	I0401 19:36:18.666233   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0401 19:36:18.679043   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:36:18.679092   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:36:18.690185   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0401 19:36:18.703017   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:36:18.703078   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:36:18.714986   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0401 19:36:18.727138   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:36:18.727188   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:36:18.737886   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0401 19:36:18.748013   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:36:18.748064   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:36:18.758552   70962 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:36:18.988309   70962 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:36:22.004400   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:24.004510   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:26.504264   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:28.053408   70962 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 19:36:28.053478   70962 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:36:28.053544   70962 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:36:28.053677   70962 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:36:28.053837   70962 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:36:28.053953   70962 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:36:28.055426   70962 out.go:204]   - Generating certificates and keys ...
	I0401 19:36:28.055513   70962 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:36:28.055614   70962 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:36:28.055742   70962 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:36:28.055834   70962 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:36:28.055942   70962 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:36:28.056022   70962 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:36:28.056104   70962 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:36:28.056167   70962 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:36:28.056250   70962 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:36:28.056331   70962 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:36:28.056371   70962 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:36:28.056449   70962 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:36:28.056531   70962 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:36:28.056600   70962 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 19:36:28.056677   70962 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:36:28.056772   70962 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:36:28.056870   70962 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:36:28.057006   70962 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:36:28.057100   70962 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:36:28.058575   70962 out.go:204]   - Booting up control plane ...
	I0401 19:36:28.058693   70962 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:36:28.058773   70962 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:36:28.058830   70962 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:36:28.058923   70962 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:36:28.058998   70962 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:36:28.059032   70962 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:36:28.059201   70962 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:36:28.059307   70962 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003148 seconds
	I0401 19:36:28.059432   70962 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 19:36:28.059592   70962 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 19:36:28.059665   70962 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 19:36:28.059892   70962 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-734648 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 19:36:28.059966   70962 kubeadm.go:309] [bootstrap-token] Using token: x76swh.zbuhmc8jrh5hodf9
	I0401 19:36:28.061321   70962 out.go:204]   - Configuring RBAC rules ...
	I0401 19:36:28.061450   70962 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 19:36:28.061577   70962 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 19:36:28.061803   70962 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 19:36:28.061993   70962 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 19:36:28.062153   70962 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 19:36:28.062252   70962 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 19:36:28.062363   70962 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 19:36:28.062422   70962 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 19:36:28.062481   70962 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 19:36:28.062493   70962 kubeadm.go:309] 
	I0401 19:36:28.062556   70962 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 19:36:28.062569   70962 kubeadm.go:309] 
	I0401 19:36:28.062686   70962 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 19:36:28.062697   70962 kubeadm.go:309] 
	I0401 19:36:28.062727   70962 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 19:36:28.062805   70962 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 19:36:28.062872   70962 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 19:36:28.062886   70962 kubeadm.go:309] 
	I0401 19:36:28.062959   70962 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 19:36:28.062969   70962 kubeadm.go:309] 
	I0401 19:36:28.063050   70962 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 19:36:28.063061   70962 kubeadm.go:309] 
	I0401 19:36:28.063103   70962 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 19:36:28.063172   70962 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 19:36:28.063234   70962 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 19:36:28.063240   70962 kubeadm.go:309] 
	I0401 19:36:28.063337   70962 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 19:36:28.063440   70962 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 19:36:28.063453   70962 kubeadm.go:309] 
	I0401 19:36:28.063559   70962 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token x76swh.zbuhmc8jrh5hodf9 \
	I0401 19:36:28.063676   70962 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 \
	I0401 19:36:28.063725   70962 kubeadm.go:309] 	--control-plane 
	I0401 19:36:28.063734   70962 kubeadm.go:309] 
	I0401 19:36:28.063835   70962 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 19:36:28.063844   70962 kubeadm.go:309] 
	I0401 19:36:28.063955   70962 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token x76swh.zbuhmc8jrh5hodf9 \
	I0401 19:36:28.064092   70962 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 
	I0401 19:36:28.064105   70962 cni.go:84] Creating CNI manager for ""
	I0401 19:36:28.064114   70962 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:36:28.065560   70962 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:36:28.505029   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:31.005436   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:28.066823   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:36:28.089595   70962 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0401 19:36:28.150074   70962 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 19:36:28.150195   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:28.150206   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-734648 minikube.k8s.io/updated_at=2024_04_01T19_36_28_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=default-k8s-diff-port-734648 minikube.k8s.io/primary=true
	I0401 19:36:28.494391   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:28.529148   70962 ops.go:34] apiserver oom_adj: -16
	I0401 19:36:28.994780   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:29.494976   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:29.994627   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:30.495192   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:30.995334   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:31.494861   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:31.994576   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:33.505264   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:35.506298   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:32.495185   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:32.995090   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:33.494755   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:33.994758   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:34.494609   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:34.995423   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:35.495219   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:35.994557   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:36.495175   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:36.994857   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:37.494725   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:37.994846   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:38.494687   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:38.994615   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:39.494929   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:39.994514   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:40.494838   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:40.994846   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:41.105036   70962 kubeadm.go:1107] duration metric: took 12.954907711s to wait for elevateKubeSystemPrivileges
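	The burst of "kubectl get sa default" calls above is minikube polling until the "default" ServiceAccount exists, which gates the cluster-admin binding it creates for kube-system. A rough shell equivalent of that wait loop (illustrative only, not minikube's actual implementation) is:

		until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
		    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
		  sleep 0.5   # retry until the default ServiceAccount has been created
		done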
	W0401 19:36:41.105072   70962 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 19:36:41.105080   70962 kubeadm.go:393] duration metric: took 5m13.291890816s to StartCluster
	I0401 19:36:41.105098   70962 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:36:41.105193   70962 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:36:41.107226   70962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:36:41.107451   70962 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.145 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:36:41.109245   70962 out.go:177] * Verifying Kubernetes components...
	I0401 19:36:41.107543   70962 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 19:36:41.107682   70962 config.go:182] Loaded profile config "default-k8s-diff-port-734648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
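	The toEnable map above shows storage-provisioner, default-storageclass and metrics-server switched on for this profile. The same addons can be inspected or toggled after the fact through the minikube CLI (shown only as an example of the interface, not commands run by this test):

		minikube -p default-k8s-diff-port-734648 addons list
		minikube -p default-k8s-diff-port-734648 addons enable metrics-server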
	I0401 19:36:41.110583   70962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:36:41.110596   70962 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-734648"
	I0401 19:36:41.110621   70962 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-734648"
	I0401 19:36:41.110620   70962 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-734648"
	I0401 19:36:41.110652   70962 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-734648"
	I0401 19:36:41.110588   70962 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-734648"
	W0401 19:36:41.110665   70962 addons.go:243] addon metrics-server should already be in state true
	I0401 19:36:41.110685   70962 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-734648"
	W0401 19:36:41.110699   70962 addons.go:243] addon storage-provisioner should already be in state true
	I0401 19:36:41.110700   70962 host.go:66] Checking if "default-k8s-diff-port-734648" exists ...
	I0401 19:36:41.110727   70962 host.go:66] Checking if "default-k8s-diff-port-734648" exists ...
	I0401 19:36:41.111032   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.111039   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.111062   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.111098   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.111126   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.111158   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.129376   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46657
	I0401 19:36:41.130833   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38623
	I0401 19:36:41.131158   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.131258   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.131761   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.131786   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.132119   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.132313   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.132437   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.132477   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:36:41.133129   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36213
	I0401 19:36:41.133449   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.133456   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.133871   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.133894   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.133990   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.134021   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.134159   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.134572   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.134609   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.143808   70962 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-734648"
	W0401 19:36:41.143829   70962 addons.go:243] addon default-storageclass should already be in state true
	I0401 19:36:41.143858   70962 host.go:66] Checking if "default-k8s-diff-port-734648" exists ...
	I0401 19:36:41.144202   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.144241   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.154009   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38703
	I0401 19:36:41.156112   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45449
	I0401 19:36:41.156579   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.157085   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.157112   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.157458   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.157631   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:36:41.157891   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.158593   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.158615   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.158924   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.159123   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:36:41.160683   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:36:41.162801   70962 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 19:36:41.164275   70962 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 19:36:41.164292   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 19:36:41.164310   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:36:41.162762   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:36:41.163321   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39643
	I0401 19:36:41.166161   70962 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:36:38.004666   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:40.005118   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:41.164866   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.167473   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.167806   70962 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:36:41.167833   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 19:36:41.167850   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:36:41.168056   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.168074   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.168145   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:36:41.168163   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.168194   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:36:41.168353   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:36:41.168429   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.168583   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:36:41.168723   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:36:41.169323   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.169374   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.170857   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.171269   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:36:41.171323   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.171412   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:36:41.171576   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:36:41.171723   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:36:41.171860   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:36:41.191280   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42133
	I0401 19:36:41.191576   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.192122   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.192152   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.192511   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.192673   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:36:41.194286   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:36:41.194528   70962 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 19:36:41.194546   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 19:36:41.194564   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:36:41.197639   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.198235   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:36:41.198259   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.198296   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:36:41.198491   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:36:41.198670   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:36:41.198857   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
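	The ssh clients created above all point at the node's libvirt address using the profile's generated key. For manual debugging the same session could be opened by hand, either through minikube or directly with ssh (an optional aside, not part of the test flow):

		minikube -p default-k8s-diff-port-734648 ssh
		# or, using the key path and address from the log:
		ssh -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa docker@192.168.61.145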
	I0401 19:36:41.308472   70962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:36:41.334121   70962 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-734648" to be "Ready" ...
	I0401 19:36:41.343898   70962 node_ready.go:49] node "default-k8s-diff-port-734648" has status "Ready":"True"
	I0401 19:36:41.343943   70962 node_ready.go:38] duration metric: took 9.780821ms for node "default-k8s-diff-port-734648" to be "Ready" ...
	I0401 19:36:41.343952   70962 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:36:41.352294   70962 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.362318   70962 pod_ready.go:92] pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:41.362345   70962 pod_ready.go:81] duration metric: took 10.020335ms for pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.362358   70962 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.367338   70962 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:41.367356   70962 pod_ready.go:81] duration metric: took 4.990987ms for pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.367364   70962 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.372379   70962 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:41.372401   70962 pod_ready.go:81] duration metric: took 5.030239ms for pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.372412   70962 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.377862   70962 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:41.377881   70962 pod_ready.go:81] duration metric: took 5.460968ms for pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.377891   70962 pod_ready.go:38] duration metric: took 33.929349ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:36:41.377915   70962 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:36:41.377965   70962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:36:41.396518   70962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:36:41.407024   70962 api_server.go:72] duration metric: took 299.545156ms to wait for apiserver process to appear ...
	I0401 19:36:41.407049   70962 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:36:41.407068   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:36:41.411429   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 200:
	ok
	I0401 19:36:41.412620   70962 api_server.go:141] control plane version: v1.29.3
	I0401 19:36:41.412640   70962 api_server.go:131] duration metric: took 5.58478ms to wait for apiserver health ...
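	The healthz probe above is a plain HTTPS GET against the apiserver on the non-default port 8444. An equivalent manual check would be (illustrative; -k skips CA verification, which the harness itself handles properly via the kubeconfig):

		curl -k https://192.168.61.145:8444/healthz
		# expected response body: ok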
	I0401 19:36:41.412646   70962 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:36:41.426474   70962 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 19:36:41.426500   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 19:36:41.447003   70962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 19:36:41.470135   70962 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 19:36:41.470153   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 19:36:41.526684   70962 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:36:41.526710   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 19:36:41.540871   70962 system_pods.go:59] 4 kube-system pods found
	I0401 19:36:41.540894   70962 system_pods.go:61] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:41.540900   70962 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:41.540905   70962 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:41.540908   70962 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:41.540914   70962 system_pods.go:74] duration metric: took 128.262683ms to wait for pod list to return data ...
	I0401 19:36:41.540920   70962 default_sa.go:34] waiting for default service account to be created ...
	I0401 19:36:41.625507   70962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:36:41.750232   70962 default_sa.go:45] found service account: "default"
	I0401 19:36:41.750261   70962 default_sa.go:55] duration metric: took 209.334562ms for default service account to be created ...
	I0401 19:36:41.750273   70962 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 19:36:41.968623   70962 system_pods.go:86] 7 kube-system pods found
	I0401 19:36:41.968651   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Pending
	I0401 19:36:41.968657   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Pending
	I0401 19:36:41.968663   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:41.968669   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:41.968675   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:41.968683   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:36:41.968690   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:41.968712   70962 retry.go:31] will retry after 288.42332ms: missing components: kube-dns, kube-proxy
	I0401 19:36:42.231814   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.231848   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.231904   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.231925   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.232160   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Closing plugin on server side
	I0401 19:36:42.232161   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.232179   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.232187   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.232191   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Closing plugin on server side
	I0401 19:36:42.232199   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.232223   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.232235   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.232244   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.232255   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.232431   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.232478   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.232578   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Closing plugin on server side
	I0401 19:36:42.232612   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.232629   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.251515   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.251538   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.251795   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.251809   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.267102   70962 system_pods.go:86] 8 kube-system pods found
	I0401 19:36:42.267135   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:42.267148   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:42.267163   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:42.267181   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:42.267187   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:42.267196   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:36:42.267204   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:42.267222   70962 system_pods.go:89] "storage-provisioner" [8509e661-1b53-4018-b6b0-b6a5e242768d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:36:42.267244   70962 retry.go:31] will retry after 336.906399ms: missing components: kube-dns, kube-proxy
	I0401 19:36:42.632180   70962 system_pods.go:86] 9 kube-system pods found
	I0401 19:36:42.632212   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:42.632223   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:42.632232   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:42.632240   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:42.632247   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:42.632257   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:36:42.632264   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:42.632275   70962 system_pods.go:89] "metrics-server-57f55c9bc5-fj5x5" [e25fa51c-d80e-4ddc-898f-3b9903746537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:42.632289   70962 system_pods.go:89] "storage-provisioner" [8509e661-1b53-4018-b6b0-b6a5e242768d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:36:42.632313   70962 retry.go:31] will retry after 406.571029ms: missing components: kube-dns, kube-proxy
	I0401 19:36:42.739308   70962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.113759645s)
	I0401 19:36:42.739364   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.739383   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.739822   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.739842   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.739859   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Closing plugin on server side
	I0401 19:36:42.739867   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.739890   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.740171   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.740186   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.740198   70962 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-734648"
	I0401 19:36:42.742233   70962 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0401 19:36:42.743265   70962 addons.go:505] duration metric: took 1.635721448s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
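	Once the metrics-server addon is reported as enabled, its rollout can also be confirmed with ordinary kubectl against this profile's context; kubectl top only starts returning data after the first scrape interval. An illustrative check, not part of this test:

		kubectl --context default-k8s-diff-port-734648 -n kube-system rollout status deployment/metrics-server
		kubectl --context default-k8s-diff-port-734648 top nodes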
	I0401 19:36:43.053149   70962 system_pods.go:86] 9 kube-system pods found
	I0401 19:36:43.053183   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:43.053195   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:43.053205   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:43.053215   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:43.053223   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:43.053235   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:36:43.053240   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:43.053249   70962 system_pods.go:89] "metrics-server-57f55c9bc5-fj5x5" [e25fa51c-d80e-4ddc-898f-3b9903746537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:43.053258   70962 system_pods.go:89] "storage-provisioner" [8509e661-1b53-4018-b6b0-b6a5e242768d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:36:43.053275   70962 retry.go:31] will retry after 524.250739ms: missing components: kube-dns, kube-proxy
	I0401 19:36:43.591419   70962 system_pods.go:86] 9 kube-system pods found
	I0401 19:36:43.591451   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Running
	I0401 19:36:43.591463   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:43.591471   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:43.591480   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:43.591487   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:43.591493   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Running
	I0401 19:36:43.591498   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:43.591508   70962 system_pods.go:89] "metrics-server-57f55c9bc5-fj5x5" [e25fa51c-d80e-4ddc-898f-3b9903746537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:43.591517   70962 system_pods.go:89] "storage-provisioner" [8509e661-1b53-4018-b6b0-b6a5e242768d] Running
	I0401 19:36:43.591529   70962 system_pods.go:126] duration metric: took 1.841248999s to wait for k8s-apps to be running ...
	I0401 19:36:43.591561   70962 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 19:36:43.591613   70962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:36:43.611873   70962 system_svc.go:56] duration metric: took 20.296001ms WaitForService to wait for kubelet
	I0401 19:36:43.611907   70962 kubeadm.go:576] duration metric: took 2.504430824s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:36:43.611930   70962 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:36:43.617697   70962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:36:43.617720   70962 node_conditions.go:123] node cpu capacity is 2
	I0401 19:36:43.617732   70962 node_conditions.go:105] duration metric: took 5.796357ms to run NodePressure ...
	I0401 19:36:43.617745   70962 start.go:240] waiting for startup goroutines ...
	I0401 19:36:43.617754   70962 start.go:245] waiting for cluster config update ...
	I0401 19:36:43.617765   70962 start.go:254] writing updated cluster config ...
	I0401 19:36:43.618023   70962 ssh_runner.go:195] Run: rm -f paused
	I0401 19:36:43.666581   70962 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0401 19:36:43.668685   70962 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-734648" cluster and "default" namespace by default
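	At this point the kubeconfig at /home/jenkins/minikube-integration/18233-10493/kubeconfig carries a context named after the profile, so a quick smoke test outside the harness would be (illustrative):

		kubectl config use-context default-k8s-diff-port-734648
		kubectl get nodes -o wide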
	I0401 19:36:42.505149   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:45.003855   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:47.004247   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:49.504898   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:51.505403   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:54.005163   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:56.503395   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:58.503791   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:00.504001   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:02.504193   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:05.003540   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:07.003582   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:09.503975   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:12.005037   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:14.503460   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:16.504630   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:19.004307   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:21.004909   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:23.503286   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:25.503469   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:27.503520   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:30.004792   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:32.503693   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:35.005137   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:37.504848   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:39.504961   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:41.510644   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:44.004680   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:46.005118   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
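	The long run of pod_ready checks above (process 70284, a different profile than the one just started) is the harness repeatedly polling a metrics-server pod that never reports Ready. The same wait can be expressed as a single kubectl command against that profile's context (a sketch of the equivalent check, not what the harness runs):

		kubectl -n kube-system wait --for=condition=Ready \
		  pod/metrics-server-569cc877fc-wlr7k --timeout=10m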
	I0401 19:37:51.561231   71168 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 19:37:51.561356   71168 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0401 19:37:51.563350   71168 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0401 19:37:51.563417   71168 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:37:51.563497   71168 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:37:51.563596   71168 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:37:51.563711   71168 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:37:51.563797   71168 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:37:51.565710   71168 out.go:204]   - Generating certificates and keys ...
	I0401 19:37:51.565809   71168 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:37:51.565908   71168 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:37:51.566051   71168 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:37:51.566136   71168 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:37:51.566230   71168 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:37:51.566325   71168 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:37:51.566402   71168 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:37:51.566464   71168 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:37:51.566580   71168 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:37:51.566688   71168 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:37:51.566727   71168 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:37:51.566774   71168 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:37:51.566822   71168 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:37:51.566917   71168 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:37:51.567001   71168 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:37:51.567068   71168 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:37:51.567210   71168 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:37:51.567314   71168 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:37:51.567371   71168 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:37:51.567473   71168 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:37:48.504708   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:51.005355   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:51.569285   71168 out.go:204]   - Booting up control plane ...
	I0401 19:37:51.569394   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:37:51.569498   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:37:51.569568   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:37:51.569661   71168 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:37:51.569802   71168 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:37:51.569866   71168 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0401 19:37:51.569957   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.570195   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.570287   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.570514   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.570589   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.570769   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.570859   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.571033   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.571134   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.571342   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.571351   71168 kubeadm.go:309] 
	I0401 19:37:51.571394   71168 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0401 19:37:51.571453   71168 kubeadm.go:309] 		timed out waiting for the condition
	I0401 19:37:51.571475   71168 kubeadm.go:309] 
	I0401 19:37:51.571521   71168 kubeadm.go:309] 	This error is likely caused by:
	I0401 19:37:51.571558   71168 kubeadm.go:309] 		- The kubelet is not running
	I0401 19:37:51.571676   71168 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 19:37:51.571687   71168 kubeadm.go:309] 
	I0401 19:37:51.571824   71168 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 19:37:51.571880   71168 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0401 19:37:51.571921   71168 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0401 19:37:51.571931   71168 kubeadm.go:309] 
	I0401 19:37:51.572077   71168 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 19:37:51.572198   71168 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 19:37:51.572209   71168 kubeadm.go:309] 
	I0401 19:37:51.572359   71168 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 19:37:51.572477   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 19:37:51.572576   71168 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0401 19:37:51.572676   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 19:37:51.572731   71168 kubeadm.go:309] 
	W0401 19:37:51.572793   71168 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
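
The repeated [kubelet-check] failures above come from polling the kubelet's local healthz endpoint until it answers. A minimal sketch of that probe in Go (an illustration only, not kubeadm's or minikube's actual code; it assumes it runs on the node itself with the kubelet listening on the default port 10248):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForKubeletHealthz polls http://localhost:10248/healthz until it
    // returns 200 OK or the timeout expires, mirroring the kubelet-check
    // loop seen in the log above.
    func waitForKubeletHealthz(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := http.Get("http://localhost:10248/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		// "connection refused" usually means the kubelet has not started yet.
    		time.Sleep(5 * time.Second)
    	}
    	return fmt.Errorf("kubelet healthz did not become ready within %s", timeout)
    }

    func main() {
    	if err := waitForKubeletHealthz(4 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
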
	
	I0401 19:37:51.572851   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:37:52.428554   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:37:52.445151   71168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:37:52.456989   71168 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:37:52.457010   71168 kubeadm.go:156] found existing configuration files:
	
	I0401 19:37:52.457053   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:37:52.468305   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:37:52.468375   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:37:52.479305   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:37:52.489703   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:37:52.489753   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:37:52.501023   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:37:52.512418   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:37:52.512480   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:37:52.523850   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:37:52.534358   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:37:52.534425   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
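
The grep/rm pairs above implement a simple stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise it is removed so the next kubeadm init regenerates it. A rough local sketch of that check (illustrative only; the real logic lives in minikube's kubeadm.go and runs the grep/rm over SSH):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    const apiServerURL = "https://control-plane.minikube.internal:8443"

    // removeIfStale deletes a kubeconfig that does not reference the expected
    // control-plane endpoint (or that cannot be read at all).
    func removeIfStale(path string) {
    	data, err := os.ReadFile(path)
    	if err != nil || !strings.Contains(string(data), apiServerURL) {
    		if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
    			fmt.Printf("could not remove %s: %v\n", path, rmErr)
    		}
    	}
    }

    func main() {
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		removeIfStale(f)
    	}
    }
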
	I0401 19:37:52.546135   71168 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:37:52.779427   71168 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:37:52.997253   70284 pod_ready.go:81] duration metric: took 4m0.000092266s for pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace to be "Ready" ...
	E0401 19:37:52.997287   70284 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace to be "Ready" (will not retry!)
	I0401 19:37:52.997309   70284 pod_ready.go:38] duration metric: took 4m43.911595731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:37:52.997333   70284 kubeadm.go:591] duration metric: took 5m31.840082505s to restartPrimaryControlPlane
	W0401 19:37:52.997393   70284 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:37:52.997421   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:38:25.458760   70284 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.46129187s)
	I0401 19:38:25.458845   70284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:38:25.476633   70284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:38:25.487615   70284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:38:25.498590   70284 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:38:25.498616   70284 kubeadm.go:156] found existing configuration files:
	
	I0401 19:38:25.498701   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:38:25.509063   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:38:25.509128   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:38:25.519806   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:38:25.530433   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:38:25.530488   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:38:25.540979   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:38:25.550786   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:38:25.550847   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:38:25.561979   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:38:25.571832   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:38:25.571898   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:38:25.582501   70284 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:38:25.646956   70284 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.0
	I0401 19:38:25.647046   70284 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:38:25.825328   70284 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:38:25.825459   70284 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:38:25.825574   70284 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:38:26.066201   70284 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:38:26.069071   70284 out.go:204]   - Generating certificates and keys ...
	I0401 19:38:26.069170   70284 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:38:26.069260   70284 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:38:26.069402   70284 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:38:26.069493   70284 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:38:26.069588   70284 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:38:26.069703   70284 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:38:26.069765   70284 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:38:26.069822   70284 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:38:26.069986   70284 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:38:26.070644   70284 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:38:26.071149   70284 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:38:26.071308   70284 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:38:26.204651   70284 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:38:26.368926   70284 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 19:38:26.586004   70284 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:38:26.710851   70284 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:38:26.858015   70284 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:38:26.858741   70284 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:38:26.863879   70284 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:38:26.865794   70284 out.go:204]   - Booting up control plane ...
	I0401 19:38:26.865898   70284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:38:26.865984   70284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:38:26.866081   70284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:38:26.886171   70284 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:38:26.887118   70284 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:38:26.887177   70284 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:38:27.021053   70284 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 19:38:27.021142   70284 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0401 19:38:28.023462   70284 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002303634s
	I0401 19:38:28.023549   70284 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 19:38:34.026967   70284 kubeadm.go:309] [api-check] The API server is healthy after 6.003391014s
	I0401 19:38:34.044095   70284 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 19:38:34.061716   70284 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 19:38:34.092708   70284 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 19:38:34.093037   70284 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-472858 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 19:38:34.111758   70284 kubeadm.go:309] [bootstrap-token] Using token: 45cmca.rj16278sw3ueq3us
	I0401 19:38:34.113211   70284 out.go:204]   - Configuring RBAC rules ...
	I0401 19:38:34.113333   70284 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 19:38:34.122292   70284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 19:38:34.133114   70284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 19:38:34.138441   70284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 19:38:34.143964   70284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 19:38:34.148675   70284 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 19:38:34.438167   70284 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 19:38:34.885250   70284 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 19:38:35.439990   70284 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 19:38:35.441439   70284 kubeadm.go:309] 
	I0401 19:38:35.441532   70284 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 19:38:35.441545   70284 kubeadm.go:309] 
	I0401 19:38:35.441659   70284 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 19:38:35.441690   70284 kubeadm.go:309] 
	I0401 19:38:35.441752   70284 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 19:38:35.441845   70284 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 19:38:35.441930   70284 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 19:38:35.441938   70284 kubeadm.go:309] 
	I0401 19:38:35.442014   70284 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 19:38:35.442028   70284 kubeadm.go:309] 
	I0401 19:38:35.442067   70284 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 19:38:35.442073   70284 kubeadm.go:309] 
	I0401 19:38:35.442120   70284 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 19:38:35.442186   70284 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 19:38:35.442295   70284 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 19:38:35.442307   70284 kubeadm.go:309] 
	I0401 19:38:35.442426   70284 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 19:38:35.442552   70284 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 19:38:35.442565   70284 kubeadm.go:309] 
	I0401 19:38:35.442643   70284 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 45cmca.rj16278sw3ueq3us \
	I0401 19:38:35.442766   70284 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 \
	I0401 19:38:35.442803   70284 kubeadm.go:309] 	--control-plane 
	I0401 19:38:35.442813   70284 kubeadm.go:309] 
	I0401 19:38:35.442922   70284 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 19:38:35.442936   70284 kubeadm.go:309] 
	I0401 19:38:35.443008   70284 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 45cmca.rj16278sw3ueq3us \
	I0401 19:38:35.443097   70284 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 
	I0401 19:38:35.443436   70284 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:38:35.443530   70284 cni.go:84] Creating CNI manager for ""
	I0401 19:38:35.443546   70284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:38:35.445089   70284 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:38:35.446328   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:38:35.459788   70284 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
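
The 457-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not shown in the log; it is minikube's bridge CNI configuration. Below is a hypothetical minimal bridge conflist of the same general shape, written locally as a stand-in for the scp-from-memory step above (the subnet and plugin options are illustrative assumptions, not the values minikube actually used):

    package main

    import "os"

    // A minimal bridge CNI conflist in the spirit of /etc/cni/net.d/1-k8s.conflist.
    // The subnet and options below are placeholders, not the real 457-byte payload.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }`

    func main() {
    	// Writing to /etc/cni/net.d requires root on the node.
    	_ = os.MkdirAll("/etc/cni/net.d", 0o755)
    	_ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
    }
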
	I0401 19:38:35.486202   70284 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 19:38:35.486300   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:35.486308   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-472858 minikube.k8s.io/updated_at=2024_04_01T19_38_35_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=no-preload-472858 minikube.k8s.io/primary=true
	I0401 19:38:35.700677   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:35.731567   70284 ops.go:34] apiserver oom_adj: -16
	I0401 19:38:36.200955   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:36.701003   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:37.201632   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:37.700719   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:38.201316   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:38.701334   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:39.201609   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:39.701034   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:40.201771   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:40.700786   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:41.201750   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:41.701709   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:42.201682   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:42.700838   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:43.201123   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:43.701587   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:44.200860   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:44.700795   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:45.200850   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:45.701273   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:46.201701   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:46.701450   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:47.201496   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:47.701351   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:47.800239   70284 kubeadm.go:1107] duration metric: took 12.313994383s to wait for elevateKubeSystemPrivileges
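
The block of repeated "kubectl get sa default" runs above is minikube waiting for the default service account to exist before it considers kube-system privileges elevated; it re-runs the command roughly every 500ms until it succeeds. A simplified sketch of that wait loop (illustrative, using plain os/exec rather than minikube's ssh_runner; the kubeconfig path is taken from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultServiceAccount re-runs "kubectl get sa default" until it
    // exits 0 or the timeout expires, roughly matching the polling above.
    func waitForDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
    		if err := cmd.Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
    	if err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
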
	W0401 19:38:47.800287   70284 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 19:38:47.800298   70284 kubeadm.go:393] duration metric: took 6m26.705086714s to StartCluster
	I0401 19:38:47.800320   70284 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:38:47.800410   70284 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:38:47.802818   70284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:38:47.803132   70284 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:38:47.805445   70284 out.go:177] * Verifying Kubernetes components...
	I0401 19:38:47.803273   70284 config.go:182] Loaded profile config "no-preload-472858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0401 19:38:47.803252   70284 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 19:38:47.806734   70284 addons.go:69] Setting storage-provisioner=true in profile "no-preload-472858"
	I0401 19:38:47.806761   70284 addons.go:69] Setting default-storageclass=true in profile "no-preload-472858"
	I0401 19:38:47.806774   70284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:38:47.806777   70284 addons.go:69] Setting metrics-server=true in profile "no-preload-472858"
	I0401 19:38:47.806802   70284 addons.go:234] Setting addon metrics-server=true in "no-preload-472858"
	W0401 19:38:47.806815   70284 addons.go:243] addon metrics-server should already be in state true
	I0401 19:38:47.806850   70284 host.go:66] Checking if "no-preload-472858" exists ...
	I0401 19:38:47.806802   70284 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-472858"
	I0401 19:38:47.806768   70284 addons.go:234] Setting addon storage-provisioner=true in "no-preload-472858"
	W0401 19:38:47.807229   70284 addons.go:243] addon storage-provisioner should already be in state true
	I0401 19:38:47.807257   70284 host.go:66] Checking if "no-preload-472858" exists ...
	I0401 19:38:47.807289   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.807332   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.807340   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.807366   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.807620   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.807690   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.823665   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38305
	I0401 19:38:47.823684   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35487
	I0401 19:38:47.824174   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.824205   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.824709   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.824732   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.824838   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.824867   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.825094   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.825276   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.825700   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.825746   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.825844   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.825866   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.826415   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38845
	I0401 19:38:47.826845   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.827305   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.827330   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.827800   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.828004   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:38:47.831735   70284 addons.go:234] Setting addon default-storageclass=true in "no-preload-472858"
	W0401 19:38:47.831760   70284 addons.go:243] addon default-storageclass should already be in state true
	I0401 19:38:47.831791   70284 host.go:66] Checking if "no-preload-472858" exists ...
	I0401 19:38:47.832170   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.832218   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.842050   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42037
	I0401 19:38:47.842479   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.842963   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.842983   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.843354   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.843513   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:38:47.845360   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:38:47.845430   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33357
	I0401 19:38:47.847622   70284 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:38:47.845959   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.847568   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38785
	I0401 19:38:47.849255   70284 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:38:47.849283   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 19:38:47.849303   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:38:47.849356   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.849524   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.849536   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.850173   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.850228   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.850238   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.850362   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:38:47.851206   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.851773   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.851803   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.852404   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:38:47.854167   70284 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 19:38:47.853141   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.853926   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:38:47.855729   70284 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 19:38:47.855746   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 19:38:47.855763   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:38:47.855728   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:38:47.855809   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.855854   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:38:47.856000   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:38:47.856160   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:38:47.858726   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.859782   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:38:47.859826   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.859948   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:38:47.860138   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:38:47.860310   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:38:47.860593   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:38:47.870182   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34517
	I0401 19:38:47.870616   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.871182   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.871203   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.871561   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.871947   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:38:47.873606   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:38:47.873931   70284 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 19:38:47.873949   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 19:38:47.873967   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:38:47.876826   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.877259   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:38:47.877286   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.877389   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:38:47.877672   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:38:47.877816   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:38:47.877974   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:38:48.053731   70284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:38:48.081160   70284 node_ready.go:35] waiting up to 6m0s for node "no-preload-472858" to be "Ready" ...
	I0401 19:38:48.107976   70284 node_ready.go:49] node "no-preload-472858" has status "Ready":"True"
	I0401 19:38:48.107998   70284 node_ready.go:38] duration metric: took 26.793115ms for node "no-preload-472858" to be "Ready" ...
	I0401 19:38:48.108009   70284 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:38:48.115968   70284 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.158349   70284 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 19:38:48.158383   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 19:38:48.166047   70284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 19:38:48.181902   70284 pod_ready.go:92] pod "etcd-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:38:48.181922   70284 pod_ready.go:81] duration metric: took 65.920299ms for pod "etcd-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.181935   70284 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.199372   70284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:38:48.232110   70284 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 19:38:48.232140   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 19:38:48.251891   70284 pod_ready.go:92] pod "kube-apiserver-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:38:48.251914   70284 pod_ready.go:81] duration metric: took 69.970077ms for pod "kube-apiserver-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.251929   70284 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.309605   70284 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:38:48.309627   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 19:38:48.325907   70284 pod_ready.go:92] pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:38:48.325928   70284 pod_ready.go:81] duration metric: took 73.991711ms for pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.325938   70284 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.373418   70284 pod_ready.go:92] pod "kube-scheduler-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:38:48.373448   70284 pod_ready.go:81] duration metric: took 47.503272ms for pod "kube-scheduler-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.373456   70284 pod_ready.go:38] duration metric: took 265.436317ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:38:48.373479   70284 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:38:48.373543   70284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:38:48.396444   70284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:38:48.564838   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.564860   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.565180   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:48.565197   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.565227   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.565247   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.565258   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.565489   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.565506   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.579332   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.579355   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.579599   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:48.579637   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.579645   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.884887   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.884920   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.884938   70284 api_server.go:72] duration metric: took 1.08176251s to wait for apiserver process to appear ...
	I0401 19:38:48.884958   70284 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:38:48.885018   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:38:48.885232   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.885252   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.885260   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.885269   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.885236   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:48.885519   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.887182   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.885555   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:48.895737   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 200:
	ok
	I0401 19:38:48.899521   70284 api_server.go:141] control plane version: v1.30.0-rc.0
	I0401 19:38:48.899539   70284 api_server.go:131] duration metric: took 14.574989ms to wait for apiserver health ...
	I0401 19:38:48.899547   70284 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:38:48.914064   70284 system_pods.go:59] 8 kube-system pods found
	I0401 19:38:48.914090   70284 system_pods.go:61] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:48.914106   70284 system_pods.go:61] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:48.914112   70284 system_pods.go:61] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:48.914117   70284 system_pods.go:61] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:48.914122   70284 system_pods.go:61] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:48.914126   70284 system_pods.go:61] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:48.914134   70284 system_pods.go:61] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:48.914138   70284 system_pods.go:61] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending
	I0401 19:38:48.914146   70284 system_pods.go:74] duration metric: took 14.594359ms to wait for pod list to return data ...
	I0401 19:38:48.914156   70284 default_sa.go:34] waiting for default service account to be created ...
	I0401 19:38:48.924790   70284 default_sa.go:45] found service account: "default"
	I0401 19:38:48.924814   70284 default_sa.go:55] duration metric: took 10.649887ms for default service account to be created ...
	I0401 19:38:48.924825   70284 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 19:38:48.930993   70284 system_pods.go:86] 8 kube-system pods found
	I0401 19:38:48.931020   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:48.931037   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:48.931047   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:48.931056   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:48.931066   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:48.931074   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:48.931089   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:48.931098   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:48.931117   70284 retry.go:31] will retry after 297.45527ms: missing components: kube-dns, kube-proxy
	I0401 19:38:49.123999   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:49.124019   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:49.124344   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:49.124394   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:49.124406   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:49.124414   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:49.124356   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:49.124627   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:49.124661   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:49.124677   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:49.124690   70284 addons.go:470] Verifying addon metrics-server=true in "no-preload-472858"
	I0401 19:38:49.127415   70284 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0401 19:38:49.129047   70284 addons.go:505] duration metric: took 1.325796036s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0401 19:38:49.236094   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:49.236127   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.236136   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.236145   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:49.236152   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:49.236159   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:49.236168   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:49.236175   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:49.236185   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:49.236198   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:49.236218   70284 retry.go:31] will retry after 287.299528ms: missing components: kube-dns, kube-proxy
	I0401 19:38:49.530606   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:49.530643   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.530654   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.530663   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:49.530670   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:49.530678   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:49.530687   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:49.530697   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:49.530711   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:49.530721   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:49.530744   70284 retry.go:31] will retry after 435.286919ms: missing components: kube-dns, kube-proxy
	I0401 19:38:49.974049   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:49.974090   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.974103   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.974113   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:49.974121   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:49.974128   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:49.974142   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:49.974153   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:49.974168   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:49.974181   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:49.974203   70284 retry.go:31] will retry after 577.959209ms: missing components: kube-dns, kube-proxy
	I0401 19:38:50.558750   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:50.558780   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:50.558787   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:50.558795   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:50.558805   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:50.558812   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:50.558820   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:50.558833   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:50.558840   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:50.558846   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:50.558863   70284 retry.go:31] will retry after 723.380101ms: missing components: kube-dns, kube-proxy
	I0401 19:38:51.291450   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:51.291487   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:51.291498   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Running
	I0401 19:38:51.291508   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:51.291514   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:51.291521   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:51.291527   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Running
	I0401 19:38:51.291532   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:51.291543   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:51.291551   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Running
	I0401 19:38:51.291559   70284 system_pods.go:126] duration metric: took 2.366728733s to wait for k8s-apps to be running ...
	I0401 19:38:51.291576   70284 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 19:38:51.291622   70284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:38:51.310224   70284 system_svc.go:56] duration metric: took 18.63923ms WaitForService to wait for kubelet
	I0401 19:38:51.310250   70284 kubeadm.go:576] duration metric: took 3.50708191s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:38:51.310269   70284 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:38:51.312899   70284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:38:51.312919   70284 node_conditions.go:123] node cpu capacity is 2
	I0401 19:38:51.312930   70284 node_conditions.go:105] duration metric: took 2.654739ms to run NodePressure ...
	I0401 19:38:51.312945   70284 start.go:240] waiting for startup goroutines ...
	I0401 19:38:51.312958   70284 start.go:245] waiting for cluster config update ...
	I0401 19:38:51.312985   70284 start.go:254] writing updated cluster config ...
	I0401 19:38:51.313269   70284 ssh_runner.go:195] Run: rm -f paused
	I0401 19:38:51.365041   70284 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.0 (minor skew: 1)
	I0401 19:38:51.367173   70284 out.go:177] * Done! kubectl is now configured to use "no-preload-472858" cluster and "default" namespace by default
	I0401 19:39:48.856665   71168 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 19:39:48.856779   71168 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0401 19:39:48.858840   71168 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0401 19:39:48.858896   71168 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:39:48.858987   71168 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:39:48.859122   71168 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:39:48.859222   71168 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:39:48.859314   71168 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:39:48.861104   71168 out.go:204]   - Generating certificates and keys ...
	I0401 19:39:48.861202   71168 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:39:48.861277   71168 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:39:48.861381   71168 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:39:48.861492   71168 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:39:48.861596   71168 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:39:48.861699   71168 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:39:48.861791   71168 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:39:48.861897   71168 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:39:48.862009   71168 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:39:48.862118   71168 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:39:48.862176   71168 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:39:48.862260   71168 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:39:48.862338   71168 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:39:48.862420   71168 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:39:48.862480   71168 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:39:48.862527   71168 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:39:48.862618   71168 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:39:48.862693   71168 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:39:48.862734   71168 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:39:48.862804   71168 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:39:48.864199   71168 out.go:204]   - Booting up control plane ...
	I0401 19:39:48.864291   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:39:48.864359   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:39:48.864420   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:39:48.864504   71168 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:39:48.864712   71168 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:39:48.864788   71168 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0401 19:39:48.864871   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865069   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.865153   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865344   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.865453   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865674   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.865755   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865989   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.866095   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.866269   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.866285   71168 kubeadm.go:309] 
	I0401 19:39:48.866343   71168 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0401 19:39:48.866402   71168 kubeadm.go:309] 		timed out waiting for the condition
	I0401 19:39:48.866414   71168 kubeadm.go:309] 
	I0401 19:39:48.866458   71168 kubeadm.go:309] 	This error is likely caused by:
	I0401 19:39:48.866506   71168 kubeadm.go:309] 		- The kubelet is not running
	I0401 19:39:48.866651   71168 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 19:39:48.866665   71168 kubeadm.go:309] 
	I0401 19:39:48.866816   71168 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 19:39:48.866865   71168 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0401 19:39:48.866895   71168 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0401 19:39:48.866901   71168 kubeadm.go:309] 
	I0401 19:39:48.866989   71168 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 19:39:48.867061   71168 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 19:39:48.867070   71168 kubeadm.go:309] 
	I0401 19:39:48.867194   71168 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 19:39:48.867327   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 19:39:48.867417   71168 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0401 19:39:48.867526   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 19:39:48.867555   71168 kubeadm.go:309] 
	I0401 19:39:48.867633   71168 kubeadm.go:393] duration metric: took 7m58.404831893s to StartCluster
	I0401 19:39:48.867702   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:39:48.867764   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:39:48.922329   71168 cri.go:89] found id: ""
	I0401 19:39:48.922359   71168 logs.go:276] 0 containers: []
	W0401 19:39:48.922369   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:39:48.922377   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:39:48.922435   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:39:48.966212   71168 cri.go:89] found id: ""
	I0401 19:39:48.966235   71168 logs.go:276] 0 containers: []
	W0401 19:39:48.966243   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:39:48.966248   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:39:48.966309   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:39:49.015141   71168 cri.go:89] found id: ""
	I0401 19:39:49.015171   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.015182   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:39:49.015189   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:39:49.015249   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:39:49.053042   71168 cri.go:89] found id: ""
	I0401 19:39:49.053067   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.053077   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:39:49.053085   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:39:49.053144   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:39:49.093880   71168 cri.go:89] found id: ""
	I0401 19:39:49.093906   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.093914   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:39:49.093923   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:39:49.093976   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:39:49.129730   71168 cri.go:89] found id: ""
	I0401 19:39:49.129752   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.129760   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:39:49.129766   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:39:49.129818   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:39:49.171075   71168 cri.go:89] found id: ""
	I0401 19:39:49.171107   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.171118   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:39:49.171125   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:39:49.171204   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:39:49.208279   71168 cri.go:89] found id: ""
	I0401 19:39:49.208308   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.208319   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:39:49.208330   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:39:49.208345   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:39:49.294128   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:39:49.294148   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:39:49.294162   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:39:49.400930   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:39:49.400963   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:39:49.443111   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:39:49.443140   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:39:49.501382   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:39:49.501417   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0401 19:39:49.516418   71168 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0401 19:39:49.516461   71168 out.go:239] * 
	W0401 19:39:49.516521   71168 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 19:39:49.516591   71168 out.go:239] * 
	W0401 19:39:49.517377   71168 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 19:39:49.520389   71168 out.go:177] 
	W0401 19:39:49.521593   71168 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 19:39:49.521639   71168 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0401 19:39:49.521686   71168 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0401 19:39:49.523181   71168 out.go:177] 
	
	
	==> CRI-O <==
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.816198776Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712000934816172517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b12a5a8b-854a-41e2-98a5-fd03f4705e24 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.817183716Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0ca63c1-2a3d-42aa-bc80-1a0a3d1dbd15 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.817262697Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0ca63c1-2a3d-42aa-bc80-1a0a3d1dbd15 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.817320235Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b0ca63c1-2a3d-42aa-bc80-1a0a3d1dbd15 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.861997435Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=be28e18b-70e6-4a6d-a8b1-05f40d344e4e name=/runtime.v1.RuntimeService/Version
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.862098489Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=be28e18b-70e6-4a6d-a8b1-05f40d344e4e name=/runtime.v1.RuntimeService/Version
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.863468516Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b1ecf709-e3e7-4b5f-a8c7-b46c04b5eb47 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.864022443Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712000934863990352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1ecf709-e3e7-4b5f-a8c7-b46c04b5eb47 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.864832776Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80c43ea6-e7ec-4d08-baed-9fdaae0fb2e4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.864914488Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80c43ea6-e7ec-4d08-baed-9fdaae0fb2e4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.864954994Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=80c43ea6-e7ec-4d08-baed-9fdaae0fb2e4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.903773504Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1947601-d078-4879-b9f5-a5bf18537c45 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.903904278Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1947601-d078-4879-b9f5-a5bf18537c45 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.905795157Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b7f9f52d-6fa2-4630-badd-81aed9139f88 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.906224247Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712000934906197916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7f9f52d-6fa2-4630-badd-81aed9139f88 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.907149000Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c06a5a5-8b00-402c-a02a-a38cfacb4939 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.907392676Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c06a5a5-8b00-402c-a02a-a38cfacb4939 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.907492694Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1c06a5a5-8b00-402c-a02a-a38cfacb4939 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.946978816Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fbac79aa-a2a8-4a6a-93cd-05023ab1e7a9 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.947088225Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fbac79aa-a2a8-4a6a-93cd-05023ab1e7a9 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.948532848Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=69844552-38c8-4720-81d7-031a554e6922 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.949059273Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712000934949019991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=69844552-38c8-4720-81d7-031a554e6922 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.949819987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26dcadb3-300e-4ad6-9c06-fd160e2ca69c name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.949899242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26dcadb3-300e-4ad6-9c06-fd160e2ca69c name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:48:54 old-k8s-version-163608 crio[649]: time="2024-04-01 19:48:54.949933526Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=26dcadb3-300e-4ad6-9c06-fd160e2ca69c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 1 19:31] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054895] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048499] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.863744] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.552305] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.682250] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.094710] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.062423] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068466] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.188548] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.189826] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.311320] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +7.231097] systemd-fstab-generator[843]: Ignoring "noauto" option for root device
	[  +0.070737] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.974260] systemd-fstab-generator[968]: Ignoring "noauto" option for root device
	[Apr 1 19:32] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 1 19:35] systemd-fstab-generator[4981]: Ignoring "noauto" option for root device
	[Apr 1 19:37] systemd-fstab-generator[5267]: Ignoring "noauto" option for root device
	[  +0.080693] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:48:55 up 17 min,  0 users,  load average: 0.04, 0.05, 0.06
	Linux old-k8s-version-163608 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 01 19:48:50 old-k8s-version-163608 kubelet[6441]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Apr 01 19:48:50 old-k8s-version-163608 kubelet[6441]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Apr 01 19:48:50 old-k8s-version-163608 kubelet[6441]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Apr 01 19:48:50 old-k8s-version-163608 kubelet[6441]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0006306f0)
	Apr 01 19:48:50 old-k8s-version-163608 kubelet[6441]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Apr 01 19:48:50 old-k8s-version-163608 kubelet[6441]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c2fef0, 0x4f0ac20, 0xc000bd4b40, 0x1, 0xc0001000c0)
	Apr 01 19:48:50 old-k8s-version-163608 kubelet[6441]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Apr 01 19:48:50 old-k8s-version-163608 kubelet[6441]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000adac40, 0xc0001000c0)
	Apr 01 19:48:50 old-k8s-version-163608 kubelet[6441]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Apr 01 19:48:50 old-k8s-version-163608 kubelet[6441]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Apr 01 19:48:50 old-k8s-version-163608 kubelet[6441]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Apr 01 19:48:50 old-k8s-version-163608 kubelet[6441]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bd66c0, 0xc000beee20)
	Apr 01 19:48:50 old-k8s-version-163608 kubelet[6441]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Apr 01 19:48:50 old-k8s-version-163608 kubelet[6441]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Apr 01 19:48:50 old-k8s-version-163608 kubelet[6441]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Apr 01 19:48:50 old-k8s-version-163608 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 01 19:48:50 old-k8s-version-163608 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 01 19:48:50 old-k8s-version-163608 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 113.
	Apr 01 19:48:50 old-k8s-version-163608 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 01 19:48:50 old-k8s-version-163608 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 01 19:48:50 old-k8s-version-163608 kubelet[6450]: I0401 19:48:50.837996    6450 server.go:416] Version: v1.20.0
	Apr 01 19:48:50 old-k8s-version-163608 kubelet[6450]: I0401 19:48:50.838322    6450 server.go:837] Client rotation is on, will bootstrap in background
	Apr 01 19:48:50 old-k8s-version-163608 kubelet[6450]: I0401 19:48:50.840478    6450 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 01 19:48:50 old-k8s-version-163608 kubelet[6450]: W0401 19:48:50.841651    6450 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 01 19:48:50 old-k8s-version-163608 kubelet[6450]: I0401 19:48:50.841895    6450 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-163608 -n old-k8s-version-163608
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-163608 -n old-k8s-version-163608: exit status 2 (345.421612ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-163608" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.62s)
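The kubeadm output captured above shows a kubelet that never answered on 127.0.0.1:10248, and the log's own suggestion is to retry the start with the kubelet cgroup driver pinned to systemd. A minimal follow-up sketch, assuming the kvm2/crio configuration used elsewhere in this run (commands are illustrative, not part of the test):
	out/minikube-linux-amd64 -p old-k8s-version-163608 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-163608 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	out/minikube-linux-amd64 start -p old-k8s-version-163608 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd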

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (414.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-882095 -n embed-certs-882095
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-01 19:52:18.03203325 +0000 UTC m=+6367.487584430
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-882095 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-882095 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.729µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-882095 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
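The assertion above waits for a pod labeled k8s-app=kubernetes-dashboard and then checks that the dashboard-metrics-scraper deployment references registry.k8s.io/echoserver:1.4. A rough manual equivalent, using the context and namespace from the test output (a sketch, not part of the test itself):
	kubectl --context embed-certs-882095 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
	kubectl --context embed-certs-882095 get deploy dashboard-metrics-scraper -n kubernetes-dashboard -o jsonpath='{.spec.template.spec.containers[*].image}'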
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-882095 -n embed-certs-882095
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-882095 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-882095 logs -n 25: (1.530582685s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p bridge-408543 sudo                                  | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo find                             | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo crio                             | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p bridge-408543                                       | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	| delete  | -p                                                     | disable-driver-mounts-580301 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | disable-driver-mounts-580301                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:24 UTC |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-472858             | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-472858                                   | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-882095            | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:24 UTC | 01 Apr 24 19:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-882095                                  | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:24 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-734648  | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC | 01 Apr 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC |                     |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-472858                  | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-472858                                   | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC | 01 Apr 24 19:38 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-163608        | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-882095                 | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-882095                                  | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC | 01 Apr 24 19:36 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-734648       | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:36 UTC |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-163608                              | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-163608             | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-163608                              | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| delete  | -p old-k8s-version-163608                              | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:51 UTC | 01 Apr 24 19:51 UTC |
	| start   | -p newest-cni-705837 --memory=2200 --alsologtostderr   | newest-cni-705837            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:51 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| delete  | -p no-preload-472858                                   | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:51 UTC | 01 Apr 24 19:51 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 19:51:22
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 19:51:22.604401   75954 out.go:291] Setting OutFile to fd 1 ...
	I0401 19:51:22.604724   75954 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:51:22.604737   75954 out.go:304] Setting ErrFile to fd 2...
	I0401 19:51:22.604743   75954 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:51:22.605045   75954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 19:51:22.605907   75954 out.go:298] Setting JSON to false
	I0401 19:51:22.607176   75954 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":9235,"bootTime":1711991848,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:51:22.607244   75954 start.go:139] virtualization: kvm guest
	I0401 19:51:22.609753   75954 out.go:177] * [newest-cni-705837] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 19:51:22.611342   75954 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 19:51:22.612692   75954 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:51:22.611406   75954 notify.go:220] Checking for updates...
	I0401 19:51:22.615020   75954 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:51:22.616229   75954 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:51:22.617343   75954 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 19:51:22.618536   75954 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 19:51:22.620250   75954 config.go:182] Loaded profile config "default-k8s-diff-port-734648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:51:22.620371   75954 config.go:182] Loaded profile config "embed-certs-882095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:51:22.620485   75954 config.go:182] Loaded profile config "no-preload-472858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0401 19:51:22.620561   75954 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 19:51:22.659043   75954 out.go:177] * Using the kvm2 driver based on user configuration
	I0401 19:51:22.660461   75954 start.go:297] selected driver: kvm2
	I0401 19:51:22.660480   75954 start.go:901] validating driver "kvm2" against <nil>
	I0401 19:51:22.660509   75954 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 19:51:22.661496   75954 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:51:22.661571   75954 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 19:51:22.677956   75954 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 19:51:22.678000   75954 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0401 19:51:22.678027   75954 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0401 19:51:22.678221   75954 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0401 19:51:22.678285   75954 cni.go:84] Creating CNI manager for ""
	I0401 19:51:22.678302   75954 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:51:22.678312   75954 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0401 19:51:22.678386   75954 start.go:340] cluster config:
	{Name:newest-cni-705837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:newest-cni-705837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:51:22.678508   75954 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:51:22.680755   75954 out.go:177] * Starting "newest-cni-705837" primary control-plane node in "newest-cni-705837" cluster
	I0401 19:51:22.681975   75954 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0401 19:51:22.682019   75954 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0401 19:51:22.682035   75954 cache.go:56] Caching tarball of preloaded images
	I0401 19:51:22.682109   75954 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 19:51:22.682120   75954 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.0 on crio
	I0401 19:51:22.682201   75954 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/config.json ...
	I0401 19:51:22.682224   75954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/config.json: {Name:mk1747df3b632b74720f206e606cf8d8eb1fd247 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:51:22.682355   75954 start.go:360] acquireMachinesLock for newest-cni-705837: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 19:51:22.682402   75954 start.go:364] duration metric: took 34.429µs to acquireMachinesLock for "newest-cni-705837"
	I0401 19:51:22.682423   75954 start.go:93] Provisioning new machine with config: &{Name:newest-cni-705837 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:newest-cni-705837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:51:22.682476   75954 start.go:125] createHost starting for "" (driver="kvm2")
	I0401 19:51:22.684063   75954 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0401 19:51:22.684183   75954 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:51:22.684221   75954 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:51:22.699048   75954 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46443
	I0401 19:51:22.699501   75954 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:51:22.700106   75954 main.go:141] libmachine: Using API Version  1
	I0401 19:51:22.700156   75954 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:51:22.700576   75954 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:51:22.700794   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetMachineName
	I0401 19:51:22.700979   75954 main.go:141] libmachine: (newest-cni-705837) Calling .DriverName
	I0401 19:51:22.701151   75954 start.go:159] libmachine.API.Create for "newest-cni-705837" (driver="kvm2")
	I0401 19:51:22.701184   75954 client.go:168] LocalClient.Create starting
	I0401 19:51:22.701229   75954 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem
	I0401 19:51:22.701275   75954 main.go:141] libmachine: Decoding PEM data...
	I0401 19:51:22.701300   75954 main.go:141] libmachine: Parsing certificate...
	I0401 19:51:22.701366   75954 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem
	I0401 19:51:22.701395   75954 main.go:141] libmachine: Decoding PEM data...
	I0401 19:51:22.701414   75954 main.go:141] libmachine: Parsing certificate...
	I0401 19:51:22.701438   75954 main.go:141] libmachine: Running pre-create checks...
	I0401 19:51:22.701455   75954 main.go:141] libmachine: (newest-cni-705837) Calling .PreCreateCheck
	I0401 19:51:22.701831   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetConfigRaw
	I0401 19:51:22.702295   75954 main.go:141] libmachine: Creating machine...
	I0401 19:51:22.702314   75954 main.go:141] libmachine: (newest-cni-705837) Calling .Create
	I0401 19:51:22.702466   75954 main.go:141] libmachine: (newest-cni-705837) Creating KVM machine...
	I0401 19:51:22.703683   75954 main.go:141] libmachine: (newest-cni-705837) DBG | found existing default KVM network
	I0401 19:51:22.704817   75954 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:51:22.704657   75977 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b9:62:99} reservation:<nil>}
	I0401 19:51:22.705878   75954 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:51:22.705792   75977 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002888f0}
	I0401 19:51:22.705896   75954 main.go:141] libmachine: (newest-cni-705837) DBG | created network xml: 
	I0401 19:51:22.705908   75954 main.go:141] libmachine: (newest-cni-705837) DBG | <network>
	I0401 19:51:22.705917   75954 main.go:141] libmachine: (newest-cni-705837) DBG |   <name>mk-newest-cni-705837</name>
	I0401 19:51:22.705927   75954 main.go:141] libmachine: (newest-cni-705837) DBG |   <dns enable='no'/>
	I0401 19:51:22.705938   75954 main.go:141] libmachine: (newest-cni-705837) DBG |   
	I0401 19:51:22.705949   75954 main.go:141] libmachine: (newest-cni-705837) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0401 19:51:22.705960   75954 main.go:141] libmachine: (newest-cni-705837) DBG |     <dhcp>
	I0401 19:51:22.705970   75954 main.go:141] libmachine: (newest-cni-705837) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0401 19:51:22.705986   75954 main.go:141] libmachine: (newest-cni-705837) DBG |     </dhcp>
	I0401 19:51:22.706005   75954 main.go:141] libmachine: (newest-cni-705837) DBG |   </ip>
	I0401 19:51:22.706016   75954 main.go:141] libmachine: (newest-cni-705837) DBG |   
	I0401 19:51:22.706025   75954 main.go:141] libmachine: (newest-cni-705837) DBG | </network>
	I0401 19:51:22.706034   75954 main.go:141] libmachine: (newest-cni-705837) DBG | 
	I0401 19:51:22.711502   75954 main.go:141] libmachine: (newest-cni-705837) DBG | trying to create private KVM network mk-newest-cni-705837 192.168.50.0/24...
	I0401 19:51:22.784162   75954 main.go:141] libmachine: (newest-cni-705837) DBG | private KVM network mk-newest-cni-705837 192.168.50.0/24 created
	I0401 19:51:22.784221   75954 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:51:22.784119   75977 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:51:22.784245   75954 main.go:141] libmachine: (newest-cni-705837) Setting up store path in /home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837 ...
	I0401 19:51:22.784269   75954 main.go:141] libmachine: (newest-cni-705837) Building disk image from file:///home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso
	I0401 19:51:22.784285   75954 main.go:141] libmachine: (newest-cni-705837) Downloading /home/jenkins/minikube-integration/18233-10493/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0401 19:51:23.014593   75954 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:51:23.014485   75977 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837/id_rsa...
	I0401 19:51:23.243065   75954 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:51:23.242941   75977 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837/newest-cni-705837.rawdisk...
	I0401 19:51:23.243093   75954 main.go:141] libmachine: (newest-cni-705837) DBG | Writing magic tar header
	I0401 19:51:23.243109   75954 main.go:141] libmachine: (newest-cni-705837) DBG | Writing SSH key tar header
	I0401 19:51:23.243123   75954 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:51:23.243054   75977 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837 ...
	I0401 19:51:23.243187   75954 main.go:141] libmachine: (newest-cni-705837) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837
	I0401 19:51:23.243224   75954 main.go:141] libmachine: (newest-cni-705837) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube/machines
	I0401 19:51:23.243258   75954 main.go:141] libmachine: (newest-cni-705837) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837 (perms=drwx------)
	I0401 19:51:23.243269   75954 main.go:141] libmachine: (newest-cni-705837) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:51:23.243283   75954 main.go:141] libmachine: (newest-cni-705837) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube/machines (perms=drwxr-xr-x)
	I0401 19:51:23.243317   75954 main.go:141] libmachine: (newest-cni-705837) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493/.minikube (perms=drwxr-xr-x)
	I0401 19:51:23.243332   75954 main.go:141] libmachine: (newest-cni-705837) Setting executable bit set on /home/jenkins/minikube-integration/18233-10493 (perms=drwxrwxr-x)
	I0401 19:51:23.243343   75954 main.go:141] libmachine: (newest-cni-705837) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18233-10493
	I0401 19:51:23.243355   75954 main.go:141] libmachine: (newest-cni-705837) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0401 19:51:23.243364   75954 main.go:141] libmachine: (newest-cni-705837) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0401 19:51:23.243378   75954 main.go:141] libmachine: (newest-cni-705837) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0401 19:51:23.243389   75954 main.go:141] libmachine: (newest-cni-705837) Creating domain...
	I0401 19:51:23.243405   75954 main.go:141] libmachine: (newest-cni-705837) DBG | Checking permissions on dir: /home/jenkins
	I0401 19:51:23.243416   75954 main.go:141] libmachine: (newest-cni-705837) DBG | Checking permissions on dir: /home
	I0401 19:51:23.243429   75954 main.go:141] libmachine: (newest-cni-705837) DBG | Skipping /home - not owner
	I0401 19:51:23.244469   75954 main.go:141] libmachine: (newest-cni-705837) define libvirt domain using xml: 
	I0401 19:51:23.244497   75954 main.go:141] libmachine: (newest-cni-705837) <domain type='kvm'>
	I0401 19:51:23.244509   75954 main.go:141] libmachine: (newest-cni-705837)   <name>newest-cni-705837</name>
	I0401 19:51:23.244517   75954 main.go:141] libmachine: (newest-cni-705837)   <memory unit='MiB'>2200</memory>
	I0401 19:51:23.244526   75954 main.go:141] libmachine: (newest-cni-705837)   <vcpu>2</vcpu>
	I0401 19:51:23.244534   75954 main.go:141] libmachine: (newest-cni-705837)   <features>
	I0401 19:51:23.244542   75954 main.go:141] libmachine: (newest-cni-705837)     <acpi/>
	I0401 19:51:23.244549   75954 main.go:141] libmachine: (newest-cni-705837)     <apic/>
	I0401 19:51:23.244559   75954 main.go:141] libmachine: (newest-cni-705837)     <pae/>
	I0401 19:51:23.244570   75954 main.go:141] libmachine: (newest-cni-705837)     
	I0401 19:51:23.244593   75954 main.go:141] libmachine: (newest-cni-705837)   </features>
	I0401 19:51:23.244619   75954 main.go:141] libmachine: (newest-cni-705837)   <cpu mode='host-passthrough'>
	I0401 19:51:23.244628   75954 main.go:141] libmachine: (newest-cni-705837)   
	I0401 19:51:23.244655   75954 main.go:141] libmachine: (newest-cni-705837)   </cpu>
	I0401 19:51:23.244668   75954 main.go:141] libmachine: (newest-cni-705837)   <os>
	I0401 19:51:23.244679   75954 main.go:141] libmachine: (newest-cni-705837)     <type>hvm</type>
	I0401 19:51:23.244690   75954 main.go:141] libmachine: (newest-cni-705837)     <boot dev='cdrom'/>
	I0401 19:51:23.244700   75954 main.go:141] libmachine: (newest-cni-705837)     <boot dev='hd'/>
	I0401 19:51:23.244714   75954 main.go:141] libmachine: (newest-cni-705837)     <bootmenu enable='no'/>
	I0401 19:51:23.244728   75954 main.go:141] libmachine: (newest-cni-705837)   </os>
	I0401 19:51:23.244760   75954 main.go:141] libmachine: (newest-cni-705837)   <devices>
	I0401 19:51:23.244770   75954 main.go:141] libmachine: (newest-cni-705837)     <disk type='file' device='cdrom'>
	I0401 19:51:23.244783   75954 main.go:141] libmachine: (newest-cni-705837)       <source file='/home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837/boot2docker.iso'/>
	I0401 19:51:23.244795   75954 main.go:141] libmachine: (newest-cni-705837)       <target dev='hdc' bus='scsi'/>
	I0401 19:51:23.244812   75954 main.go:141] libmachine: (newest-cni-705837)       <readonly/>
	I0401 19:51:23.244821   75954 main.go:141] libmachine: (newest-cni-705837)     </disk>
	I0401 19:51:23.244832   75954 main.go:141] libmachine: (newest-cni-705837)     <disk type='file' device='disk'>
	I0401 19:51:23.244847   75954 main.go:141] libmachine: (newest-cni-705837)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0401 19:51:23.244863   75954 main.go:141] libmachine: (newest-cni-705837)       <source file='/home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837/newest-cni-705837.rawdisk'/>
	I0401 19:51:23.244877   75954 main.go:141] libmachine: (newest-cni-705837)       <target dev='hda' bus='virtio'/>
	I0401 19:51:23.244887   75954 main.go:141] libmachine: (newest-cni-705837)     </disk>
	I0401 19:51:23.244896   75954 main.go:141] libmachine: (newest-cni-705837)     <interface type='network'>
	I0401 19:51:23.244912   75954 main.go:141] libmachine: (newest-cni-705837)       <source network='mk-newest-cni-705837'/>
	I0401 19:51:23.244924   75954 main.go:141] libmachine: (newest-cni-705837)       <model type='virtio'/>
	I0401 19:51:23.244932   75954 main.go:141] libmachine: (newest-cni-705837)     </interface>
	I0401 19:51:23.244940   75954 main.go:141] libmachine: (newest-cni-705837)     <interface type='network'>
	I0401 19:51:23.244951   75954 main.go:141] libmachine: (newest-cni-705837)       <source network='default'/>
	I0401 19:51:23.244964   75954 main.go:141] libmachine: (newest-cni-705837)       <model type='virtio'/>
	I0401 19:51:23.244971   75954 main.go:141] libmachine: (newest-cni-705837)     </interface>
	I0401 19:51:23.245000   75954 main.go:141] libmachine: (newest-cni-705837)     <serial type='pty'>
	I0401 19:51:23.245019   75954 main.go:141] libmachine: (newest-cni-705837)       <target port='0'/>
	I0401 19:51:23.245032   75954 main.go:141] libmachine: (newest-cni-705837)     </serial>
	I0401 19:51:23.245052   75954 main.go:141] libmachine: (newest-cni-705837)     <console type='pty'>
	I0401 19:51:23.245062   75954 main.go:141] libmachine: (newest-cni-705837)       <target type='serial' port='0'/>
	I0401 19:51:23.245077   75954 main.go:141] libmachine: (newest-cni-705837)     </console>
	I0401 19:51:23.245091   75954 main.go:141] libmachine: (newest-cni-705837)     <rng model='virtio'>
	I0401 19:51:23.245099   75954 main.go:141] libmachine: (newest-cni-705837)       <backend model='random'>/dev/random</backend>
	I0401 19:51:23.245110   75954 main.go:141] libmachine: (newest-cni-705837)     </rng>
	I0401 19:51:23.245120   75954 main.go:141] libmachine: (newest-cni-705837)     
	I0401 19:51:23.245129   75954 main.go:141] libmachine: (newest-cni-705837)     
	I0401 19:51:23.245138   75954 main.go:141] libmachine: (newest-cni-705837)   </devices>
	I0401 19:51:23.245191   75954 main.go:141] libmachine: (newest-cni-705837) </domain>
	I0401 19:51:23.245214   75954 main.go:141] libmachine: (newest-cni-705837) 
	I0401 19:51:23.249936   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:28:dc:a6 in network default
	I0401 19:51:23.250542   75954 main.go:141] libmachine: (newest-cni-705837) Ensuring networks are active...
	I0401 19:51:23.250570   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:23.251319   75954 main.go:141] libmachine: (newest-cni-705837) Ensuring network default is active
	I0401 19:51:23.251745   75954 main.go:141] libmachine: (newest-cni-705837) Ensuring network mk-newest-cni-705837 is active
	I0401 19:51:23.252265   75954 main.go:141] libmachine: (newest-cni-705837) Getting domain xml...
	I0401 19:51:23.252900   75954 main.go:141] libmachine: (newest-cni-705837) Creating domain...
	I0401 19:51:24.527604   75954 main.go:141] libmachine: (newest-cni-705837) Waiting to get IP...
	I0401 19:51:24.528512   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:24.529015   75954 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:51:24.529041   75954 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:51:24.528977   75977 retry.go:31] will retry after 210.548985ms: waiting for machine to come up
	I0401 19:51:24.741503   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:24.742001   75954 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:51:24.742029   75954 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:51:24.741945   75977 retry.go:31] will retry after 380.353465ms: waiting for machine to come up
	I0401 19:51:25.123444   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:25.124075   75954 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:51:25.124103   75954 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:51:25.124024   75977 retry.go:31] will retry after 341.244384ms: waiting for machine to come up
	I0401 19:51:25.466604   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:25.467116   75954 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:51:25.467142   75954 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:51:25.467076   75977 retry.go:31] will retry after 454.921575ms: waiting for machine to come up
	I0401 19:51:25.923279   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:25.923750   75954 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:51:25.923870   75954 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:51:25.923785   75977 retry.go:31] will retry after 710.631047ms: waiting for machine to come up
	I0401 19:51:26.635786   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:26.636337   75954 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:51:26.636366   75954 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:51:26.636288   75977 retry.go:31] will retry after 737.0527ms: waiting for machine to come up
	I0401 19:51:27.374893   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:27.375361   75954 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:51:27.375387   75954 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:51:27.375310   75977 retry.go:31] will retry after 1.047881478s: waiting for machine to come up
	I0401 19:51:28.425220   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:28.425765   75954 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:51:28.425788   75954 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:51:28.425712   75977 retry.go:31] will retry after 985.292948ms: waiting for machine to come up
	I0401 19:51:29.938207   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:29.938823   75954 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:51:29.938858   75954 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:51:29.938769   75977 retry.go:31] will retry after 1.834768219s: waiting for machine to come up
	I0401 19:51:31.775554   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:31.776082   75954 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:51:31.776102   75954 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:51:31.776035   75977 retry.go:31] will retry after 1.968508626s: waiting for machine to come up
	I0401 19:51:33.746286   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:33.746859   75954 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:51:33.746891   75954 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:51:33.746795   75977 retry.go:31] will retry after 2.08288953s: waiting for machine to come up
	I0401 19:51:35.831120   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:35.831519   75954 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:51:35.831542   75954 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:51:35.831485   75977 retry.go:31] will retry after 2.463336343s: waiting for machine to come up
	I0401 19:51:38.296713   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:38.297214   75954 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:51:38.297248   75954 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:51:38.297143   75977 retry.go:31] will retry after 3.170454218s: waiting for machine to come up
	I0401 19:51:41.470665   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:41.471091   75954 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:51:41.471123   75954 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:51:41.471043   75977 retry.go:31] will retry after 4.015871809s: waiting for machine to come up
	I0401 19:51:45.489801   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:45.490297   75954 main.go:141] libmachine: (newest-cni-705837) Found IP for machine: 192.168.50.29
	I0401 19:51:45.490325   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has current primary IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:45.490338   75954 main.go:141] libmachine: (newest-cni-705837) Reserving static IP address...
	I0401 19:51:45.490712   75954 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find host DHCP lease matching {name: "newest-cni-705837", mac: "52:54:00:27:87:01", ip: "192.168.50.29"} in network mk-newest-cni-705837
	I0401 19:51:45.566040   75954 main.go:141] libmachine: (newest-cni-705837) DBG | Getting to WaitForSSH function...
	I0401 19:51:45.566077   75954 main.go:141] libmachine: (newest-cni-705837) Reserved static IP address: 192.168.50.29
	I0401 19:51:45.566090   75954 main.go:141] libmachine: (newest-cni-705837) Waiting for SSH to be available...
	I0401 19:51:45.568924   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:45.569272   75954 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:51:39 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:minikube Clientid:01:52:54:00:27:87:01}
	I0401 19:51:45.569298   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:45.569415   75954 main.go:141] libmachine: (newest-cni-705837) DBG | Using SSH client type: external
	I0401 19:51:45.569446   75954 main.go:141] libmachine: (newest-cni-705837) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837/id_rsa (-rw-------)
	I0401 19:51:45.569487   75954 main.go:141] libmachine: (newest-cni-705837) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:51:45.569515   75954 main.go:141] libmachine: (newest-cni-705837) DBG | About to run SSH command:
	I0401 19:51:45.569555   75954 main.go:141] libmachine: (newest-cni-705837) DBG | exit 0
	I0401 19:51:45.694092   75954 main.go:141] libmachine: (newest-cni-705837) DBG | SSH cmd err, output: <nil>: 
	I0401 19:51:45.694404   75954 main.go:141] libmachine: (newest-cni-705837) KVM machine creation complete!
	I0401 19:51:45.694830   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetConfigRaw
	I0401 19:51:45.695403   75954 main.go:141] libmachine: (newest-cni-705837) Calling .DriverName
	I0401 19:51:45.695611   75954 main.go:141] libmachine: (newest-cni-705837) Calling .DriverName
	I0401 19:51:45.695791   75954 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0401 19:51:45.695814   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetState
	I0401 19:51:45.697045   75954 main.go:141] libmachine: Detecting operating system of created instance...
	I0401 19:51:45.697062   75954 main.go:141] libmachine: Waiting for SSH to be available...
	I0401 19:51:45.697069   75954 main.go:141] libmachine: Getting to WaitForSSH function...
	I0401 19:51:45.697076   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:51:45.699569   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:45.700026   75954 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:51:39 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:51:45.700059   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:45.700198   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHPort
	I0401 19:51:45.700363   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:51:45.700534   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:51:45.700679   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHUsername
	I0401 19:51:45.700834   75954 main.go:141] libmachine: Using SSH client type: native
	I0401 19:51:45.701044   75954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.29 22 <nil> <nil>}
	I0401 19:51:45.701058   75954 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0401 19:51:45.805383   75954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:51:45.805403   75954 main.go:141] libmachine: Detecting the provisioner...
	I0401 19:51:45.805410   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:51:45.808190   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:45.808525   75954 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:51:39 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:51:45.808546   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:45.808672   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHPort
	I0401 19:51:45.808876   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:51:45.809015   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:51:45.809154   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHUsername
	I0401 19:51:45.809384   75954 main.go:141] libmachine: Using SSH client type: native
	I0401 19:51:45.809574   75954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.29 22 <nil> <nil>}
	I0401 19:51:45.809592   75954 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0401 19:51:45.914946   75954 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0401 19:51:45.915031   75954 main.go:141] libmachine: found compatible host: buildroot
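	(Editor's note) Provisioner detection above is just "cat /etc/os-release" on the guest followed by matching the ID/NAME fields, which identifies Buildroot here. A tiny, purely illustrative sketch of parsing that output:

    package provision

    import (
        "bufio"
        "strings"
    )

    // osReleaseID pulls the ID= field out of /etc/os-release content; for the
    // output shown above it returns "buildroot".
    func osReleaseID(osRelease string) string {
        sc := bufio.NewScanner(strings.NewReader(osRelease))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if strings.HasPrefix(line, "ID=") {
                return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
            }
        }
        return ""
    }
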
	I0401 19:51:45.915052   75954 main.go:141] libmachine: Provisioning with buildroot...
	I0401 19:51:45.915061   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetMachineName
	I0401 19:51:45.915326   75954 buildroot.go:166] provisioning hostname "newest-cni-705837"
	I0401 19:51:45.915351   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetMachineName
	I0401 19:51:45.915548   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:51:45.918329   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:45.918673   75954 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:51:39 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:51:45.918709   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:45.918867   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHPort
	I0401 19:51:45.919034   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:51:45.919203   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:51:45.919345   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHUsername
	I0401 19:51:45.919480   75954 main.go:141] libmachine: Using SSH client type: native
	I0401 19:51:45.919645   75954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.29 22 <nil> <nil>}
	I0401 19:51:45.919658   75954 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-705837 && echo "newest-cni-705837" | sudo tee /etc/hostname
	I0401 19:51:46.047373   75954 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-705837
	
	I0401 19:51:46.047404   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:51:46.050181   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:46.050671   75954 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:51:39 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:51:46.050703   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:46.050881   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHPort
	I0401 19:51:46.051054   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:51:46.051241   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:51:46.051400   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHUsername
	I0401 19:51:46.051580   75954 main.go:141] libmachine: Using SSH client type: native
	I0401 19:51:46.051744   75954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.29 22 <nil> <nil>}
	I0401 19:51:46.051761   75954 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-705837' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-705837/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-705837' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:51:46.168246   75954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:51:46.168280   75954 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:51:46.168363   75954 buildroot.go:174] setting up certificates
	I0401 19:51:46.168382   75954 provision.go:84] configureAuth start
	I0401 19:51:46.168402   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetMachineName
	I0401 19:51:46.168703   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetIP
	I0401 19:51:46.171367   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:46.171715   75954 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:51:39 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:51:46.171755   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:46.171863   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:51:46.174171   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:46.174501   75954 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:51:39 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:51:46.174532   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:46.174645   75954 provision.go:143] copyHostCerts
	I0401 19:51:46.174699   75954 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:51:46.174709   75954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:51:46.174771   75954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:51:46.174866   75954 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:51:46.174874   75954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:51:46.174904   75954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:51:46.174977   75954 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:51:46.174985   75954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:51:46.175005   75954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:51:46.175074   75954 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.newest-cni-705837 san=[127.0.0.1 192.168.50.29 localhost minikube newest-cni-705837]
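	(Editor's note) The server certificate generated above is signed by the local minikube CA and carries the SANs listed in the log (127.0.0.1, 192.168.50.29, localhost, minikube, newest-cni-705837). A condensed crypto/x509 sketch of that kind of issuance, with the CA loading and file layout simplified; it is not taken from minikube's actual certs code:

    package provision

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // issueServerCert signs a server certificate for the given SANs with caCert/caKey
    // and writes the PEM to out.
    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string,
        dnsNames []string, ips []net.IP, out string) error {

        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{org}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     dnsNames, // e.g. localhost, minikube, newest-cni-705837
            IPAddresses:  ips,      // e.g. 127.0.0.1, 192.168.50.29
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return err
        }
        block := &pem.Block{Type: "CERTIFICATE", Bytes: der}
        return os.WriteFile(out, pem.EncodeToMemory(block), 0644)
    }
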
	I0401 19:51:46.236745   75954 provision.go:177] copyRemoteCerts
	I0401 19:51:46.236798   75954 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:51:46.236819   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:51:46.239818   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:46.240216   75954 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:51:39 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:51:46.240252   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:46.240461   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHPort
	I0401 19:51:46.240637   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:51:46.240779   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHUsername
	I0401 19:51:46.240899   75954 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837/id_rsa Username:docker}
	I0401 19:51:46.325982   75954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:51:46.357253   75954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 19:51:46.385468   75954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 19:51:46.411650   75954 provision.go:87] duration metric: took 243.249372ms to configureAuth
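	(Editor's note) copyRemoteCerts above pushes ca.pem, server.pem and server-key.pem into /etc/docker on the guest through minikube's scp-over-SSH runner (the ssh_runner.go:362 lines). Purely for illustration, the same effect can be sketched with the github.com/pkg/sftp client over an existing *ssh.Client; note that the real runner copies via sudo, which this sketch does not model:

    package provision

    import (
        "io"
        "os"

        "github.com/pkg/sftp"
        "golang.org/x/crypto/ssh"
    )

    // pushFile copies a local file to the guest over SFTP. It assumes the remote
    // path is writable by the SSH user, unlike the sudo-backed runner above.
    func pushFile(client *ssh.Client, local, remote string, mode os.FileMode) error {
        sc, err := sftp.NewClient(client)
        if err != nil {
            return err
        }
        defer sc.Close()

        src, err := os.Open(local)
        if err != nil {
            return err
        }
        defer src.Close()

        dst, err := sc.Create(remote)
        if err != nil {
            return err
        }
        defer dst.Close()

        if _, err := io.Copy(dst, src); err != nil {
            return err
        }
        return sc.Chmod(remote, mode)
    }
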
	I0401 19:51:46.411686   75954 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:51:46.411854   75954 config.go:182] Loaded profile config "newest-cni-705837": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0401 19:51:46.411931   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:51:46.414879   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:46.415245   75954 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:51:39 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:51:46.415278   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:46.415406   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHPort
	I0401 19:51:46.415583   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:51:46.415743   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:51:46.415908   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHUsername
	I0401 19:51:46.416080   75954 main.go:141] libmachine: Using SSH client type: native
	I0401 19:51:46.416292   75954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.29 22 <nil> <nil>}
	I0401 19:51:46.416319   75954 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:51:46.703088   75954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:51:46.703114   75954 main.go:141] libmachine: Checking connection to Docker...
	I0401 19:51:46.703122   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetURL
	I0401 19:51:46.704703   75954 main.go:141] libmachine: (newest-cni-705837) DBG | Using libvirt version 6000000
	I0401 19:51:46.706982   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:46.707392   75954 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:51:39 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:51:46.707423   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:46.707625   75954 main.go:141] libmachine: Docker is up and running!
	I0401 19:51:46.707636   75954 main.go:141] libmachine: Reticulating splines...
	I0401 19:51:46.707643   75954 client.go:171] duration metric: took 24.006447047s to LocalClient.Create
	I0401 19:51:46.707664   75954 start.go:167] duration metric: took 24.006516931s to libmachine.API.Create "newest-cni-705837"
	I0401 19:51:46.707677   75954 start.go:293] postStartSetup for "newest-cni-705837" (driver="kvm2")
	I0401 19:51:46.707691   75954 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:51:46.707710   75954 main.go:141] libmachine: (newest-cni-705837) Calling .DriverName
	I0401 19:51:46.707992   75954 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:51:46.708015   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:51:46.710331   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:46.710667   75954 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:51:39 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:51:46.710694   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:46.710811   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHPort
	I0401 19:51:46.711027   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:51:46.711208   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHUsername
	I0401 19:51:46.711372   75954 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837/id_rsa Username:docker}
	I0401 19:51:46.801370   75954 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:51:46.806785   75954 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:51:46.806808   75954 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:51:46.806881   75954 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:51:46.806978   75954 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:51:46.807086   75954 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:51:46.817894   75954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:51:46.848186   75954 start.go:296] duration metric: took 140.495867ms for postStartSetup
	I0401 19:51:46.848241   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetConfigRaw
	I0401 19:51:46.848846   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetIP
	I0401 19:51:46.851456   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:46.851742   75954 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:51:39 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:51:46.851785   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:46.852063   75954 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/config.json ...
	I0401 19:51:46.852241   75954 start.go:128] duration metric: took 24.169750665s to createHost
	I0401 19:51:46.852264   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:51:46.854295   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:46.854606   75954 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:51:39 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:51:46.854644   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:46.854730   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHPort
	I0401 19:51:46.854903   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:51:46.855067   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:51:46.855202   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHUsername
	I0401 19:51:46.855455   75954 main.go:141] libmachine: Using SSH client type: native
	I0401 19:51:46.855636   75954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.29 22 <nil> <nil>}
	I0401 19:51:46.855649   75954 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 19:51:46.967242   75954 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712001106.942859527
	
	I0401 19:51:46.967268   75954 fix.go:216] guest clock: 1712001106.942859527
	I0401 19:51:46.967279   75954 fix.go:229] Guest: 2024-04-01 19:51:46.942859527 +0000 UTC Remote: 2024-04-01 19:51:46.852253805 +0000 UTC m=+24.299428550 (delta=90.605722ms)
	I0401 19:51:46.967304   75954 fix.go:200] guest clock delta is within tolerance: 90.605722ms
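	(Editor's note) The guest-clock check above runs "date +%s.%N" on the VM and compares the result against the host's wall clock; the 90.6ms delta is inside the tolerance, so no resync is needed. A small sketch of that comparison (the tolerance constant is a placeholder, not minikube's real threshold):

    package provision

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses the "seconds.nanoseconds" output of `date +%s.%N`
    // and returns how far the guest clock is from local time.
    func clockDelta(dateOutput string) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, err = strconv.ParseInt(parts[1], 10, 64)
            if err != nil {
                return 0, err
            }
        }
        d := time.Since(time.Unix(sec, nsec))
        if d < 0 {
            d = -d
        }
        return d, nil
    }

    // withinTolerance reports whether the measured delta is acceptable.
    func withinTolerance(d time.Duration) bool {
        const tolerance = 1 * time.Second // placeholder threshold
        if d > tolerance {
            fmt.Printf("guest clock off by %s, would resync\n", d)
            return false
        }
        return true
    }
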
	I0401 19:51:46.967310   75954 start.go:83] releasing machines lock for "newest-cni-705837", held for 24.284899233s
	I0401 19:51:46.967342   75954 main.go:141] libmachine: (newest-cni-705837) Calling .DriverName
	I0401 19:51:46.967614   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetIP
	I0401 19:51:46.970265   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:46.970677   75954 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:51:39 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:51:46.970703   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:46.970874   75954 main.go:141] libmachine: (newest-cni-705837) Calling .DriverName
	I0401 19:51:46.971413   75954 main.go:141] libmachine: (newest-cni-705837) Calling .DriverName
	I0401 19:51:46.971593   75954 main.go:141] libmachine: (newest-cni-705837) Calling .DriverName
	I0401 19:51:46.971690   75954 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:51:46.971736   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:51:46.971848   75954 ssh_runner.go:195] Run: cat /version.json
	I0401 19:51:46.971888   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:51:46.974594   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:46.974775   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:46.975032   75954 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:51:39 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:51:46.975061   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:46.975091   75954 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:51:39 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:51:46.975109   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:46.975239   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHPort
	I0401 19:51:46.975360   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHPort
	I0401 19:51:46.975428   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:51:46.975546   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:51:46.975565   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHUsername
	I0401 19:51:46.975709   75954 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837/id_rsa Username:docker}
	I0401 19:51:46.975750   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHUsername
	I0401 19:51:46.975904   75954 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837/id_rsa Username:docker}
	I0401 19:51:47.051979   75954 ssh_runner.go:195] Run: systemctl --version
	I0401 19:51:47.074817   75954 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:51:47.247266   75954 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:51:47.255511   75954 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:51:47.255600   75954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:51:47.274521   75954 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
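	(Editor's note) The find/mv command above renames any bridge or podman CNI configs in /etc/cni/net.d to *.mk_disabled so the runtime will not load them (here 87-podman-bridge.conflist). A rough local equivalent in Go, assuming direct filesystem access rather than the SSH runner:

    package provision

    import (
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNIs renames bridge/podman CNI configs so CRI-O ignores them,
    // mirroring the find/mv command in the log above.
    func disableBridgeCNIs(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                from := filepath.Join(dir, name)
                if err := os.Rename(from, from+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, from)
            }
        }
        return disabled, nil
    }
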
	I0401 19:51:47.274545   75954 start.go:494] detecting cgroup driver to use...
	I0401 19:51:47.274598   75954 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:51:47.294187   75954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:51:47.310888   75954 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:51:47.310954   75954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:51:47.326650   75954 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:51:47.342200   75954 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:51:47.463374   75954 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:51:47.612029   75954 docker.go:233] disabling docker service ...
	I0401 19:51:47.612118   75954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:51:47.628228   75954 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:51:47.644765   75954 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:51:47.806534   75954 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:51:47.938869   75954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:51:47.954690   75954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:51:47.976497   75954 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:51:47.976563   75954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:51:47.989327   75954 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:51:47.989390   75954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:51:48.001308   75954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:51:48.013899   75954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:51:48.026248   75954 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:51:48.041758   75954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:51:48.057262   75954 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:51:48.079592   75954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:51:48.092288   75954 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:51:48.104614   75954 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:51:48.104759   75954 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:51:48.122325   75954 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:51:48.134399   75954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:51:48.257749   75954 ssh_runner.go:195] Run: sudo systemctl restart crio
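	(Editor's note) The sed edits above pin pause_image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs with conmon_cgroup = "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A rough Go equivalent of the first two substitutions; the regex patterns mirror the sed expressions, and it assumes a single cgroup_manager line with no pre-existing conmon_cgroup entry:

    package provision

    import (
        "os"
        "regexp"
    )

    var (
        pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    )

    // patchCrioConf applies the same substitutions as the `sudo sed -i` calls above.
    func patchCrioConf(path, pauseImage, cgroupManager string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        out := pauseRe.ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
        out = cgroupRe.ReplaceAll(out,
            []byte(`cgroup_manager = "`+cgroupManager+`"`+"\n"+`conmon_cgroup = "pod"`))
        return os.WriteFile(path, out, 0644)
    }
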
	I0401 19:51:48.407389   75954 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:51:48.407463   75954 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:51:48.413161   75954 start.go:562] Will wait 60s for crictl version
	I0401 19:51:48.413224   75954 ssh_runner.go:195] Run: which crictl
	I0401 19:51:48.417976   75954 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:51:48.466938   75954 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:51:48.467037   75954 ssh_runner.go:195] Run: crio --version
	I0401 19:51:48.501012   75954 ssh_runner.go:195] Run: crio --version
	I0401 19:51:48.534745   75954 out.go:177] * Preparing Kubernetes v1.30.0-rc.0 on CRI-O 1.29.1 ...
	I0401 19:51:48.535945   75954 main.go:141] libmachine: (newest-cni-705837) Calling .GetIP
	I0401 19:51:48.538563   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:48.538895   75954 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:51:39 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:51:48.538914   75954 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:51:48.539134   75954 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0401 19:51:48.544449   75954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
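	(Editor's note) The bash one-liner above (and the identical pattern used later for control-plane.minikube.internal) keeps /etc/hosts idempotent: any existing line for the name is filtered out, the fresh ip-to-name mapping is appended, and the result is copied back in one step. A rough local sketch of the same idea, assuming direct file access instead of the sudo pipeline:

    package provision

    import (
        "os"
        "strings"
    )

    // ensureHostsEntry drops any existing line for name and appends "ip\tname",
    // mirroring the grep -v / echo / cp pipeline in the log above.
    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) || strings.HasSuffix(line, " "+name) {
                continue // stale mapping for this name
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }
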
	I0401 19:51:48.563115   75954 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0401 19:51:48.564684   75954 kubeadm.go:877] updating cluster {Name:newest-cni-705837 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0-rc.0 ClusterName:newest-cni-705837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.29 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host M
ount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:51:48.564801   75954 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0401 19:51:48.564862   75954 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:51:48.608903   75954 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.0". assuming images are not preloaded.
	I0401 19:51:48.608974   75954 ssh_runner.go:195] Run: which lz4
	I0401 19:51:48.614025   75954 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 19:51:48.619375   75954 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:51:48.619399   75954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394409945 bytes)
	I0401 19:51:50.402639   75954 crio.go:462] duration metric: took 1.788629446s to copy over tarball
	I0401 19:51:50.402712   75954 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 19:51:53.006266   75954 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.603520945s)
	I0401 19:51:53.006308   75954 crio.go:469] duration metric: took 2.603642438s to extract the tarball
	I0401 19:51:53.006318   75954 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 19:51:53.046008   75954 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:51:53.095459   75954 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:51:53.095481   75954 cache_images.go:84] Images are preloaded, skipping loading
	I0401 19:51:53.095488   75954 kubeadm.go:928] updating node { 192.168.50.29 8443 v1.30.0-rc.0 crio true true} ...
	I0401 19:51:53.095606   75954 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-705837 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.0 ClusterName:newest-cni-705837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:51:53.095711   75954 ssh_runner.go:195] Run: crio config
	I0401 19:51:53.155792   75954 cni.go:84] Creating CNI manager for ""
	I0401 19:51:53.155826   75954 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:51:53.155840   75954 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0401 19:51:53.155865   75954 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.29 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-705837 NodeName:newest-cni-705837 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs
:map[] NodeIP:192.168.50.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:51:53.156001   75954 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-705837"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:51:53.156056   75954 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.0
	I0401 19:51:53.169029   75954 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:51:53.169103   75954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:51:53.182124   75954 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (358 bytes)
	I0401 19:51:53.203137   75954 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0401 19:51:53.222890   75954 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0401 19:51:53.242659   75954 ssh_runner.go:195] Run: grep 192.168.50.29	control-plane.minikube.internal$ /etc/hosts
	I0401 19:51:53.247790   75954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:51:53.262091   75954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:51:53.384077   75954 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:51:53.404096   75954 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837 for IP: 192.168.50.29
	I0401 19:51:53.404127   75954 certs.go:194] generating shared ca certs ...
	I0401 19:51:53.404150   75954 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:51:53.404315   75954 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:51:53.404377   75954 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:51:53.404386   75954 certs.go:256] generating profile certs ...
	I0401 19:51:53.404457   75954 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/client.key
	I0401 19:51:53.404475   75954 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/client.crt with IP's: []
	I0401 19:51:53.484992   75954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/client.crt ...
	I0401 19:51:53.485021   75954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/client.crt: {Name:mk6fb110049c99dd04cb3bbdc7fc8e99b63264cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:51:53.485206   75954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/client.key ...
	I0401 19:51:53.485220   75954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/client.key: {Name:mk37ab5668adad3fe6aff2b4d9201e1fd7719cdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:51:53.485320   75954 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/apiserver.key.552e185d
	I0401 19:51:53.485337   75954 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/apiserver.crt.552e185d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.29]
	I0401 19:51:53.632008   75954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/apiserver.crt.552e185d ...
	I0401 19:51:53.632037   75954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/apiserver.crt.552e185d: {Name:mk7035291ed7d7829d734724fbdc0e1c863a6abf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:51:53.632196   75954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/apiserver.key.552e185d ...
	I0401 19:51:53.632209   75954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/apiserver.key.552e185d: {Name:mka0ba897a54170d511fd32f99a6a6d93d707b8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:51:53.632279   75954 certs.go:381] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/apiserver.crt.552e185d -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/apiserver.crt
	I0401 19:51:53.632350   75954 certs.go:385] copying /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/apiserver.key.552e185d -> /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/apiserver.key
	I0401 19:51:53.632402   75954 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/proxy-client.key
	I0401 19:51:53.632417   75954 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/proxy-client.crt with IP's: []
	I0401 19:51:53.850947   75954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/proxy-client.crt ...
	I0401 19:51:53.850979   75954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/proxy-client.crt: {Name:mk88761b733b3664ff62e134e1156311e4b79730 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:51:53.851161   75954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/proxy-client.key ...
	I0401 19:51:53.851197   75954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/proxy-client.key: {Name:mk97fdd19ff0f0af2db08df50f08c0350e61b731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:51:53.851408   75954 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:51:53.851454   75954 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:51:53.851464   75954 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:51:53.851489   75954 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:51:53.851526   75954 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:51:53.851553   75954 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:51:53.851590   75954 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:51:53.852134   75954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:51:53.882995   75954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:51:53.913335   75954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:51:53.941372   75954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:51:53.969801   75954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 19:51:53.998443   75954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:51:54.027635   75954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:51:54.057140   75954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:51:54.086653   75954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:51:54.117778   75954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:51:54.147158   75954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:51:54.180159   75954 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:51:54.218146   75954 ssh_runner.go:195] Run: openssl version
	I0401 19:51:54.227147   75954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:51:54.246512   75954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:51:54.254353   75954 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:51:54.254417   75954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:51:54.261724   75954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:51:54.276120   75954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:51:54.290088   75954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:51:54.295702   75954 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:51:54.295757   75954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:51:54.303085   75954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:51:54.317176   75954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:51:54.331423   75954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:51:54.337425   75954 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:51:54.337483   75954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:51:54.344110   75954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
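	(Editor's note) Each CA above is installed the way OpenSSL expects: the PEM is placed under /usr/share/ca-certificates and a symlink named <subject-hash>.0 (b5213941.0, 51391683.0, 3ec20f2e.0 here) is created in /etc/ssl/certs, with the hash coming from "openssl x509 -hash -noout -in <cert>". A small sketch of that pattern using os/exec; the real commands run over SSH with sudo:

    package provision

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCACert symlinks certPath into certsDir under its OpenSSL subject hash,
    // mirroring the `openssl x509 -hash` + `ln -fs` pair in the log above.
    func linkCACert(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // -f behaviour: replace an existing link if present
        return os.Symlink(certPath, link)
    }
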
	I0401 19:51:54.358258   75954 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:51:54.362940   75954 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 19:51:54.362989   75954 kubeadm.go:391] StartCluster: {Name:newest-cni-705837 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0-rc.0 ClusterName:newest-cni-705837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.29 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:51:54.363080   75954 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:51:54.363147   75954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:51:54.408832   75954 cri.go:89] found id: ""
	I0401 19:51:54.408909   75954 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 19:51:54.421869   75954 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:51:54.442813   75954 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:51:54.454523   75954 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:51:54.454541   75954 kubeadm.go:156] found existing configuration files:
	
	I0401 19:51:54.454591   75954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:51:54.465880   75954 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:51:54.465935   75954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:51:54.477707   75954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:51:54.489033   75954 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:51:54.489101   75954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:51:54.500959   75954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:51:54.512544   75954 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:51:54.512601   75954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:51:54.525179   75954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:51:54.537918   75954 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:51:54.537978   75954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
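The kubeadm.go:154-162 lines above are the pre-init cleanup: minikube greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and, when the grep fails (file missing or pointing at a different endpoint), removes the file before running kubeadm init. A compact sketch of that loop (runRemote stands in for minikube's SSH runner and simply shells out locally here):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runRemote is a stand-in for running a command on the node over SSH;
    // here it uses a local shell so the example is self-contained.
    func runRemote(command string) error {
        return exec.Command("sh", "-c", command).Run()
    }

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, conf := range confs {
            // If the expected endpoint is not found (grep exits non-zero, e.g.
            // because the file does not exist), delete the stale file so that
            // kubeadm init can write a fresh one.
            if err := runRemote(fmt.Sprintf("sudo grep %s %s", endpoint, conf)); err != nil {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
                _ = runRemote("sudo rm -f " + conf)
            }
        }
    }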
	I0401 19:51:54.550833   75954 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:51:54.681261   75954 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.0
	I0401 19:51:54.681317   75954 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:51:54.836232   75954 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:51:54.836365   75954 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:51:54.836529   75954 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 19:51:55.078112   75954 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:51:55.111196   75954 out.go:204]   - Generating certificates and keys ...
	I0401 19:51:55.111332   75954 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:51:55.111509   75954 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:51:55.221552   75954 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 19:51:55.302897   75954 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0401 19:51:55.409316   75954 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0401 19:51:55.610218   75954 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0401 19:51:55.764903   75954 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0401 19:51:55.765214   75954 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-705837] and IPs [192.168.50.29 127.0.0.1 ::1]
	I0401 19:51:55.957408   75954 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0401 19:51:55.957753   75954 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-705837] and IPs [192.168.50.29 127.0.0.1 ::1]
	I0401 19:51:56.359903   75954 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 19:51:56.548967   75954 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 19:51:56.770943   75954 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0401 19:51:56.771208   75954 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:51:56.847776   75954 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:51:57.047684   75954 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 19:51:57.154221   75954 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:51:57.251555   75954 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:51:57.389715   75954 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:51:57.390346   75954 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:51:57.393621   75954 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:51:57.395488   75954 out.go:204]   - Booting up control plane ...
	I0401 19:51:57.395596   75954 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:51:57.395716   75954 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:51:57.395794   75954 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:51:57.412421   75954 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:51:57.413422   75954 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:51:57.413460   75954 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:51:57.578489   75954 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 19:51:57.578637   75954 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0401 19:51:58.079930   75954 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.815623ms
	I0401 19:51:58.080042   75954 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 19:52:04.082533   75954 kubeadm.go:309] [api-check] The API server is healthy after 6.00287138s
	I0401 19:52:04.097051   75954 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 19:52:04.116232   75954 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 19:52:04.150781   75954 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 19:52:04.151051   75954 kubeadm.go:309] [mark-control-plane] Marking the node newest-cni-705837 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 19:52:04.167134   75954 kubeadm.go:309] [bootstrap-token] Using token: 4s0efs.lszfy9caw14u4kqd
	I0401 19:52:04.168499   75954 out.go:204]   - Configuring RBAC rules ...
	I0401 19:52:04.168618   75954 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 19:52:04.179968   75954 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 19:52:04.189413   75954 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 19:52:04.196474   75954 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0401 19:52:04.201065   75954 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 19:52:04.204298   75954 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 19:52:04.487996   75954 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 19:52:04.932992   75954 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 19:52:05.488080   75954 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 19:52:05.489109   75954 kubeadm.go:309] 
	I0401 19:52:05.489211   75954 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 19:52:05.489226   75954 kubeadm.go:309] 
	I0401 19:52:05.489326   75954 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 19:52:05.489335   75954 kubeadm.go:309] 
	I0401 19:52:05.489356   75954 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 19:52:05.489425   75954 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 19:52:05.489489   75954 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 19:52:05.489499   75954 kubeadm.go:309] 
	I0401 19:52:05.489572   75954 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 19:52:05.489580   75954 kubeadm.go:309] 
	I0401 19:52:05.489671   75954 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 19:52:05.489686   75954 kubeadm.go:309] 
	I0401 19:52:05.489772   75954 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 19:52:05.489886   75954 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 19:52:05.489977   75954 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 19:52:05.489991   75954 kubeadm.go:309] 
	I0401 19:52:05.490127   75954 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 19:52:05.490232   75954 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 19:52:05.490243   75954 kubeadm.go:309] 
	I0401 19:52:05.490363   75954 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 4s0efs.lszfy9caw14u4kqd \
	I0401 19:52:05.490506   75954 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 \
	I0401 19:52:05.490538   75954 kubeadm.go:309] 	--control-plane 
	I0401 19:52:05.490548   75954 kubeadm.go:309] 
	I0401 19:52:05.490673   75954 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 19:52:05.490682   75954 kubeadm.go:309] 
	I0401 19:52:05.490782   75954 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 4s0efs.lszfy9caw14u4kqd \
	I0401 19:52:05.490943   75954 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 
	I0401 19:52:05.491102   75954 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:52:05.491144   75954 cni.go:84] Creating CNI manager for ""
	I0401 19:52:05.491153   75954 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:52:05.492935   75954 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:52:05.494303   75954 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:52:05.509373   75954 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
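cni.go picked the bridge CNI because the kvm2 driver is paired with the crio runtime (see the "recommending bridge" line above), and the scp step writes the generated /etc/cni/net.d/1-k8s.conflist onto the node. The exact 457-byte payload is not reproduced in the log; the snippet below writes a representative bridge + portmap conflist of the same general shape (the JSON contents and the pod subnet are illustrative assumptions, not the file minikube generated):

    package main

    import (
        "log"
        "os"
    )

    // Representative bridge CNI config; the real 1-k8s.conflist minikube
    // copied over is generated from a template and may differ in detail.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }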
	I0401 19:52:05.538767   75954 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 19:52:05.538864   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:52:05.538950   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-705837 minikube.k8s.io/updated_at=2024_04_01T19_52_05_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=newest-cni-705837 minikube.k8s.io/primary=true
	I0401 19:52:05.557774   75954 ops.go:34] apiserver oom_adj: -16
	I0401 19:52:05.724153   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:52:06.224772   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:52:06.725037   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:52:07.224498   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:52:07.724870   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:52:08.224238   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:52:08.724758   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:52:09.225157   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:52:09.724320   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:52:10.224535   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:52:10.724565   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:52:11.224712   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:52:11.725070   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:52:12.224849   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:52:12.724465   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:52:13.225195   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:52:13.724896   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:52:14.224491   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:52:14.724978   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:52:15.224738   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:52:15.724810   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:52:16.224939   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:52:16.725121   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:52:17.224272   75954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
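After the RBAC setup the log settles into a fixed-interval poll: roughly every 500ms minikube re-runs kubectl get sa default against the new API server and keeps going until the default service account exists or a deadline passes. A small sketch of that wait loop, using kubectl from PATH rather than the versioned binary path in the log (interval and timeout are illustrative, not minikube's actual values):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it succeeds,
    // mirroring the repeated "get sa default" lines in the log above.
    func waitForDefaultSA(kubeconfig string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil // the default service account exists; the cluster is usable
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 500*time.Millisecond, 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }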
	
	
	==> CRI-O <==
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.812498585Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712001138812468834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=594dec58-16ca-4fad-83f8-e32a63fb74ce name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.813142922Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4dbee187-9508-40a1-a235-0a7892361b71 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.813216332Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4dbee187-9508-40a1-a235-0a7892361b71 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.813474977Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b8e68f339de5aabe221346199d822c1b5ddea21d7db127a33649a98290d7828,PodSandboxId:05a1ecbece859bb687f1f7c87b81d94bcc34d8c4cfc2ce964a1af6767cac0980,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000178213437449,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-fx6hf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c07b740-3374-4a54-a786-784b23ec6b83,},Annotations:map[string]string{io.kubernetes.container.hash: 4d9197d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48da0daaed361d9b2ac31516ceefe1d139fefed8bd29120857dbb518cd0b37c,PodSandboxId:7efef89ca0ece46a2d45c5f3e7b1fbbc0b0b1c7bc7165d5b391eb8c6ca6160eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000178113220801,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hwbw6,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 7b12145a-2689-47e9-9724-d80790ed079c,},Annotations:map[string]string{io.kubernetes.container.hash: 16a80cfd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0843b91761c5daeaab1269368aaf342feaccd94a7c047a6e1a440c82a308249f,PodSandboxId:eb961299b2b5969ecf7d07ffcee4669a43569f76212f1b554ad7365a69bd200f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNI
NG,CreatedAt:1712000177901575392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mbs4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffccbae0-7538-4a75-a6ce-afce49865f07,},Annotations:map[string]string{io.kubernetes.container.hash: 5cb0570b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3945afdef36c0a42c6c2597baed81cb27663f89912da17b7c026add868d0b02e,PodSandboxId:b10c1f3540bdb9f2555f329c4806c77af88fe248106bc9ab2f5e036d610f0d20,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17120001774
49418688,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcff0d1d-a555-4b25-9aa5-7ab1188c21fd,},Annotations:map[string]string{io.kubernetes.container.hash: b997ae06,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ca30006b9112dfd948bf271564137d17fdc5584ca52aa74b709acffa7651b9,PodSandboxId:55c2751e91071735f77e489fc672e2c953faa6474ab08045e6a8bc00dd36745f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712000156698353392,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cc3c8c20214dafbf32ab81b034b1d9,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af00be29964479e217c8d9c6a3de0ed6a2b2ca3f03344c9b1ef869b474f8161,PodSandboxId:0774ff5dbf1c87860057ec0b08579f55d5a695f3cdf274366d9574195abae87f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712000156695296711,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e55eef5d459380400f8def1b6fef235c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc65a14cc9d3896ed4a0aab8e1ef8215bf34c52e9af1d0b381a685d67ba785b6,PodSandboxId:64afbc2eccd701da86afdf0443707ab70c3710cd95c3fd6a9452cd8c2f580a8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712000156711892515,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21d2ec505bee6951e4280a8eb9da666,},Annotations:map[string]string{io.kubernetes.container.hash: f62b4a34,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:025ab47445c0ce9c1bbf3521e04360d8e449f5a9cc3b9cfc32faadb0b088b625,PodSandboxId:8ce2d8941be452051ac31da325ca6913ada7cc1a63bba38f822adefe1ae158ab,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712000156580774809,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48e9bdc35fa015990becffe532986ba,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8fd1d4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4dbee187-9508-40a1-a235-0a7892361b71 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.876337569Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d271e1e3-f12d-431a-81f5-b27791475f93 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.876449111Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d271e1e3-f12d-431a-81f5-b27791475f93 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.878662897Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bd56cfb9-1e07-48af-a5b7-faf05dd84499 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.879269223Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712001138879239098,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bd56cfb9-1e07-48af-a5b7-faf05dd84499 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.880360988Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=742f1df6-f8de-453e-8ab9-febb1c1e0431 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.880453813Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=742f1df6-f8de-453e-8ab9-febb1c1e0431 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.880835581Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b8e68f339de5aabe221346199d822c1b5ddea21d7db127a33649a98290d7828,PodSandboxId:05a1ecbece859bb687f1f7c87b81d94bcc34d8c4cfc2ce964a1af6767cac0980,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000178213437449,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-fx6hf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c07b740-3374-4a54-a786-784b23ec6b83,},Annotations:map[string]string{io.kubernetes.container.hash: 4d9197d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48da0daaed361d9b2ac31516ceefe1d139fefed8bd29120857dbb518cd0b37c,PodSandboxId:7efef89ca0ece46a2d45c5f3e7b1fbbc0b0b1c7bc7165d5b391eb8c6ca6160eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000178113220801,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hwbw6,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 7b12145a-2689-47e9-9724-d80790ed079c,},Annotations:map[string]string{io.kubernetes.container.hash: 16a80cfd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0843b91761c5daeaab1269368aaf342feaccd94a7c047a6e1a440c82a308249f,PodSandboxId:eb961299b2b5969ecf7d07ffcee4669a43569f76212f1b554ad7365a69bd200f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNI
NG,CreatedAt:1712000177901575392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mbs4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffccbae0-7538-4a75-a6ce-afce49865f07,},Annotations:map[string]string{io.kubernetes.container.hash: 5cb0570b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3945afdef36c0a42c6c2597baed81cb27663f89912da17b7c026add868d0b02e,PodSandboxId:b10c1f3540bdb9f2555f329c4806c77af88fe248106bc9ab2f5e036d610f0d20,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17120001774
49418688,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcff0d1d-a555-4b25-9aa5-7ab1188c21fd,},Annotations:map[string]string{io.kubernetes.container.hash: b997ae06,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ca30006b9112dfd948bf271564137d17fdc5584ca52aa74b709acffa7651b9,PodSandboxId:55c2751e91071735f77e489fc672e2c953faa6474ab08045e6a8bc00dd36745f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712000156698353392,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cc3c8c20214dafbf32ab81b034b1d9,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af00be29964479e217c8d9c6a3de0ed6a2b2ca3f03344c9b1ef869b474f8161,PodSandboxId:0774ff5dbf1c87860057ec0b08579f55d5a695f3cdf274366d9574195abae87f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712000156695296711,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e55eef5d459380400f8def1b6fef235c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc65a14cc9d3896ed4a0aab8e1ef8215bf34c52e9af1d0b381a685d67ba785b6,PodSandboxId:64afbc2eccd701da86afdf0443707ab70c3710cd95c3fd6a9452cd8c2f580a8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712000156711892515,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21d2ec505bee6951e4280a8eb9da666,},Annotations:map[string]string{io.kubernetes.container.hash: f62b4a34,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:025ab47445c0ce9c1bbf3521e04360d8e449f5a9cc3b9cfc32faadb0b088b625,PodSandboxId:8ce2d8941be452051ac31da325ca6913ada7cc1a63bba38f822adefe1ae158ab,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712000156580774809,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48e9bdc35fa015990becffe532986ba,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8fd1d4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=742f1df6-f8de-453e-8ab9-febb1c1e0431 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.925284056Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a7dbe04d-458b-4a2e-a2db-e82740b4fdad name=/runtime.v1.RuntimeService/Version
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.925396831Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a7dbe04d-458b-4a2e-a2db-e82740b4fdad name=/runtime.v1.RuntimeService/Version
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.932040223Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a7972149-1820-4a47-90ee-efa3e0916236 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.932445020Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712001138932423673,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7972149-1820-4a47-90ee-efa3e0916236 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.933094414Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=924ff3c8-c4d0-43f0-9c1a-0e70ecf0d6bf name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.933170413Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=924ff3c8-c4d0-43f0-9c1a-0e70ecf0d6bf name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.933454701Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b8e68f339de5aabe221346199d822c1b5ddea21d7db127a33649a98290d7828,PodSandboxId:05a1ecbece859bb687f1f7c87b81d94bcc34d8c4cfc2ce964a1af6767cac0980,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000178213437449,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-fx6hf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c07b740-3374-4a54-a786-784b23ec6b83,},Annotations:map[string]string{io.kubernetes.container.hash: 4d9197d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48da0daaed361d9b2ac31516ceefe1d139fefed8bd29120857dbb518cd0b37c,PodSandboxId:7efef89ca0ece46a2d45c5f3e7b1fbbc0b0b1c7bc7165d5b391eb8c6ca6160eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000178113220801,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hwbw6,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 7b12145a-2689-47e9-9724-d80790ed079c,},Annotations:map[string]string{io.kubernetes.container.hash: 16a80cfd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0843b91761c5daeaab1269368aaf342feaccd94a7c047a6e1a440c82a308249f,PodSandboxId:eb961299b2b5969ecf7d07ffcee4669a43569f76212f1b554ad7365a69bd200f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNI
NG,CreatedAt:1712000177901575392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mbs4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffccbae0-7538-4a75-a6ce-afce49865f07,},Annotations:map[string]string{io.kubernetes.container.hash: 5cb0570b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3945afdef36c0a42c6c2597baed81cb27663f89912da17b7c026add868d0b02e,PodSandboxId:b10c1f3540bdb9f2555f329c4806c77af88fe248106bc9ab2f5e036d610f0d20,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17120001774
49418688,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcff0d1d-a555-4b25-9aa5-7ab1188c21fd,},Annotations:map[string]string{io.kubernetes.container.hash: b997ae06,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ca30006b9112dfd948bf271564137d17fdc5584ca52aa74b709acffa7651b9,PodSandboxId:55c2751e91071735f77e489fc672e2c953faa6474ab08045e6a8bc00dd36745f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712000156698353392,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cc3c8c20214dafbf32ab81b034b1d9,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af00be29964479e217c8d9c6a3de0ed6a2b2ca3f03344c9b1ef869b474f8161,PodSandboxId:0774ff5dbf1c87860057ec0b08579f55d5a695f3cdf274366d9574195abae87f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712000156695296711,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e55eef5d459380400f8def1b6fef235c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc65a14cc9d3896ed4a0aab8e1ef8215bf34c52e9af1d0b381a685d67ba785b6,PodSandboxId:64afbc2eccd701da86afdf0443707ab70c3710cd95c3fd6a9452cd8c2f580a8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712000156711892515,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21d2ec505bee6951e4280a8eb9da666,},Annotations:map[string]string{io.kubernetes.container.hash: f62b4a34,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:025ab47445c0ce9c1bbf3521e04360d8e449f5a9cc3b9cfc32faadb0b088b625,PodSandboxId:8ce2d8941be452051ac31da325ca6913ada7cc1a63bba38f822adefe1ae158ab,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712000156580774809,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48e9bdc35fa015990becffe532986ba,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8fd1d4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=924ff3c8-c4d0-43f0-9c1a-0e70ecf0d6bf name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.971249656Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d6f681a3-6722-45e8-b3f2-eb359bc5880b name=/runtime.v1.RuntimeService/Version
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.971360392Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d6f681a3-6722-45e8-b3f2-eb359bc5880b name=/runtime.v1.RuntimeService/Version
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.973110582Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=728d3dfa-492f-461b-9f8e-a4dfa7197331 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.973528180Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712001138973503663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=728d3dfa-492f-461b-9f8e-a4dfa7197331 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.974184433Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e6d2dcc-3c5e-46a0-a659-3464bc9ed9cc name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.974263879Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e6d2dcc-3c5e-46a0-a659-3464bc9ed9cc name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:52:18 embed-certs-882095 crio[694]: time="2024-04-01 19:52:18.974506795Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6b8e68f339de5aabe221346199d822c1b5ddea21d7db127a33649a98290d7828,PodSandboxId:05a1ecbece859bb687f1f7c87b81d94bcc34d8c4cfc2ce964a1af6767cac0980,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000178213437449,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-fx6hf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c07b740-3374-4a54-a786-784b23ec6b83,},Annotations:map[string]string{io.kubernetes.container.hash: 4d9197d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b48da0daaed361d9b2ac31516ceefe1d139fefed8bd29120857dbb518cd0b37c,PodSandboxId:7efef89ca0ece46a2d45c5f3e7b1fbbc0b0b1c7bc7165d5b391eb8c6ca6160eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000178113220801,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-hwbw6,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 7b12145a-2689-47e9-9724-d80790ed079c,},Annotations:map[string]string{io.kubernetes.container.hash: 16a80cfd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0843b91761c5daeaab1269368aaf342feaccd94a7c047a6e1a440c82a308249f,PodSandboxId:eb961299b2b5969ecf7d07ffcee4669a43569f76212f1b554ad7365a69bd200f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNI
NG,CreatedAt:1712000177901575392,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mbs4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffccbae0-7538-4a75-a6ce-afce49865f07,},Annotations:map[string]string{io.kubernetes.container.hash: 5cb0570b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3945afdef36c0a42c6c2597baed81cb27663f89912da17b7c026add868d0b02e,PodSandboxId:b10c1f3540bdb9f2555f329c4806c77af88fe248106bc9ab2f5e036d610f0d20,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17120001774
49418688,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcff0d1d-a555-4b25-9aa5-7ab1188c21fd,},Annotations:map[string]string{io.kubernetes.container.hash: b997ae06,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68ca30006b9112dfd948bf271564137d17fdc5584ca52aa74b709acffa7651b9,PodSandboxId:55c2751e91071735f77e489fc672e2c953faa6474ab08045e6a8bc00dd36745f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712000156698353392,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1cc3c8c20214dafbf32ab81b034b1d9,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af00be29964479e217c8d9c6a3de0ed6a2b2ca3f03344c9b1ef869b474f8161,PodSandboxId:0774ff5dbf1c87860057ec0b08579f55d5a695f3cdf274366d9574195abae87f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712000156695296711,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e55eef5d459380400f8def1b6fef235c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc65a14cc9d3896ed4a0aab8e1ef8215bf34c52e9af1d0b381a685d67ba785b6,PodSandboxId:64afbc2eccd701da86afdf0443707ab70c3710cd95c3fd6a9452cd8c2f580a8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1712000156711892515,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a21d2ec505bee6951e4280a8eb9da666,},Annotations:map[string]string{io.kubernetes.container.hash: f62b4a34,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:025ab47445c0ce9c1bbf3521e04360d8e449f5a9cc3b9cfc32faadb0b088b625,PodSandboxId:8ce2d8941be452051ac31da325ca6913ada7cc1a63bba38f822adefe1ae158ab,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712000156580774809,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-882095,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b48e9bdc35fa015990becffe532986ba,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8fd1d4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e6d2dcc-3c5e-46a0-a659-3464bc9ed9cc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6b8e68f339de5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   05a1ecbece859       coredns-76f75df574-fx6hf
	b48da0daaed36       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   7efef89ca0ece       coredns-76f75df574-hwbw6
	0843b91761c5d       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   16 minutes ago      Running             kube-proxy                0                   eb961299b2b59       kube-proxy-mbs4m
	3945afdef36c0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   b10c1f3540bdb       storage-provisioner
	cc65a14cc9d38       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   16 minutes ago      Running             kube-apiserver            2                   64afbc2eccd70       kube-apiserver-embed-certs-882095
	68ca30006b911       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   16 minutes ago      Running             kube-scheduler            2                   55c2751e91071       kube-scheduler-embed-certs-882095
	6af00be299644       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   16 minutes ago      Running             kube-controller-manager   2                   0774ff5dbf1c8       kube-controller-manager-embed-certs-882095
	025ab47445c0c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   16 minutes ago      Running             etcd                      2                   8ce2d8941be45       etcd-embed-certs-882095
	
	
	==> coredns [6b8e68f339de5aabe221346199d822c1b5ddea21d7db127a33649a98290d7828] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b48da0daaed361d9b2ac31516ceefe1d139fefed8bd29120857dbb518cd0b37c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-882095
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-882095
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=embed-certs-882095
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_01T19_36_02_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 19:35:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-882095
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 19:52:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 19:51:41 +0000   Mon, 01 Apr 2024 19:35:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 19:51:41 +0000   Mon, 01 Apr 2024 19:35:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 19:51:41 +0000   Mon, 01 Apr 2024 19:35:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 19:51:41 +0000   Mon, 01 Apr 2024 19:36:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.190
	  Hostname:    embed-certs-882095
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 806f0049a29b4e6f9f7bd026d87d4347
	  System UUID:                806f0049-a29b-4e6f-9f7b-d026d87d4347
	  Boot ID:                    fb23a3fc-e023-4508-a5f4-6fc43a813270
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-fx6hf                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-76f75df574-hwbw6                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-882095                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-882095             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-882095    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-mbs4m                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-882095             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-57f55c9bc5-dktr6               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node embed-certs-882095 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node embed-certs-882095 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node embed-certs-882095 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             16m   kubelet          Node embed-certs-882095 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                16m   kubelet          Node embed-certs-882095 status is now: NodeReady
	  Normal  RegisteredNode           16m   node-controller  Node embed-certs-882095 event: Registered Node embed-certs-882095 in Controller
	
	
	==> dmesg <==
	[  +0.051843] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042462] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.592134] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.500482] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.684030] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.944730] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.058870] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071296] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[Apr 1 19:31] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.173357] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.329537] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +5.034549] systemd-fstab-generator[776]: Ignoring "noauto" option for root device
	[  +0.059087] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.384042] systemd-fstab-generator[901]: Ignoring "noauto" option for root device
	[  +4.670677] kauditd_printk_skb: 97 callbacks suppressed
	[  +8.380208] kauditd_printk_skb: 74 callbacks suppressed
	[Apr 1 19:35] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.956839] systemd-fstab-generator[3440]: Ignoring "noauto" option for root device
	[  +4.520898] kauditd_printk_skb: 53 callbacks suppressed
	[Apr 1 19:36] systemd-fstab-generator[3766]: Ignoring "noauto" option for root device
	[ +13.921687] systemd-fstab-generator[3961]: Ignoring "noauto" option for root device
	[  +0.084326] kauditd_printk_skb: 14 callbacks suppressed
	[Apr 1 19:37] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [025ab47445c0ce9c1bbf3521e04360d8e449f5a9cc3b9cfc32faadb0b088b625] <==
	{"level":"info","ts":"2024-04-01T19:35:57.008838Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.190:2380"}
	{"level":"info","ts":"2024-04-01T19:35:57.623806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-01T19:35:57.623938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-01T19:35:57.623994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a received MsgPreVoteResp from dc6e2f4e9dcc679a at term 1"}
	{"level":"info","ts":"2024-04-01T19:35:57.624027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a became candidate at term 2"}
	{"level":"info","ts":"2024-04-01T19:35:57.624051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a received MsgVoteResp from dc6e2f4e9dcc679a at term 2"}
	{"level":"info","ts":"2024-04-01T19:35:57.624078Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a became leader at term 2"}
	{"level":"info","ts":"2024-04-01T19:35:57.624107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dc6e2f4e9dcc679a elected leader dc6e2f4e9dcc679a at term 2"}
	{"level":"info","ts":"2024-04-01T19:35:57.629017Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"dc6e2f4e9dcc679a","local-member-attributes":"{Name:embed-certs-882095 ClientURLs:[https://192.168.39.190:2379]}","request-path":"/0/members/dc6e2f4e9dcc679a/attributes","cluster-id":"22dc5a3adec033ed","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-01T19:35:57.629805Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:35:57.630001Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T19:35:57.630579Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T19:35:57.636747Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-01T19:35:57.636794Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-01T19:35:57.636831Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"22dc5a3adec033ed","local-member-id":"dc6e2f4e9dcc679a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:35:57.636919Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:35:57.636939Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:35:57.638511Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.190:2379"}
	{"level":"info","ts":"2024-04-01T19:35:57.643881Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-01T19:45:57.710315Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":676}
	{"level":"info","ts":"2024-04-01T19:45:57.720113Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":676,"took":"9.117275ms","hash":3476782287,"current-db-size-bytes":2154496,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2154496,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-04-01T19:45:57.720173Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3476782287,"revision":676,"compact-revision":-1}
	{"level":"info","ts":"2024-04-01T19:50:57.723282Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":919}
	{"level":"info","ts":"2024-04-01T19:50:57.727963Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":919,"took":"4.299808ms","hash":726036478,"current-db-size-bytes":2154496,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1552384,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-01T19:50:57.728026Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":726036478,"revision":919,"compact-revision":676}
	
	
	==> kernel <==
	 19:52:19 up 21 min,  0 users,  load average: 0.02, 0.11, 0.14
	Linux embed-certs-882095 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cc65a14cc9d3896ed4a0aab8e1ef8215bf34c52e9af1d0b381a685d67ba785b6] <==
	I0401 19:47:00.357792       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:49:00.357055       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:49:00.357134       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0401 19:49:00.357143       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:49:00.358299       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:49:00.358469       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 19:49:00.358512       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:50:59.359322       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:50:59.359861       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0401 19:51:00.360533       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:51:00.360609       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0401 19:51:00.360621       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:51:00.360824       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:51:00.360901       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 19:51:00.362132       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:52:00.361290       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:52:00.361403       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0401 19:52:00.361418       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:52:00.362666       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:52:00.362830       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 19:52:00.362841       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [6af00be29964479e217c8d9c6a3de0ed6a2b2ca3f03344c9b1ef869b474f8161] <==
	I0401 19:46:46.369214       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:47:15.957605       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:47:16.378062       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0401 19:47:23.772587       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="150.158µs"
	I0401 19:47:35.770873       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="115.457µs"
	E0401 19:47:45.963600       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:47:46.386889       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:48:15.970923       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:48:16.396330       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:48:45.976840       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:48:46.405629       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:49:15.983173       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:49:16.414162       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:49:45.988384       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:49:46.422537       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:50:15.994830       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:50:16.430578       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:50:46.000397       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:50:46.442455       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:51:16.006032       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:51:16.455320       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:51:46.014555       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:51:46.464008       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:52:16.021136       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:52:16.471665       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0843b91761c5daeaab1269368aaf342feaccd94a7c047a6e1a440c82a308249f] <==
	I0401 19:36:18.422031       1 server_others.go:72] "Using iptables proxy"
	I0401 19:36:18.442888       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.190"]
	I0401 19:36:18.531240       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0401 19:36:18.531314       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 19:36:18.531342       1 server_others.go:168] "Using iptables Proxier"
	I0401 19:36:18.535404       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 19:36:18.535959       1 server.go:865] "Version info" version="v1.29.3"
	I0401 19:36:18.536012       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 19:36:18.538024       1 config.go:188] "Starting service config controller"
	I0401 19:36:18.538111       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0401 19:36:18.538183       1 config.go:97] "Starting endpoint slice config controller"
	I0401 19:36:18.538191       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0401 19:36:18.540405       1 config.go:315] "Starting node config controller"
	I0401 19:36:18.540453       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0401 19:36:18.638768       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0401 19:36:18.639029       1 shared_informer.go:318] Caches are synced for service config
	I0401 19:36:18.640660       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [68ca30006b9112dfd948bf271564137d17fdc5584ca52aa74b709acffa7651b9] <==
	W0401 19:35:59.421112       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0401 19:35:59.422823       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0401 19:35:59.422960       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 19:35:59.422997       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0401 19:35:59.423073       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0401 19:35:59.423104       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0401 19:35:59.423191       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 19:35:59.423226       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0401 19:35:59.423287       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 19:35:59.423315       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0401 19:35:59.431965       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 19:35:59.432011       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0401 19:36:00.282250       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0401 19:36:00.282311       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0401 19:36:00.320668       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 19:36:00.320753       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0401 19:36:00.369215       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 19:36:00.369269       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0401 19:36:00.370429       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 19:36:00.370474       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0401 19:36:00.443893       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 19:36:00.443949       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 19:36:00.475488       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0401 19:36:00.475545       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0401 19:36:03.185006       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 19:50:02 embed-certs-882095 kubelet[3773]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 19:50:12 embed-certs-882095 kubelet[3773]: E0401 19:50:12.752673    3773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dktr6" podUID="c6adfcab-c746-4ad8-abe2-8b300389a4f5"
	Apr 01 19:50:25 embed-certs-882095 kubelet[3773]: E0401 19:50:25.751990    3773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dktr6" podUID="c6adfcab-c746-4ad8-abe2-8b300389a4f5"
	Apr 01 19:50:38 embed-certs-882095 kubelet[3773]: E0401 19:50:38.753191    3773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dktr6" podUID="c6adfcab-c746-4ad8-abe2-8b300389a4f5"
	Apr 01 19:50:49 embed-certs-882095 kubelet[3773]: E0401 19:50:49.753265    3773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dktr6" podUID="c6adfcab-c746-4ad8-abe2-8b300389a4f5"
	Apr 01 19:51:00 embed-certs-882095 kubelet[3773]: E0401 19:51:00.754387    3773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dktr6" podUID="c6adfcab-c746-4ad8-abe2-8b300389a4f5"
	Apr 01 19:51:02 embed-certs-882095 kubelet[3773]: E0401 19:51:02.835493    3773 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 19:51:02 embed-certs-882095 kubelet[3773]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 19:51:02 embed-certs-882095 kubelet[3773]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 19:51:02 embed-certs-882095 kubelet[3773]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 19:51:02 embed-certs-882095 kubelet[3773]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 19:51:11 embed-certs-882095 kubelet[3773]: E0401 19:51:11.753656    3773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dktr6" podUID="c6adfcab-c746-4ad8-abe2-8b300389a4f5"
	Apr 01 19:51:26 embed-certs-882095 kubelet[3773]: E0401 19:51:26.752607    3773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dktr6" podUID="c6adfcab-c746-4ad8-abe2-8b300389a4f5"
	Apr 01 19:51:37 embed-certs-882095 kubelet[3773]: E0401 19:51:37.753951    3773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dktr6" podUID="c6adfcab-c746-4ad8-abe2-8b300389a4f5"
	Apr 01 19:51:48 embed-certs-882095 kubelet[3773]: E0401 19:51:48.753461    3773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dktr6" podUID="c6adfcab-c746-4ad8-abe2-8b300389a4f5"
	Apr 01 19:52:00 embed-certs-882095 kubelet[3773]: E0401 19:52:00.754406    3773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-dktr6" podUID="c6adfcab-c746-4ad8-abe2-8b300389a4f5"
	Apr 01 19:52:02 embed-certs-882095 kubelet[3773]: E0401 19:52:02.832783    3773 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 19:52:02 embed-certs-882095 kubelet[3773]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 19:52:02 embed-certs-882095 kubelet[3773]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 19:52:02 embed-certs-882095 kubelet[3773]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 19:52:02 embed-certs-882095 kubelet[3773]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 19:52:15 embed-certs-882095 kubelet[3773]: E0401 19:52:15.766020    3773 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 01 19:52:15 embed-certs-882095 kubelet[3773]: E0401 19:52:15.766069    3773 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 01 19:52:15 embed-certs-882095 kubelet[3773]: E0401 19:52:15.766276    3773 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nnw8d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pr
obeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-dktr6_kube-system(c6adfcab-c746-4ad8-abe2-8b300389a4f5): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Apr 01 19:52:15 embed-certs-882095 kubelet[3773]: E0401 19:52:15.766313    3773 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-dktr6" podUID="c6adfcab-c746-4ad8-abe2-8b300389a4f5"
	
	
	==> storage-provisioner [3945afdef36c0a42c6c2597baed81cb27663f89912da17b7c026add868d0b02e] <==
	I0401 19:36:17.637798       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0401 19:36:17.664429       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0401 19:36:17.664589       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0401 19:36:17.701009       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0401 19:36:17.702297       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-882095_f78022a6-6657-40d3-a79c-93195cbd8c04!
	I0401 19:36:17.720534       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"34547e79-08e2-4192-8dc1-bf0d1269fb5d", APIVersion:"v1", ResourceVersion:"391", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-882095_f78022a6-6657-40d3-a79c-93195cbd8c04 became leader
	I0401 19:36:17.802638       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-882095_f78022a6-6657-40d3-a79c-93195cbd8c04!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-882095 -n embed-certs-882095
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-882095 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-dktr6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-882095 describe pod metrics-server-57f55c9bc5-dktr6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-882095 describe pod metrics-server-57f55c9bc5-dktr6: exit status 1 (79.334153ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-dktr6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-882095 describe pod metrics-server-57f55c9bc5-dktr6: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (414.58s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (446.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-734648 -n default-k8s-diff-port-734648
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-01 19:53:12.414206584 +0000 UTC m=+6421.869757764
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-734648 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-734648 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.253µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-734648 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-734648 -n default-k8s-diff-port-734648
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-734648 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-734648 logs -n 25: (1.381071012s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-734648  | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC | 01 Apr 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC |                     |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-472858                  | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-472858                                   | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC | 01 Apr 24 19:38 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-163608        | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-882095                 | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-882095                                  | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC | 01 Apr 24 19:36 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-734648       | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:36 UTC |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-163608                              | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-163608             | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-163608                              | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| delete  | -p old-k8s-version-163608                              | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:51 UTC | 01 Apr 24 19:51 UTC |
	| start   | -p newest-cni-705837 --memory=2200 --alsologtostderr   | newest-cni-705837            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:51 UTC | 01 Apr 24 19:52 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| delete  | -p no-preload-472858                                   | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:51 UTC | 01 Apr 24 19:51 UTC |
	| delete  | -p embed-certs-882095                                  | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:52 UTC | 01 Apr 24 19:52 UTC |
	| addons  | enable metrics-server -p newest-cni-705837             | newest-cni-705837            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:52 UTC | 01 Apr 24 19:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p newest-cni-705837                                   | newest-cni-705837            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:52 UTC | 01 Apr 24 19:52 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p newest-cni-705837                  | newest-cni-705837            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:52 UTC | 01 Apr 24 19:52 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p newest-cni-705837 --memory=2200 --alsologtostderr   | newest-cni-705837            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:52 UTC | 01 Apr 24 19:53 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| image   | newest-cni-705837 image list                           | newest-cni-705837            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:53 UTC | 01 Apr 24 19:53 UTC |
	|         | --format=json                                          |                              |         |                |                     |                     |
	| pause   | -p newest-cni-705837                                   | newest-cni-705837            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:53 UTC | 01 Apr 24 19:53 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| unpause | -p newest-cni-705837                                   | newest-cni-705837            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:53 UTC | 01 Apr 24 19:53 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| delete  | -p newest-cni-705837                                   | newest-cni-705837            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:53 UTC | 01 Apr 24 19:53 UTC |
	| delete  | -p newest-cni-705837                                   | newest-cni-705837            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:53 UTC | 01 Apr 24 19:53 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 19:52:29
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 19:52:29.768522   76835 out.go:291] Setting OutFile to fd 1 ...
	I0401 19:52:29.768774   76835 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:52:29.768783   76835 out.go:304] Setting ErrFile to fd 2...
	I0401 19:52:29.768788   76835 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:52:29.768988   76835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 19:52:29.769540   76835 out.go:298] Setting JSON to false
	I0401 19:52:29.770457   76835 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":9302,"bootTime":1711991848,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:52:29.770515   76835 start.go:139] virtualization: kvm guest
	I0401 19:52:29.772978   76835 out.go:177] * [newest-cni-705837] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 19:52:29.775098   76835 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 19:52:29.775151   76835 notify.go:220] Checking for updates...
	I0401 19:52:29.776567   76835 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:52:29.777743   76835 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:52:29.779045   76835 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:52:29.780278   76835 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 19:52:29.781628   76835 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 19:52:29.783381   76835 config.go:182] Loaded profile config "newest-cni-705837": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0401 19:52:29.783777   76835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:52:29.783827   76835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:52:29.798553   76835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40363
	I0401 19:52:29.798945   76835 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:52:29.799480   76835 main.go:141] libmachine: Using API Version  1
	I0401 19:52:29.799498   76835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:52:29.799838   76835 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:52:29.800048   76835 main.go:141] libmachine: (newest-cni-705837) Calling .DriverName
	I0401 19:52:29.800309   76835 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 19:52:29.800631   76835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:52:29.800672   76835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:52:29.815124   76835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40577
	I0401 19:52:29.815512   76835 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:52:29.816030   76835 main.go:141] libmachine: Using API Version  1
	I0401 19:52:29.816057   76835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:52:29.816384   76835 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:52:29.816575   76835 main.go:141] libmachine: (newest-cni-705837) Calling .DriverName
	I0401 19:52:29.852863   76835 out.go:177] * Using the kvm2 driver based on existing profile
	I0401 19:52:29.854180   76835 start.go:297] selected driver: kvm2
	I0401 19:52:29.854195   76835 start.go:901] validating driver "kvm2" against &{Name:newest-cni-705837 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:newest-cni-705837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.29 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:52:29.854302   76835 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 19:52:29.855019   76835 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:52:29.855088   76835 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 19:52:29.870647   76835 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 19:52:29.871061   76835 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0401 19:52:29.871136   76835 cni.go:84] Creating CNI manager for ""
	I0401 19:52:29.871154   76835 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:52:29.871207   76835 start.go:340] cluster config:
	{Name:newest-cni-705837 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:newest-cni-705837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.29 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:52:29.871329   76835 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:52:29.873292   76835 out.go:177] * Starting "newest-cni-705837" primary control-plane node in "newest-cni-705837" cluster
	I0401 19:52:29.874927   76835 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0401 19:52:29.874977   76835 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0401 19:52:29.875023   76835 cache.go:56] Caching tarball of preloaded images
	I0401 19:52:29.875125   76835 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 19:52:29.875139   76835 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.0 on crio
	I0401 19:52:29.875289   76835 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/config.json ...
	I0401 19:52:29.875498   76835 start.go:360] acquireMachinesLock for newest-cni-705837: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 19:52:29.875558   76835 start.go:364] duration metric: took 38.196µs to acquireMachinesLock for "newest-cni-705837"
	I0401 19:52:29.875581   76835 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:52:29.875592   76835 fix.go:54] fixHost starting: 
	I0401 19:52:29.875890   76835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:52:29.875939   76835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:52:29.890541   76835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42189
	I0401 19:52:29.890983   76835 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:52:29.891469   76835 main.go:141] libmachine: Using API Version  1
	I0401 19:52:29.891496   76835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:52:29.891749   76835 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:52:29.891905   76835 main.go:141] libmachine: (newest-cni-705837) Calling .DriverName
	I0401 19:52:29.892059   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetState
	I0401 19:52:29.893677   76835 fix.go:112] recreateIfNeeded on newest-cni-705837: state=Stopped err=<nil>
	I0401 19:52:29.893702   76835 main.go:141] libmachine: (newest-cni-705837) Calling .DriverName
	W0401 19:52:29.893840   76835 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:52:29.896560   76835 out.go:177] * Restarting existing kvm2 VM for "newest-cni-705837" ...
	I0401 19:52:29.897769   76835 main.go:141] libmachine: (newest-cni-705837) Calling .Start
	I0401 19:52:29.897931   76835 main.go:141] libmachine: (newest-cni-705837) Ensuring networks are active...
	I0401 19:52:29.898739   76835 main.go:141] libmachine: (newest-cni-705837) Ensuring network default is active
	I0401 19:52:29.899059   76835 main.go:141] libmachine: (newest-cni-705837) Ensuring network mk-newest-cni-705837 is active
	I0401 19:52:29.899444   76835 main.go:141] libmachine: (newest-cni-705837) Getting domain xml...
	I0401 19:52:29.900279   76835 main.go:141] libmachine: (newest-cni-705837) Creating domain...
	I0401 19:52:31.098299   76835 main.go:141] libmachine: (newest-cni-705837) Waiting to get IP...
	I0401 19:52:31.099235   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:31.099620   76835 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:52:31.099716   76835 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:52:31.099624   76870 retry.go:31] will retry after 277.892795ms: waiting for machine to come up
	I0401 19:52:31.379259   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:31.379755   76835 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:52:31.379783   76835 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:52:31.379699   76870 retry.go:31] will retry after 350.788718ms: waiting for machine to come up
	I0401 19:52:31.732346   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:31.732817   76835 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:52:31.732847   76835 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:52:31.732764   76870 retry.go:31] will retry after 382.369367ms: waiting for machine to come up
	I0401 19:52:32.116334   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:32.116808   76835 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:52:32.116848   76835 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:52:32.116772   76870 retry.go:31] will retry after 536.742591ms: waiting for machine to come up
	I0401 19:52:32.655197   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:32.655586   76835 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:52:32.655617   76835 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:52:32.655541   76870 retry.go:31] will retry after 466.934918ms: waiting for machine to come up
	I0401 19:52:33.124088   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:33.124531   76835 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:52:33.124564   76835 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:52:33.124469   76870 retry.go:31] will retry after 617.09915ms: waiting for machine to come up
	I0401 19:52:33.743207   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:33.743653   76835 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:52:33.743682   76835 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:52:33.743592   76870 retry.go:31] will retry after 1.187379681s: waiting for machine to come up
	I0401 19:52:34.932134   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:34.932623   76835 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:52:34.932663   76835 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:52:34.932590   76870 retry.go:31] will retry after 1.488115444s: waiting for machine to come up
	I0401 19:52:36.422743   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:36.423214   76835 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:52:36.423264   76835 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:52:36.423155   76870 retry.go:31] will retry after 1.858015368s: waiting for machine to come up
	I0401 19:52:38.283068   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:38.283510   76835 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:52:38.283540   76835 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:52:38.283451   76870 retry.go:31] will retry after 1.707079456s: waiting for machine to come up
	I0401 19:52:39.992594   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:39.993053   76835 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:52:39.993079   76835 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:52:39.993007   76870 retry.go:31] will retry after 2.008895454s: waiting for machine to come up
	I0401 19:52:42.004389   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:42.004824   76835 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:52:42.004910   76835 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:52:42.004819   76870 retry.go:31] will retry after 3.585823476s: waiting for machine to come up
	I0401 19:52:45.591758   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:45.592181   76835 main.go:141] libmachine: (newest-cni-705837) DBG | unable to find current IP address of domain newest-cni-705837 in network mk-newest-cni-705837
	I0401 19:52:45.592211   76835 main.go:141] libmachine: (newest-cni-705837) DBG | I0401 19:52:45.592140   76870 retry.go:31] will retry after 2.821970622s: waiting for machine to come up
	I0401 19:52:48.416191   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:48.416703   76835 main.go:141] libmachine: (newest-cni-705837) Found IP for machine: 192.168.50.29
	I0401 19:52:48.416748   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has current primary IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:48.416758   76835 main.go:141] libmachine: (newest-cni-705837) Reserving static IP address...
	I0401 19:52:48.417169   76835 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "newest-cni-705837", mac: "52:54:00:27:87:01", ip: "192.168.50.29"} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:52:42 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:52:48.417213   76835 main.go:141] libmachine: (newest-cni-705837) Reserved static IP address: 192.168.50.29
	I0401 19:52:48.417234   76835 main.go:141] libmachine: (newest-cni-705837) DBG | skip adding static IP to network mk-newest-cni-705837 - found existing host DHCP lease matching {name: "newest-cni-705837", mac: "52:54:00:27:87:01", ip: "192.168.50.29"}
	I0401 19:52:48.417254   76835 main.go:141] libmachine: (newest-cni-705837) DBG | Getting to WaitForSSH function...
	I0401 19:52:48.417270   76835 main.go:141] libmachine: (newest-cni-705837) Waiting for SSH to be available...
	I0401 19:52:48.419349   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:48.419609   76835 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:52:42 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:52:48.419651   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:48.419769   76835 main.go:141] libmachine: (newest-cni-705837) DBG | Using SSH client type: external
	I0401 19:52:48.419816   76835 main.go:141] libmachine: (newest-cni-705837) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837/id_rsa (-rw-------)
	I0401 19:52:48.419851   76835 main.go:141] libmachine: (newest-cni-705837) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.29 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:52:48.419882   76835 main.go:141] libmachine: (newest-cni-705837) DBG | About to run SSH command:
	I0401 19:52:48.419894   76835 main.go:141] libmachine: (newest-cni-705837) DBG | exit 0
	I0401 19:52:48.546312   76835 main.go:141] libmachine: (newest-cni-705837) DBG | SSH cmd err, output: <nil>: 
	I0401 19:52:48.546664   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetConfigRaw
	I0401 19:52:48.547291   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetIP
	I0401 19:52:48.550090   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:48.550438   76835 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:52:42 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:52:48.550482   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:48.550703   76835 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/config.json ...
	I0401 19:52:48.550943   76835 machine.go:94] provisionDockerMachine start ...
	I0401 19:52:48.550962   76835 main.go:141] libmachine: (newest-cni-705837) Calling .DriverName
	I0401 19:52:48.551170   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:52:48.553270   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:48.553574   76835 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:52:42 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:52:48.553600   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:48.553753   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHPort
	I0401 19:52:48.553918   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:52:48.554047   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:52:48.554174   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHUsername
	I0401 19:52:48.554359   76835 main.go:141] libmachine: Using SSH client type: native
	I0401 19:52:48.554579   76835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.29 22 <nil> <nil>}
	I0401 19:52:48.554593   76835 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:52:48.666645   76835 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:52:48.666672   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetMachineName
	I0401 19:52:48.666905   76835 buildroot.go:166] provisioning hostname "newest-cni-705837"
	I0401 19:52:48.666938   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetMachineName
	I0401 19:52:48.667112   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:52:48.669913   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:48.670340   76835 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:52:42 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:52:48.670372   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:48.670515   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHPort
	I0401 19:52:48.670701   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:52:48.670853   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:52:48.670973   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHUsername
	I0401 19:52:48.671118   76835 main.go:141] libmachine: Using SSH client type: native
	I0401 19:52:48.671319   76835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.29 22 <nil> <nil>}
	I0401 19:52:48.671348   76835 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-705837 && echo "newest-cni-705837" | sudo tee /etc/hostname
	I0401 19:52:48.797715   76835 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-705837
	
	I0401 19:52:48.797740   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:52:48.800372   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:48.800725   76835 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:52:42 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:52:48.800754   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:48.800922   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHPort
	I0401 19:52:48.801114   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:52:48.801271   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:52:48.801428   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHUsername
	I0401 19:52:48.801608   76835 main.go:141] libmachine: Using SSH client type: native
	I0401 19:52:48.801821   76835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.29 22 <nil> <nil>}
	I0401 19:52:48.801838   76835 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-705837' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-705837/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-705837' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:52:48.929080   76835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:52:48.929122   76835 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:52:48.929153   76835 buildroot.go:174] setting up certificates
	I0401 19:52:48.929171   76835 provision.go:84] configureAuth start
	I0401 19:52:48.929190   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetMachineName
	I0401 19:52:48.929496   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetIP
	I0401 19:52:48.931884   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:48.932232   76835 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:52:42 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:52:48.932256   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:48.932424   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:52:48.934647   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:48.934961   76835 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:52:42 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:52:48.934989   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:48.935124   76835 provision.go:143] copyHostCerts
	I0401 19:52:48.935185   76835 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:52:48.935203   76835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:52:48.935288   76835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:52:48.935414   76835 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:52:48.935423   76835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:52:48.935449   76835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:52:48.935521   76835 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:52:48.935529   76835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:52:48.935550   76835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:52:48.935607   76835 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.newest-cni-705837 san=[127.0.0.1 192.168.50.29 localhost minikube newest-cni-705837]
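	The server certificate above is generated by minikube's own Go code rather than by a shell tool; purely as an illustrative sketch (openssl, the file names, and the 365-day validity are assumptions, not taken from minikube), a certificate with the same organization and SAN list could be produced roughly like this:

	    # Hypothetical openssl equivalent of the server cert described in the log line above.
	    # minikube generates this certificate in Go; nothing below is copied from its implementation.
	    openssl req -new -newkey rsa:2048 -nodes \
	      -keyout server-key.pem -out server.csr \
	      -subj "/O=jenkins.newest-cni-705837"
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	      -out server.pem -days 365 \
	      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.50.29,DNS:localhost,DNS:minikube,DNS:newest-cni-705837")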
	I0401 19:52:49.112868   76835 provision.go:177] copyRemoteCerts
	I0401 19:52:49.112920   76835 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:52:49.112946   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:52:49.115467   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:49.115781   76835 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:52:42 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:52:49.115814   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:49.116003   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHPort
	I0401 19:52:49.116193   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:52:49.116335   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHUsername
	I0401 19:52:49.116490   76835 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837/id_rsa Username:docker}
	I0401 19:52:49.200225   76835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:52:49.230926   76835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 19:52:49.261028   76835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 19:52:49.289445   76835 provision.go:87] duration metric: took 360.257205ms to configureAuth
	I0401 19:52:49.289474   76835 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:52:49.289697   76835 config.go:182] Loaded profile config "newest-cni-705837": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0401 19:52:49.289771   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:52:49.292271   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:49.292636   76835 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:52:42 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:52:49.292653   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:49.292799   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHPort
	I0401 19:52:49.293006   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:52:49.293197   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:52:49.293402   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHUsername
	I0401 19:52:49.293598   76835 main.go:141] libmachine: Using SSH client type: native
	I0401 19:52:49.293832   76835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.29 22 <nil> <nil>}
	I0401 19:52:49.293862   76835 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:52:49.597729   76835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:52:49.597754   76835 machine.go:97] duration metric: took 1.046796173s to provisionDockerMachine
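	The %!s(MISSING) token in the command echoed at 19:52:49.293862 is a logging artifact: the runner prints its format string without the substituted argument. Judging from the SSH output that follows it, the command actually executed was probably equivalent to the following sketch (the quoting is an assumption):

	    # Reconstructed from the echoed output above; the exact quoting minikube uses may differ.
	    sudo mkdir -p /etc/sysconfig && printf "%s" "
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio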
	I0401 19:52:49.597768   76835 start.go:293] postStartSetup for "newest-cni-705837" (driver="kvm2")
	I0401 19:52:49.597783   76835 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:52:49.597807   76835 main.go:141] libmachine: (newest-cni-705837) Calling .DriverName
	I0401 19:52:49.598169   76835 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:52:49.598205   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:52:49.600929   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:49.601241   76835 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:52:42 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:52:49.601271   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:49.601409   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHPort
	I0401 19:52:49.601612   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:52:49.601810   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHUsername
	I0401 19:52:49.601931   76835 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837/id_rsa Username:docker}
	I0401 19:52:49.695002   76835 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:52:49.699733   76835 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:52:49.699756   76835 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:52:49.699821   76835 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:52:49.699890   76835 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:52:49.699976   76835 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:52:49.710061   76835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:52:49.739319   76835 start.go:296] duration metric: took 141.537653ms for postStartSetup
	I0401 19:52:49.739361   76835 fix.go:56] duration metric: took 19.863769917s for fixHost
	I0401 19:52:49.739384   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:52:49.742449   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:49.742831   76835 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:52:42 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:52:49.742863   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:49.742995   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHPort
	I0401 19:52:49.743243   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:52:49.743429   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:52:49.743582   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHUsername
	I0401 19:52:49.743751   76835 main.go:141] libmachine: Using SSH client type: native
	I0401 19:52:49.743984   76835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.29 22 <nil> <nil>}
	I0401 19:52:49.743998   76835 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 19:52:49.855002   76835 main.go:141] libmachine: SSH cmd err, output: <nil>: 1712001169.823380349
	
	I0401 19:52:49.855021   76835 fix.go:216] guest clock: 1712001169.823380349
	I0401 19:52:49.855030   76835 fix.go:229] Guest: 2024-04-01 19:52:49.823380349 +0000 UTC Remote: 2024-04-01 19:52:49.739365619 +0000 UTC m=+20.016948685 (delta=84.01473ms)
	I0401 19:52:49.855051   76835 fix.go:200] guest clock delta is within tolerance: 84.01473ms
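	The date +%!s(MISSING).%!N(MISSING) echoed at 19:52:49.743998 is the same logging artifact; given the 1712001169.823380349 reply (Unix epoch seconds plus nanoseconds) used for the guest-clock comparison above, the command actually sent was almost certainly:

	    # Prints the current time as <seconds-since-epoch>.<nanoseconds>, matching the output above.
	    date +%s.%N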
	I0401 19:52:49.855058   76835 start.go:83] releasing machines lock for "newest-cni-705837", held for 19.979488559s
	I0401 19:52:49.855081   76835 main.go:141] libmachine: (newest-cni-705837) Calling .DriverName
	I0401 19:52:49.855354   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetIP
	I0401 19:52:49.858098   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:49.858415   76835 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:52:42 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:52:49.858441   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:49.858565   76835 main.go:141] libmachine: (newest-cni-705837) Calling .DriverName
	I0401 19:52:49.859091   76835 main.go:141] libmachine: (newest-cni-705837) Calling .DriverName
	I0401 19:52:49.859264   76835 main.go:141] libmachine: (newest-cni-705837) Calling .DriverName
	I0401 19:52:49.859350   76835 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:52:49.859397   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:52:49.859500   76835 ssh_runner.go:195] Run: cat /version.json
	I0401 19:52:49.859528   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:52:49.862152   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:49.862258   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:49.862459   76835 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:52:42 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:52:49.862484   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:49.862633   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHPort
	I0401 19:52:49.862737   76835 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:52:42 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:52:49.862762   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:49.862813   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:52:49.862897   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHPort
	I0401 19:52:49.862983   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHUsername
	I0401 19:52:49.863071   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:52:49.863120   76835 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837/id_rsa Username:docker}
	I0401 19:52:49.863214   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHUsername
	I0401 19:52:49.863333   76835 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837/id_rsa Username:docker}
	I0401 19:52:49.967791   76835 ssh_runner.go:195] Run: systemctl --version
	I0401 19:52:49.974417   76835 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:52:50.123304   76835 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:52:50.130461   76835 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:52:50.130548   76835 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:52:50.149231   76835 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:52:50.149256   76835 start.go:494] detecting cgroup driver to use...
	I0401 19:52:50.149328   76835 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:52:50.167323   76835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:52:50.184999   76835 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:52:50.185064   76835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:52:50.200226   76835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:52:50.215691   76835 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:52:50.344433   76835 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:52:50.519646   76835 docker.go:233] disabling docker service ...
	I0401 19:52:50.519724   76835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:52:50.537082   76835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:52:50.552846   76835 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:52:50.683119   76835 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:52:50.827526   76835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:52:50.844685   76835 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:52:50.866926   76835 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:52:50.867009   76835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:52:50.879307   76835 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:52:50.879385   76835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:52:50.891359   76835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:52:50.903740   76835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:52:50.915757   76835 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:52:50.933226   76835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:52:50.945712   76835 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:52:50.965563   76835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
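The run of sed edits above pins the pause image, switches cri-o to the cgroupfs cgroup manager, moves conmon into the pod cgroup, and opens unprivileged ports via default_sysctls. Written out as a whole drop-in rather than in-place edits, the intended end state looks roughly like this (a sketch; the real 02-crio.conf carries additional settings that the log only patches in place):

sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
EOF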
	I0401 19:52:50.976953   76835 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:52:50.987081   76835 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:52:50.987133   76835 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:52:51.001514   76835 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:52:51.013220   76835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:52:51.157429   76835 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:52:51.310728   76835 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:52:51.310800   76835 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:52:51.316480   76835 start.go:562] Will wait 60s for crictl version
	I0401 19:52:51.316546   76835 ssh_runner.go:195] Run: which crictl
	I0401 19:52:51.320992   76835 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:52:51.359746   76835 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:52:51.359825   76835 ssh_runner.go:195] Run: crio --version
	I0401 19:52:51.394083   76835 ssh_runner.go:195] Run: crio --version
	I0401 19:52:51.427233   76835 out.go:177] * Preparing Kubernetes v1.30.0-rc.0 on CRI-O 1.29.1 ...
	I0401 19:52:51.428771   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetIP
	I0401 19:52:51.431594   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:51.431958   76835 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:52:42 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:52:51.431989   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:52:51.432141   76835 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0401 19:52:51.437565   76835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
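That one-liner is the recurring pattern for pinning a name in /etc/hosts idempotently: strip any existing entry for the name, append the fresh mapping, and copy the result back in one shot (the same shape appears again below for control-plane.minikube.internal). Broken out for readability, with the name and IP from the log line above and an illustrative temp-file path:

NAME=host.minikube.internal
IP=192.168.50.1
# keep everything except an existing entry for $NAME, then append the new mapping
{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.new
sudo cp /tmp/hosts.new /etc/hosts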
	I0401 19:52:51.454313   76835 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0401 19:52:51.455695   76835 kubeadm.go:877] updating cluster {Name:newest-cni-705837 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0-rc.0 ClusterName:newest-cni-705837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.29 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHos
tTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:52:51.455821   76835 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0401 19:52:51.455880   76835 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:52:51.505393   76835 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.0". assuming images are not preloaded.
	I0401 19:52:51.505491   76835 ssh_runner.go:195] Run: which lz4
	I0401 19:52:51.510264   76835 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 19:52:51.515011   76835 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:52:51.515039   76835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394409945 bytes)
	I0401 19:52:53.169385   76835 crio.go:462] duration metric: took 1.65914848s to copy over tarball
	I0401 19:52:53.169472   76835 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 19:52:55.620581   76835 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.451080333s)
	I0401 19:52:55.620604   76835 crio.go:469] duration metric: took 2.451188324s to extract the tarball
	I0401 19:52:55.620611   76835 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 19:52:55.661527   76835 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:52:55.715115   76835 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:52:55.715136   76835 cache_images.go:84] Images are preloaded, skipping loading
	I0401 19:52:55.715143   76835 kubeadm.go:928] updating node { 192.168.50.29 8443 v1.30.0-rc.0 crio true true} ...
	I0401 19:52:55.715241   76835 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-705837 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.29
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.0 ClusterName:newest-cni-705837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:52:55.715305   76835 ssh_runner.go:195] Run: crio config
	I0401 19:52:55.775255   76835 cni.go:84] Creating CNI manager for ""
	I0401 19:52:55.775273   76835 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:52:55.775282   76835 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0401 19:52:55.775304   76835 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.29 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-705837 NodeName:newest-cni-705837 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.29"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs
:map[] NodeIP:192.168.50.29 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:52:55.775438   76835 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.29
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-705837"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.29
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.29"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
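A config assembled like the one above can be sanity-checked without touching the running node by handing it to kubeadm in dry-run mode once it has been pushed to the guest (it lands at /var/tmp/minikube/kubeadm.yaml further down). A sketch, using the binary path from this run:

sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubeadm init \
  --config /var/tmp/minikube/kubeadm.yaml \
  --dry-run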
	
	I0401 19:52:55.775497   76835 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.0
	I0401 19:52:55.786763   76835 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:52:55.786833   76835 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:52:55.798829   76835 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (358 bytes)
	I0401 19:52:55.818097   76835 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0401 19:52:55.837417   76835 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0401 19:52:55.856923   76835 ssh_runner.go:195] Run: grep 192.168.50.29	control-plane.minikube.internal$ /etc/hosts
	I0401 19:52:55.861391   76835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.29	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:52:55.875655   76835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:52:56.034058   76835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:52:56.054349   76835 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837 for IP: 192.168.50.29
	I0401 19:52:56.054378   76835 certs.go:194] generating shared ca certs ...
	I0401 19:52:56.054394   76835 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:52:56.054576   76835 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:52:56.054627   76835 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:52:56.054640   76835 certs.go:256] generating profile certs ...
	I0401 19:52:56.054725   76835 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/client.key
	I0401 19:52:56.054792   76835 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/apiserver.key.552e185d
	I0401 19:52:56.054830   76835 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/proxy-client.key
	I0401 19:52:56.054964   76835 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:52:56.054996   76835 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:52:56.055004   76835 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:52:56.055034   76835 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:52:56.055069   76835 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:52:56.055093   76835 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:52:56.055169   76835 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:52:56.055842   76835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:52:56.107194   76835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:52:56.147954   76835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:52:56.201315   76835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:52:56.251849   76835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 19:52:56.282000   76835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:52:56.311060   76835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:52:56.340012   76835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:52:56.370374   76835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:52:56.398357   76835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:52:56.426762   76835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:52:56.456096   76835 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:52:56.478459   76835 ssh_runner.go:195] Run: openssl version
	I0401 19:52:56.485005   76835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:52:56.498936   76835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:52:56.504519   76835 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:52:56.504567   76835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:52:56.511223   76835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:52:56.525286   76835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:52:56.540551   76835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:52:56.546164   76835 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:52:56.546215   76835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:52:56.552784   76835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:52:56.567337   76835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:52:56.581579   76835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:52:56.587046   76835 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:52:56.587100   76835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:52:56.594422   76835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
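The hash-then-symlink sequence above follows the standard OpenSSL CA directory convention: the link name is the certificate's subject hash with a ".0" suffix, which is how TLS clients find a trusted CA under /etc/ssl/certs. Doing the same by hand for the minikube CA (a sketch; the computed hash should match the b5213941.0 link in the log line above):

hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"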
	I0401 19:52:56.610257   76835 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:52:56.616062   76835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:52:56.623564   76835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:52:56.630555   76835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:52:56.637595   76835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:52:56.644488   76835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:52:56.651397   76835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
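The string of -checkend 86400 runs is the pre-start expiry guard: openssl exits 0 when a certificate is still valid 86400 seconds (24 hours) from now and non-zero otherwise. The same check as a loop over a few of the certs named above (the warning message is illustrative):

for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
           /var/lib/minikube/certs/etcd/server.crt \
           /var/lib/minikube/certs/front-proxy-client.crt; do
  openssl x509 -noout -in "$crt" -checkend 86400 \
    || echo "WARNING: $crt expires within 24h"
done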
	I0401 19:52:56.658484   76835 kubeadm.go:391] StartCluster: {Name:newest-cni-705837 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0-rc.0 ClusterName:newest-cni-705837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.29 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTi
meout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:52:56.658571   76835 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:52:56.658634   76835 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:52:56.708092   76835 cri.go:89] found id: ""
	I0401 19:52:56.708154   76835 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:52:56.721216   76835 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:52:56.721247   76835 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:52:56.721255   76835 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:52:56.721308   76835 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:52:56.733777   76835 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:52:56.734412   76835 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-705837" does not appear in /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:52:56.734673   76835 kubeconfig.go:62] /home/jenkins/minikube-integration/18233-10493/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-705837" cluster setting kubeconfig missing "newest-cni-705837" context setting]
	I0401 19:52:56.735082   76835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:52:56.820426   76835 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:52:56.833420   76835 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.29
	I0401 19:52:56.833457   76835 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:52:56.833467   76835 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:52:56.833517   76835 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:52:56.881830   76835 cri.go:89] found id: ""
	I0401 19:52:56.881897   76835 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:52:56.907400   76835 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:52:56.919260   76835 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:52:56.919277   76835 kubeadm.go:156] found existing configuration files:
	
	I0401 19:52:56.919318   76835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:52:56.929621   76835 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:52:56.929703   76835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:52:56.940765   76835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:52:56.950793   76835 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:52:56.950843   76835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:52:56.960915   76835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:52:56.973063   76835 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:52:56.973125   76835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:52:56.983669   76835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:52:56.993772   76835 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:52:56.993849   76835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:52:57.005158   76835 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:52:57.016700   76835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:52:57.150441   76835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:52:58.107177   76835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:52:58.354933   76835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:52:58.436041   76835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:52:58.512429   76835 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:52:58.512505   76835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:52:59.013428   76835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:52:59.513150   76835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:52:59.532469   76835 api_server.go:72] duration metric: took 1.020037764s to wait for apiserver process to appear ...
	I0401 19:52:59.532494   76835 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:52:59.532516   76835 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0401 19:52:59.533056   76835 api_server.go:269] stopped: https://192.168.50.29:8443/healthz: Get "https://192.168.50.29:8443/healthz": dial tcp 192.168.50.29:8443: connect: connection refused
	I0401 19:53:00.032889   76835 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0401 19:53:02.801774   76835 api_server.go:279] https://192.168.50.29:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:53:02.801819   76835 api_server.go:103] status: https://192.168.50.29:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:53:02.801855   76835 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0401 19:53:02.825495   76835 api_server.go:279] https://192.168.50.29:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:53:02.825520   76835 api_server.go:103] status: https://192.168.50.29:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:53:03.032816   76835 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0401 19:53:03.037458   76835 api_server.go:279] https://192.168.50.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:53:03.037482   76835 api_server.go:103] status: https://192.168.50.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:53:03.533198   76835 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0401 19:53:03.539293   76835 api_server.go:279] https://192.168.50.29:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:53:03.539323   76835 api_server.go:103] status: https://192.168.50.29:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:53:04.032807   76835 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0401 19:53:04.043382   76835 api_server.go:279] https://192.168.50.29:8443/healthz returned 200:
	ok
	I0401 19:53:04.061776   76835 api_server.go:141] control plane version: v1.30.0-rc.0
	I0401 19:53:04.061813   76835 api_server.go:131] duration metric: took 4.529311584s to wait for apiserver health ...
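The 403 -> 500 -> 200 progression above is the expected startup sequence: /healthz only becomes readable anonymously once the RBAC bootstrap roles (including the default system:public-info-viewer binding) are in place, and it keeps answering 500 until the remaining post-start hooks succeed. A bare-bones polling loop against the same endpoint (a sketch; the interval and the -k/--insecure shortcut are choices made here, not minikube's):

until curl -ks https://192.168.50.29:8443/healthz | grep -qx ok; do
  sleep 0.5
done
echo "apiserver healthy"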
	I0401 19:53:04.061825   76835 cni.go:84] Creating CNI manager for ""
	I0401 19:53:04.061833   76835 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:53:04.063574   76835 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:53:04.065058   76835 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:53:04.092818   76835 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
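The 457-byte file pushed to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration for the 10.42.0.0/16 pod CIDR requested earlier; its exact contents are not captured in this log. A representative bridge + portmap conflist for that CIDR would look roughly like this (field values are illustrative, not the file minikube actually wrote):

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.42.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
EOF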
	I0401 19:53:04.173577   76835 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:53:04.190125   76835 system_pods.go:59] 8 kube-system pods found
	I0401 19:53:04.190189   76835 system_pods.go:61] "coredns-7db6d8ff4d-ddj9k" [a5a45f70-7458-4528-842b-982bd02dded8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:53:04.190201   76835 system_pods.go:61] "etcd-newest-cni-705837" [4b35d79f-ee12-4903-bb7d-6176544a9679] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0401 19:53:04.190213   76835 system_pods.go:61] "kube-apiserver-newest-cni-705837" [f7ab25ee-77ec-40b2-b52d-1c0b99834d9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0401 19:53:04.190226   76835 system_pods.go:61] "kube-controller-manager-newest-cni-705837" [ede88145-7a58-4b6c-a9bb-4be5f66769d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0401 19:53:04.190236   76835 system_pods.go:61] "kube-proxy-kffvq" [abca4dcf-9d14-4b07-9b1e-e7330b0ea0cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:53:04.190247   76835 system_pods.go:61] "kube-scheduler-newest-cni-705837" [a53134c2-b75f-4fd6-9063-53951fa95023] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0401 19:53:04.190256   76835 system_pods.go:61] "metrics-server-569cc877fc-x2ppm" [7cc65958-4227-4826-a054-db6800052038] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:53:04.190265   76835 system_pods.go:61] "storage-provisioner" [c02f82e6-2558-4721-917a-203540c4f520] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:53:04.190286   76835 system_pods.go:74] duration metric: took 16.677283ms to wait for pod list to return data ...
	I0401 19:53:04.190299   76835 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:53:04.199081   76835 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:53:04.199117   76835 node_conditions.go:123] node cpu capacity is 2
	I0401 19:53:04.199130   76835 node_conditions.go:105] duration metric: took 8.824284ms to run NodePressure ...
	I0401 19:53:04.199154   76835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:53:04.518983   76835 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 19:53:04.532007   76835 ops.go:34] apiserver oom_adj: -16
	I0401 19:53:04.532027   76835 kubeadm.go:591] duration metric: took 7.810766839s to restartPrimaryControlPlane
	I0401 19:53:04.532042   76835 kubeadm.go:393] duration metric: took 7.87358015s to StartCluster
	I0401 19:53:04.532061   76835 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:53:04.532149   76835 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:53:04.532974   76835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:53:04.533209   76835 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.29 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:53:04.534953   76835 out.go:177] * Verifying Kubernetes components...
	I0401 19:53:04.533287   76835 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 19:53:04.533476   76835 config.go:182] Loaded profile config "newest-cni-705837": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0401 19:53:04.536197   76835 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-705837"
	I0401 19:53:04.536209   76835 addons.go:69] Setting metrics-server=true in profile "newest-cni-705837"
	I0401 19:53:04.536211   76835 addons.go:69] Setting default-storageclass=true in profile "newest-cni-705837"
	I0401 19:53:04.536228   76835 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-705837"
	I0401 19:53:04.536235   76835 addons.go:234] Setting addon metrics-server=true in "newest-cni-705837"
	W0401 19:53:04.536238   76835 addons.go:243] addon storage-provisioner should already be in state true
	I0401 19:53:04.536240   76835 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-705837"
	W0401 19:53:04.536245   76835 addons.go:243] addon metrics-server should already be in state true
	I0401 19:53:04.536265   76835 host.go:66] Checking if "newest-cni-705837" exists ...
	I0401 19:53:04.536277   76835 host.go:66] Checking if "newest-cni-705837" exists ...
	I0401 19:53:04.536200   76835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:53:04.536213   76835 addons.go:69] Setting dashboard=true in profile "newest-cni-705837"
	I0401 19:53:04.536361   76835 addons.go:234] Setting addon dashboard=true in "newest-cni-705837"
	W0401 19:53:04.536378   76835 addons.go:243] addon dashboard should already be in state true
	I0401 19:53:04.536405   76835 host.go:66] Checking if "newest-cni-705837" exists ...
	I0401 19:53:04.536614   76835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:53:04.536642   76835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:53:04.536651   76835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:53:04.536673   76835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:53:04.536690   76835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:53:04.536768   76835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:53:04.536779   76835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:53:04.536804   76835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:53:04.552444   76835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37059
	I0401 19:53:04.552879   76835 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:53:04.553437   76835 main.go:141] libmachine: Using API Version  1
	I0401 19:53:04.553465   76835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:53:04.553868   76835 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:53:04.554485   76835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:53:04.554533   76835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:53:04.555894   76835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35969
	I0401 19:53:04.556147   76835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38887
	I0401 19:53:04.556539   76835 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:53:04.556552   76835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42811
	I0401 19:53:04.556591   76835 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:53:04.557077   76835 main.go:141] libmachine: Using API Version  1
	I0401 19:53:04.557095   76835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:53:04.557142   76835 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:53:04.557228   76835 main.go:141] libmachine: Using API Version  1
	I0401 19:53:04.557253   76835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:53:04.557561   76835 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:53:04.557624   76835 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:53:04.557782   76835 main.go:141] libmachine: Using API Version  1
	I0401 19:53:04.557807   76835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:53:04.558006   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetState
	I0401 19:53:04.558103   76835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:53:04.558143   76835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:53:04.558182   76835 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:53:04.558703   76835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:53:04.558738   76835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:53:04.560550   76835 addons.go:234] Setting addon default-storageclass=true in "newest-cni-705837"
	W0401 19:53:04.560572   76835 addons.go:243] addon default-storageclass should already be in state true
	I0401 19:53:04.560600   76835 host.go:66] Checking if "newest-cni-705837" exists ...
	I0401 19:53:04.560871   76835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:53:04.560909   76835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:53:04.575360   76835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42895
	I0401 19:53:04.575906   76835 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:53:04.576437   76835 main.go:141] libmachine: Using API Version  1
	I0401 19:53:04.576464   76835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:53:04.576858   76835 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:53:04.577036   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetState
	I0401 19:53:04.577691   76835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44109
	I0401 19:53:04.578391   76835 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:53:04.578965   76835 main.go:141] libmachine: Using API Version  1
	I0401 19:53:04.578981   76835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:53:04.579484   76835 main.go:141] libmachine: (newest-cni-705837) Calling .DriverName
	I0401 19:53:04.579702   76835 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:53:04.581690   76835 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 19:53:04.580053   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetState
	I0401 19:53:04.580828   76835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46367
	I0401 19:53:04.584591   76835 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 19:53:04.585947   76835 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 19:53:04.585969   76835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 19:53:04.585991   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:53:04.583940   76835 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:53:04.585081   76835 main.go:141] libmachine: (newest-cni-705837) Calling .DriverName
	I0401 19:53:04.587678   76835 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:53:04.586697   76835 main.go:141] libmachine: Using API Version  1
	I0401 19:53:04.589215   76835 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:53:04.587709   76835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:53:04.588670   76835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41401
	I0401 19:53:04.589242   76835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 19:53:04.589265   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:53:04.589382   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:53:04.589790   76835 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:52:42 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:53:04.589840   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:53:04.590011   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHPort
	I0401 19:53:04.590164   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:53:04.590304   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHUsername
	I0401 19:53:04.590455   76835 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:53:04.590486   76835 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837/id_rsa Username:docker}
	I0401 19:53:04.590645   76835 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:53:04.591153   76835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:53:04.591193   76835 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:53:04.591469   76835 main.go:141] libmachine: Using API Version  1
	I0401 19:53:04.591485   76835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:53:04.591986   76835 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:53:04.592181   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetState
	I0401 19:53:04.592616   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:53:04.593027   76835 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:52:42 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:53:04.593045   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:53:04.593253   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHPort
	I0401 19:53:04.593498   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:53:04.593660   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHUsername
	I0401 19:53:04.593856   76835 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837/id_rsa Username:docker}
	I0401 19:53:04.594114   76835 main.go:141] libmachine: (newest-cni-705837) Calling .DriverName
	I0401 19:53:04.595734   76835 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 19:53:04.597458   76835 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 19:53:04.597473   76835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 19:53:04.597488   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:53:04.600418   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:53:04.600809   76835 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:52:42 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:53:04.600832   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:53:04.600969   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHPort
	I0401 19:53:04.601137   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:53:04.601305   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHUsername
	I0401 19:53:04.601425   76835 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837/id_rsa Username:docker}
	I0401 19:53:04.607072   76835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33571
	I0401 19:53:04.607406   76835 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:53:04.607814   76835 main.go:141] libmachine: Using API Version  1
	I0401 19:53:04.607829   76835 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:53:04.608117   76835 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:53:04.608299   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetState
	I0401 19:53:04.609747   76835 main.go:141] libmachine: (newest-cni-705837) Calling .DriverName
	I0401 19:53:04.609973   76835 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 19:53:04.609982   76835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 19:53:04.609993   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHHostname
	I0401 19:53:04.612808   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:53:04.613215   76835 main.go:141] libmachine: (newest-cni-705837) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:87:01", ip: ""} in network mk-newest-cni-705837: {Iface:virbr2 ExpiryTime:2024-04-01 20:52:42 +0000 UTC Type:0 Mac:52:54:00:27:87:01 Iaid: IPaddr:192.168.50.29 Prefix:24 Hostname:newest-cni-705837 Clientid:01:52:54:00:27:87:01}
	I0401 19:53:04.613226   76835 main.go:141] libmachine: (newest-cni-705837) DBG | domain newest-cni-705837 has defined IP address 192.168.50.29 and MAC address 52:54:00:27:87:01 in network mk-newest-cni-705837
	I0401 19:53:04.613388   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHPort
	I0401 19:53:04.613540   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHKeyPath
	I0401 19:53:04.613717   76835 main.go:141] libmachine: (newest-cni-705837) Calling .GetSSHUsername
	I0401 19:53:04.613831   76835 sshutil.go:53] new ssh client: &{IP:192.168.50.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837/id_rsa Username:docker}
	I0401 19:53:04.788511   76835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:53:04.812897   76835 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:53:04.812978   76835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:53:04.844057   76835 api_server.go:72] duration metric: took 310.817098ms to wait for apiserver process to appear ...
	I0401 19:53:04.844086   76835 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:53:04.844107   76835 api_server.go:253] Checking apiserver healthz at https://192.168.50.29:8443/healthz ...
	I0401 19:53:04.848115   76835 api_server.go:279] https://192.168.50.29:8443/healthz returned 200:
	ok
	I0401 19:53:04.849211   76835 api_server.go:141] control plane version: v1.30.0-rc.0
	I0401 19:53:04.849237   76835 api_server.go:131] duration metric: took 5.143235ms to wait for apiserver health ...
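
The healthz wait logged above boils down to polling the apiserver's /healthz endpoint until it returns 200 with body "ok". A minimal Go sketch of that loop follows; the URL, timeout, poll interval, and the skipped TLS verification are illustrative assumptions, not minikube's actual client setup.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it answers 200 "ok" or the timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Verification is skipped only for this sketch; a real check would
            // trust the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
        if err := waitForHealthz("https://192.168.50.29:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("apiserver healthz: ok")
    }
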
	I0401 19:53:04.849247   76835 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:53:04.855841   76835 system_pods.go:59] 8 kube-system pods found
	I0401 19:53:04.855867   76835 system_pods.go:61] "coredns-7db6d8ff4d-ddj9k" [a5a45f70-7458-4528-842b-982bd02dded8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:53:04.855881   76835 system_pods.go:61] "etcd-newest-cni-705837" [4b35d79f-ee12-4903-bb7d-6176544a9679] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0401 19:53:04.855889   76835 system_pods.go:61] "kube-apiserver-newest-cni-705837" [f7ab25ee-77ec-40b2-b52d-1c0b99834d9a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0401 19:53:04.855896   76835 system_pods.go:61] "kube-controller-manager-newest-cni-705837" [ede88145-7a58-4b6c-a9bb-4be5f66769d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0401 19:53:04.855907   76835 system_pods.go:61] "kube-proxy-kffvq" [abca4dcf-9d14-4b07-9b1e-e7330b0ea0cb] Running
	I0401 19:53:04.855912   76835 system_pods.go:61] "kube-scheduler-newest-cni-705837" [a53134c2-b75f-4fd6-9063-53951fa95023] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0401 19:53:04.855924   76835 system_pods.go:61] "metrics-server-569cc877fc-x2ppm" [7cc65958-4227-4826-a054-db6800052038] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:53:04.855928   76835 system_pods.go:61] "storage-provisioner" [c02f82e6-2558-4721-917a-203540c4f520] Running
	I0401 19:53:04.855934   76835 system_pods.go:74] duration metric: took 6.681303ms to wait for pod list to return data ...
	I0401 19:53:04.855943   76835 default_sa.go:34] waiting for default service account to be created ...
	I0401 19:53:04.857916   76835 default_sa.go:45] found service account: "default"
	I0401 19:53:04.857932   76835 default_sa.go:55] duration metric: took 1.981556ms for default service account to be created ...
	I0401 19:53:04.857941   76835 kubeadm.go:576] duration metric: took 324.70776ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0401 19:53:04.857957   76835 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:53:04.860372   76835 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:53:04.860391   76835 node_conditions.go:123] node cpu capacity is 2
	I0401 19:53:04.860399   76835 node_conditions.go:105] duration metric: took 2.437773ms to run NodePressure ...
	I0401 19:53:04.860409   76835 start.go:240] waiting for startup goroutines ...
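
The "waiting for kube-system pods to appear" step above can be reproduced against the same cluster with client-go. A small sketch, assuming a local kubeconfig path purely for illustration:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: kubeconfig location; minikube writes its own per-profile config.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            // Mirrors the per-pod lines in the log: name, UID, and phase.
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
    }
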
	I0401 19:53:04.887986   76835 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 19:53:04.888006   76835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 19:53:04.911718   76835 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 19:53:04.911756   76835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 19:53:04.924398   76835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 19:53:04.936335   76835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:53:04.952794   76835 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 19:53:04.952814   76835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 19:53:05.033202   76835 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 19:53:05.033226   76835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 19:53:05.053118   76835 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 19:53:05.053146   76835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 19:53:05.072433   76835 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 19:53:05.072455   76835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 19:53:05.099056   76835 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 19:53:05.099075   76835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 19:53:05.131162   76835 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 19:53:05.131183   76835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 19:53:05.195622   76835 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 19:53:05.195648   76835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 19:53:05.217779   76835 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:53:05.217809   76835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 19:53:05.280196   76835 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 19:53:05.280222   76835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 19:53:05.282233   76835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:53:05.316780   76835 main.go:141] libmachine: Making call to close driver server
	I0401 19:53:05.316798   76835 main.go:141] libmachine: (newest-cni-705837) Calling .Close
	I0401 19:53:05.317088   76835 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:53:05.317144   76835 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:53:05.317158   76835 main.go:141] libmachine: Making call to close driver server
	I0401 19:53:05.317165   76835 main.go:141] libmachine: (newest-cni-705837) Calling .Close
	I0401 19:53:05.317161   76835 main.go:141] libmachine: (newest-cni-705837) DBG | Closing plugin on server side
	I0401 19:53:05.317405   76835 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:53:05.317427   76835 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:53:05.317434   76835 main.go:141] libmachine: (newest-cni-705837) DBG | Closing plugin on server side
	I0401 19:53:05.324224   76835 main.go:141] libmachine: Making call to close driver server
	I0401 19:53:05.324266   76835 main.go:141] libmachine: (newest-cni-705837) Calling .Close
	I0401 19:53:05.324550   76835 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:53:05.324569   76835 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:53:05.362350   76835 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 19:53:05.362372   76835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 19:53:05.393628   76835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
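
Each addon apply above runs kubectl on the guest over SSH, with the in-VM kubeconfig and versioned kubectl binary. A rough stand-alone equivalent using golang.org/x/crypto/ssh is sketched below; the host, key path, and the single storageclass.yaml target are copied from the log only as an example, and this is not minikube's ssh_runner implementation.

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/18233-10493/.minikube/machines/newest-cni-705837/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.50.29:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        // Same shape of command as the log's "kubectl apply -f ..." runs.
        out, err := session.CombinedOutput(
            "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
                "/var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml")
        fmt.Println(string(out))
        if err != nil {
            panic(err)
        }
    }
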
	I0401 19:53:06.463468   76835 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.527094442s)
	I0401 19:53:06.463531   76835 main.go:141] libmachine: Making call to close driver server
	I0401 19:53:06.463545   76835 main.go:141] libmachine: (newest-cni-705837) Calling .Close
	I0401 19:53:06.463872   76835 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:53:06.463889   76835 main.go:141] libmachine: (newest-cni-705837) DBG | Closing plugin on server side
	I0401 19:53:06.463892   76835 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:53:06.463914   76835 main.go:141] libmachine: Making call to close driver server
	I0401 19:53:06.463921   76835 main.go:141] libmachine: (newest-cni-705837) Calling .Close
	I0401 19:53:06.464112   76835 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:53:06.464126   76835 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:53:06.464136   76835 main.go:141] libmachine: (newest-cni-705837) DBG | Closing plugin on server side
	I0401 19:53:06.690945   76835 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.408669117s)
	I0401 19:53:06.691016   76835 main.go:141] libmachine: Making call to close driver server
	I0401 19:53:06.691035   76835 main.go:141] libmachine: (newest-cni-705837) Calling .Close
	I0401 19:53:06.691323   76835 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:53:06.691339   76835 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:53:06.691347   76835 main.go:141] libmachine: Making call to close driver server
	I0401 19:53:06.691356   76835 main.go:141] libmachine: (newest-cni-705837) Calling .Close
	I0401 19:53:06.691607   76835 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:53:06.691641   76835 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:53:06.691656   76835 addons.go:470] Verifying addon metrics-server=true in "newest-cni-705837"
	I0401 19:53:06.691609   76835 main.go:141] libmachine: (newest-cni-705837) DBG | Closing plugin on server side
	I0401 19:53:06.772092   76835 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.378407208s)
	I0401 19:53:06.772155   76835 main.go:141] libmachine: Making call to close driver server
	I0401 19:53:06.772165   76835 main.go:141] libmachine: (newest-cni-705837) Calling .Close
	I0401 19:53:06.772432   76835 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:53:06.772477   76835 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:53:06.772485   76835 main.go:141] libmachine: Making call to close driver server
	I0401 19:53:06.772508   76835 main.go:141] libmachine: (newest-cni-705837) Calling .Close
	I0401 19:53:06.772447   76835 main.go:141] libmachine: (newest-cni-705837) DBG | Closing plugin on server side
	I0401 19:53:06.772762   76835 main.go:141] libmachine: (newest-cni-705837) DBG | Closing plugin on server side
	I0401 19:53:06.772762   76835 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:53:06.772788   76835 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:53:06.774425   76835 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-705837 addons enable metrics-server
	
	I0401 19:53:06.775899   76835 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0401 19:53:06.777232   76835 addons.go:505] duration metric: took 2.243954634s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0401 19:53:06.777265   76835 start.go:245] waiting for cluster config update ...
	I0401 19:53:06.777281   76835 start.go:254] writing updated cluster config ...
	I0401 19:53:06.777497   76835 ssh_runner.go:195] Run: rm -f paused
	I0401 19:53:06.827201   76835 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.0 (minor skew: 1)
	I0401 19:53:06.829053   76835 out.go:177] * Done! kubectl is now configured to use "newest-cni-705837" cluster and "default" namespace by default
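
The closing version-skew note compares the kubectl client's minor version with the cluster's. A tiny sketch of that comparison; the rule shown (absolute difference of minor versions) is an assumption inferred from the message, not minikube's exact check.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minor extracts the minor component from a semver-ish version string.
    func minor(v string) int {
        parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
        m, _ := strconv.Atoi(parts[1])
        return m
    }

    func main() {
        kubectlVersion := "1.29.3"
        clusterVersion := "1.30.0-rc.0"
        skew := minor(clusterVersion) - minor(kubectlVersion)
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectlVersion, clusterVersion, skew)
    }
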
	
	
	==> CRI-O <==
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.082432476Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712001193082411775,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17aebb4e-2633-4ad9-8c2f-a5b50b3b3a34 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.083038661Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=606bed99-9087-491b-8d38-2b1cbba74001 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.083122546Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=606bed99-9087-491b-8d38-2b1cbba74001 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.083322751Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6dc0b65d110410481e29207b68fd411d2bd22658f09549893d66d3baea1811b3,PodSandboxId:cd511767fbb1ca284eb254d85f510c9b24e116139b793576308717b0db582200,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000202977043094,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ws9cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65660abf-9856-4df4-a07b-854cfd8e3fc6,},Annotations:map[string]string{io.kubernetes.container.hash: 19f45f1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169206bebd575d2b244c81fa4c7a04e2731c4120950cb9682db1ac25ecb157eb,PodSandboxId:4a2085bf15f4a78576d864f29f817138f2316d8fe75dbf4b18b7cb9dc613914f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712000202908783013,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8wrc,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3,},Annotations:map[string]string{io.kubernetes.container.hash: 4637946c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43690c88cc3da8e174d4465cd9001ba1e623e51cabaadd6e11de58cc57579c5c,PodSandboxId:56e5910ae49204b21ab793599a00718ba2ba59d72ac3342752d4187443784cf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712000202848201931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 8509e661-1b53-4018-b6b0-b6a5e242768d,},Annotations:map[string]string{io.kubernetes.container.hash: d51dde38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd8b1e6605269a97ce86dc1d3da6272b70b139eaffac1d261e5997ce76baa3d8,PodSandboxId:d50186ea9ce7d412a704fb1b828fe13ffe09f06e259559c513465737817fcefd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000202820765184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lwsms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f432161-c5e3-42fa-8857-
8e61959511b0,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfcc750,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84d63bce9959717a8c3ae86d594587c1e2f33bbff95b4d3e917aa026ef54971b,PodSandboxId:53df00131c7c4f8241bdfcc68ca2a5d6f5f054e4f54812dbc8ed0427699818ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:171200018163675350
8,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1a481b9900ab0be9d25c3fe5e5d2391,},Annotations:map[string]string{io.kubernetes.container.hash: 905d1f56,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae9f91816078e0969df5e5c46a0ddfcaa977ab2e24d9b86467993539907c542,PodSandboxId:503c77c8e91fcb1ba507756e6279d3c224f74cc45e1a5b4b6556691b49be7b19,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712000181632649887,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6daf38c602a88c1e0fb7f5442cfff11,},Annotations:map[string]string{io.kubernetes.container.hash: 99af7f03,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2008ec87c0a6ade1d9facfeca980ba3877ca7b17ab2487c9f2ac1a6ae724592f,PodSandboxId:54c30c5b21cb68ff7ecc0a15a3796630513b6b8babd4136595b48a15c8c0e46a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712000181575330253,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e266be05bf5ad064ffb4f6640d02a4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3afe593e5630743da819e3ffa3d83347db0ea26c75f9962f53299aeb7908971,PodSandboxId:ccbdd6f2242b983fb5c3d9b66cd328655b1df693bebd4d3d791de4ccee015de0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712000181516361047,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9d8710c71f52a6a07b9c8992a48c4ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703de106af68a16664634902a8a17d7cab4162929e3bbb227023c02a54aa2ccb,PodSandboxId:3a5d118f886790f3832a5749b7c9c52e926c08a30a4f2bdfe8e9d95fc72d5608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711999890335692084,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1a481b9900ab0be9d25c3fe5e5d2391,},Annotations:map[string]string{io.kubernetes.container.hash: 905d1f56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=606bed99-9087-491b-8d38-2b1cbba74001 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.126023466Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=491f9cd5-9440-427e-8acb-859acd97c96b name=/runtime.v1.RuntimeService/Version
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.126125332Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=491f9cd5-9440-427e-8acb-859acd97c96b name=/runtime.v1.RuntimeService/Version
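
The CRI-O debug lines here are request/response pairs on the CRI gRPC API (Version, ImageFsInfo, ListContainers). For reference, the Version call can be issued directly against the runtime socket with the k8s.io/cri-api client; the socket path below is an assumption for a typical CRI-O install.

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumption: default CRI-O socket location.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            panic(err)
        }
        // Matches the fields CRI-O reports above: runtime name, version, API version.
        fmt.Printf("runtime=%s version=%s api=%s\n",
            resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
    }
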
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.127037624Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a6aeb5df-69d8-480b-95a6-a8068294b984 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.127453769Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712001193127431405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a6aeb5df-69d8-480b-95a6-a8068294b984 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.128620347Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aada7edb-38c3-4e03-b61e-9ea97ac6d590 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.128667756Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aada7edb-38c3-4e03-b61e-9ea97ac6d590 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.128949792Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6dc0b65d110410481e29207b68fd411d2bd22658f09549893d66d3baea1811b3,PodSandboxId:cd511767fbb1ca284eb254d85f510c9b24e116139b793576308717b0db582200,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000202977043094,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ws9cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65660abf-9856-4df4-a07b-854cfd8e3fc6,},Annotations:map[string]string{io.kubernetes.container.hash: 19f45f1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169206bebd575d2b244c81fa4c7a04e2731c4120950cb9682db1ac25ecb157eb,PodSandboxId:4a2085bf15f4a78576d864f29f817138f2316d8fe75dbf4b18b7cb9dc613914f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712000202908783013,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8wrc,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3,},Annotations:map[string]string{io.kubernetes.container.hash: 4637946c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43690c88cc3da8e174d4465cd9001ba1e623e51cabaadd6e11de58cc57579c5c,PodSandboxId:56e5910ae49204b21ab793599a00718ba2ba59d72ac3342752d4187443784cf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712000202848201931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 8509e661-1b53-4018-b6b0-b6a5e242768d,},Annotations:map[string]string{io.kubernetes.container.hash: d51dde38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd8b1e6605269a97ce86dc1d3da6272b70b139eaffac1d261e5997ce76baa3d8,PodSandboxId:d50186ea9ce7d412a704fb1b828fe13ffe09f06e259559c513465737817fcefd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000202820765184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lwsms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f432161-c5e3-42fa-8857-
8e61959511b0,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfcc750,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84d63bce9959717a8c3ae86d594587c1e2f33bbff95b4d3e917aa026ef54971b,PodSandboxId:53df00131c7c4f8241bdfcc68ca2a5d6f5f054e4f54812dbc8ed0427699818ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:171200018163675350
8,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1a481b9900ab0be9d25c3fe5e5d2391,},Annotations:map[string]string{io.kubernetes.container.hash: 905d1f56,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae9f91816078e0969df5e5c46a0ddfcaa977ab2e24d9b86467993539907c542,PodSandboxId:503c77c8e91fcb1ba507756e6279d3c224f74cc45e1a5b4b6556691b49be7b19,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712000181632649887,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6daf38c602a88c1e0fb7f5442cfff11,},Annotations:map[string]string{io.kubernetes.container.hash: 99af7f03,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2008ec87c0a6ade1d9facfeca980ba3877ca7b17ab2487c9f2ac1a6ae724592f,PodSandboxId:54c30c5b21cb68ff7ecc0a15a3796630513b6b8babd4136595b48a15c8c0e46a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712000181575330253,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e266be05bf5ad064ffb4f6640d02a4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3afe593e5630743da819e3ffa3d83347db0ea26c75f9962f53299aeb7908971,PodSandboxId:ccbdd6f2242b983fb5c3d9b66cd328655b1df693bebd4d3d791de4ccee015de0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712000181516361047,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9d8710c71f52a6a07b9c8992a48c4ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703de106af68a16664634902a8a17d7cab4162929e3bbb227023c02a54aa2ccb,PodSandboxId:3a5d118f886790f3832a5749b7c9c52e926c08a30a4f2bdfe8e9d95fc72d5608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711999890335692084,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1a481b9900ab0be9d25c3fe5e5d2391,},Annotations:map[string]string{io.kubernetes.container.hash: 905d1f56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aada7edb-38c3-4e03-b61e-9ea97ac6d590 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.167448161Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c2ebe74f-0078-4cbe-8abc-fd89e26407dd name=/runtime.v1.RuntimeService/Version
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.167548920Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c2ebe74f-0078-4cbe-8abc-fd89e26407dd name=/runtime.v1.RuntimeService/Version
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.169710758Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7cfd719c-c1b0-4464-be38-4941c5cd4ed0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.170209756Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712001193170185734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7cfd719c-c1b0-4464-be38-4941c5cd4ed0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.170952941Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5aceb9ee-ef78-4eb2-bfb6-3b1ab1905fd6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.171030250Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5aceb9ee-ef78-4eb2-bfb6-3b1ab1905fd6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.171224522Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6dc0b65d110410481e29207b68fd411d2bd22658f09549893d66d3baea1811b3,PodSandboxId:cd511767fbb1ca284eb254d85f510c9b24e116139b793576308717b0db582200,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000202977043094,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ws9cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65660abf-9856-4df4-a07b-854cfd8e3fc6,},Annotations:map[string]string{io.kubernetes.container.hash: 19f45f1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169206bebd575d2b244c81fa4c7a04e2731c4120950cb9682db1ac25ecb157eb,PodSandboxId:4a2085bf15f4a78576d864f29f817138f2316d8fe75dbf4b18b7cb9dc613914f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712000202908783013,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8wrc,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3,},Annotations:map[string]string{io.kubernetes.container.hash: 4637946c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43690c88cc3da8e174d4465cd9001ba1e623e51cabaadd6e11de58cc57579c5c,PodSandboxId:56e5910ae49204b21ab793599a00718ba2ba59d72ac3342752d4187443784cf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712000202848201931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 8509e661-1b53-4018-b6b0-b6a5e242768d,},Annotations:map[string]string{io.kubernetes.container.hash: d51dde38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd8b1e6605269a97ce86dc1d3da6272b70b139eaffac1d261e5997ce76baa3d8,PodSandboxId:d50186ea9ce7d412a704fb1b828fe13ffe09f06e259559c513465737817fcefd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000202820765184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lwsms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f432161-c5e3-42fa-8857-
8e61959511b0,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfcc750,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84d63bce9959717a8c3ae86d594587c1e2f33bbff95b4d3e917aa026ef54971b,PodSandboxId:53df00131c7c4f8241bdfcc68ca2a5d6f5f054e4f54812dbc8ed0427699818ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:171200018163675350
8,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1a481b9900ab0be9d25c3fe5e5d2391,},Annotations:map[string]string{io.kubernetes.container.hash: 905d1f56,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae9f91816078e0969df5e5c46a0ddfcaa977ab2e24d9b86467993539907c542,PodSandboxId:503c77c8e91fcb1ba507756e6279d3c224f74cc45e1a5b4b6556691b49be7b19,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712000181632649887,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6daf38c602a88c1e0fb7f5442cfff11,},Annotations:map[string]string{io.kubernetes.container.hash: 99af7f03,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2008ec87c0a6ade1d9facfeca980ba3877ca7b17ab2487c9f2ac1a6ae724592f,PodSandboxId:54c30c5b21cb68ff7ecc0a15a3796630513b6b8babd4136595b48a15c8c0e46a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712000181575330253,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e266be05bf5ad064ffb4f6640d02a4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3afe593e5630743da819e3ffa3d83347db0ea26c75f9962f53299aeb7908971,PodSandboxId:ccbdd6f2242b983fb5c3d9b66cd328655b1df693bebd4d3d791de4ccee015de0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712000181516361047,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9d8710c71f52a6a07b9c8992a48c4ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703de106af68a16664634902a8a17d7cab4162929e3bbb227023c02a54aa2ccb,PodSandboxId:3a5d118f886790f3832a5749b7c9c52e926c08a30a4f2bdfe8e9d95fc72d5608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711999890335692084,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1a481b9900ab0be9d25c3fe5e5d2391,},Annotations:map[string]string{io.kubernetes.container.hash: 905d1f56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5aceb9ee-ef78-4eb2-bfb6-3b1ab1905fd6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.212796528Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a9d7bcf7-16ce-4dd8-9883-005325529805 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.212971488Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9d7bcf7-16ce-4dd8-9883-005325529805 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.214954862Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=60524048-7ced-40f5-8809-a4a345056fe5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.215402586Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712001193215377390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60524048-7ced-40f5-8809-a4a345056fe5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.216466654Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9cb93603-4f45-40ed-b0f0-6e418034f095 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.216547560Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9cb93603-4f45-40ed-b0f0-6e418034f095 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:53:13 default-k8s-diff-port-734648 crio[701]: time="2024-04-01 19:53:13.216760688Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6dc0b65d110410481e29207b68fd411d2bd22658f09549893d66d3baea1811b3,PodSandboxId:cd511767fbb1ca284eb254d85f510c9b24e116139b793576308717b0db582200,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000202977043094,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-ws9cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65660abf-9856-4df4-a07b-854cfd8e3fc6,},Annotations:map[string]string{io.kubernetes.container.hash: 19f45f1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169206bebd575d2b244c81fa4c7a04e2731c4120950cb9682db1ac25ecb157eb,PodSandboxId:4a2085bf15f4a78576d864f29f817138f2316d8fe75dbf4b18b7cb9dc613914f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1712000202908783013,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p8wrc,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3,},Annotations:map[string]string{io.kubernetes.container.hash: 4637946c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43690c88cc3da8e174d4465cd9001ba1e623e51cabaadd6e11de58cc57579c5c,PodSandboxId:56e5910ae49204b21ab793599a00718ba2ba59d72ac3342752d4187443784cf5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1712000202848201931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 8509e661-1b53-4018-b6b0-b6a5e242768d,},Annotations:map[string]string{io.kubernetes.container.hash: d51dde38,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd8b1e6605269a97ce86dc1d3da6272b70b139eaffac1d261e5997ce76baa3d8,PodSandboxId:d50186ea9ce7d412a704fb1b828fe13ffe09f06e259559c513465737817fcefd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000202820765184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-lwsms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f432161-c5e3-42fa-8857-
8e61959511b0,},Annotations:map[string]string{io.kubernetes.container.hash: 4bfcc750,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84d63bce9959717a8c3ae86d594587c1e2f33bbff95b4d3e917aa026ef54971b,PodSandboxId:53df00131c7c4f8241bdfcc68ca2a5d6f5f054e4f54812dbc8ed0427699818ad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:171200018163675350
8,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1a481b9900ab0be9d25c3fe5e5d2391,},Annotations:map[string]string{io.kubernetes.container.hash: 905d1f56,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae9f91816078e0969df5e5c46a0ddfcaa977ab2e24d9b86467993539907c542,PodSandboxId:503c77c8e91fcb1ba507756e6279d3c224f74cc45e1a5b4b6556691b49be7b19,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712000181632649887,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6daf38c602a88c1e0fb7f5442cfff11,},Annotations:map[string]string{io.kubernetes.container.hash: 99af7f03,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2008ec87c0a6ade1d9facfeca980ba3877ca7b17ab2487c9f2ac1a6ae724592f,PodSandboxId:54c30c5b21cb68ff7ecc0a15a3796630513b6b8babd4136595b48a15c8c0e46a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1712000181575330253,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e266be05bf5ad064ffb4f6640d02a4,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3afe593e5630743da819e3ffa3d83347db0ea26c75f9962f53299aeb7908971,PodSandboxId:ccbdd6f2242b983fb5c3d9b66cd328655b1df693bebd4d3d791de4ccee015de0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1712000181516361047,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9d8710c71f52a6a07b9c8992a48c4ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:703de106af68a16664634902a8a17d7cab4162929e3bbb227023c02a54aa2ccb,PodSandboxId:3a5d118f886790f3832a5749b7c9c52e926c08a30a4f2bdfe8e9d95fc72d5608,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1711999890335692084,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-734648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1a481b9900ab0be9d25c3fe5e5d2391,},Annotations:map[string]string{io.kubernetes.container.hash: 905d1f56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9cb93603-4f45-40ed-b0f0-6e418034f095 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6dc0b65d11041       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   cd511767fbb1c       coredns-76f75df574-ws9cc
	169206bebd575       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   16 minutes ago      Running             kube-proxy                0                   4a2085bf15f4a       kube-proxy-p8wrc
	43690c88cc3da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   56e5910ae4920       storage-provisioner
	dd8b1e6605269       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   d50186ea9ce7d       coredns-76f75df574-lwsms
	84d63bce99597       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   16 minutes ago      Running             kube-apiserver            2                   53df00131c7c4       kube-apiserver-default-k8s-diff-port-734648
	1ae9f91816078       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   16 minutes ago      Running             etcd                      2                   503c77c8e91fc       etcd-default-k8s-diff-port-734648
	2008ec87c0a6a       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   16 minutes ago      Running             kube-scheduler            2                   54c30c5b21cb6       kube-scheduler-default-k8s-diff-port-734648
	a3afe593e5630       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   16 minutes ago      Running             kube-controller-manager   2                   ccbdd6f2242b9       kube-controller-manager-default-k8s-diff-port-734648
	703de106af68a       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   21 minutes ago      Exited              kube-apiserver            1                   3a5d118f88679       kube-apiserver-default-k8s-diff-port-734648
	
	
	==> coredns [6dc0b65d110410481e29207b68fd411d2bd22658f09549893d66d3baea1811b3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [dd8b1e6605269a97ce86dc1d3da6272b70b139eaffac1d261e5997ce76baa3d8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-734648
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-734648
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=default-k8s-diff-port-734648
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_01T19_36_28_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 19:36:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-734648
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 19:53:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 19:52:05 +0000   Mon, 01 Apr 2024 19:36:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 19:52:05 +0000   Mon, 01 Apr 2024 19:36:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 19:52:05 +0000   Mon, 01 Apr 2024 19:36:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 19:52:05 +0000   Mon, 01 Apr 2024 19:36:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.145
	  Hostname:    default-k8s-diff-port-734648
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 013fe2ac987f4bf29814991554d9e27d
	  System UUID:                013fe2ac-987f-4bf2-9814-991554d9e27d
	  Boot ID:                    da921e59-4a04-4b3f-883d-bbec1f31759d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-lwsms                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-76f75df574-ws9cc                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-734648                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-734648             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-734648    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-p8wrc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-734648             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-57f55c9bc5-fj5x5                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-734648 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-734648 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node default-k8s-diff-port-734648 status is now: NodeHasSufficientPID
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node default-k8s-diff-port-734648 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node default-k8s-diff-port-734648 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node default-k8s-diff-port-734648 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             16m                kubelet          Node default-k8s-diff-port-734648 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                16m                kubelet          Node default-k8s-diff-port-734648 status is now: NodeReady
	  Normal  RegisteredNode           16m                node-controller  Node default-k8s-diff-port-734648 event: Registered Node default-k8s-diff-port-734648 in Controller
	
	
	==> dmesg <==
	[  +0.052680] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043425] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.742503] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.480087] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.688648] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.417269] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.060406] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063164] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.182298] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.158244] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.334524] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +5.168883] systemd-fstab-generator[785]: Ignoring "noauto" option for root device
	[  +0.081733] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.498645] systemd-fstab-generator[918]: Ignoring "noauto" option for root device
	[  +5.632921] kauditd_printk_skb: 97 callbacks suppressed
	[  +8.783745] kauditd_printk_skb: 74 callbacks suppressed
	[Apr 1 19:36] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.833848] systemd-fstab-generator[3427]: Ignoring "noauto" option for root device
	[  +7.328393] systemd-fstab-generator[3757]: Ignoring "noauto" option for root device
	[  +0.120529] kauditd_printk_skb: 54 callbacks suppressed
	[ +13.333887] systemd-fstab-generator[3950]: Ignoring "noauto" option for root device
	[  +0.085674] kauditd_printk_skb: 12 callbacks suppressed
	[Apr 1 19:37] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [1ae9f91816078e0969df5e5c46a0ddfcaa977ab2e24d9b86467993539907c542] <==
	{"level":"info","ts":"2024-04-01T19:36:22.443306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"209571b9f0ad8882 received MsgVoteResp from 209571b9f0ad8882 at term 2"}
	{"level":"info","ts":"2024-04-01T19:36:22.443436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"209571b9f0ad8882 became leader at term 2"}
	{"level":"info","ts":"2024-04-01T19:36:22.443465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 209571b9f0ad8882 elected leader 209571b9f0ad8882 at term 2"}
	{"level":"info","ts":"2024-04-01T19:36:22.448142Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:36:22.452298Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"209571b9f0ad8882","local-member-attributes":"{Name:default-k8s-diff-port-734648 ClientURLs:[https://192.168.61.145:2379]}","request-path":"/0/members/209571b9f0ad8882/attributes","cluster-id":"2cb522128dbb8e4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-01T19:36:22.452632Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T19:36:22.452923Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T19:36:22.453574Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-01T19:36:22.456042Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-01T19:36:22.45482Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-01T19:36:22.465433Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.145:2379"}
	{"level":"info","ts":"2024-04-01T19:36:22.488473Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2cb522128dbb8e4","local-member-id":"209571b9f0ad8882","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:36:22.516966Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:36:22.517027Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:46:22.542747Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":707}
	{"level":"info","ts":"2024-04-01T19:46:22.553538Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":707,"took":"10.035792ms","hash":2155796245,"current-db-size-bytes":2220032,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2220032,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-04-01T19:46:22.553636Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2155796245,"revision":707,"compact-revision":-1}
	{"level":"info","ts":"2024-04-01T19:51:22.556135Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":950}
	{"level":"info","ts":"2024-04-01T19:51:22.562127Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":950,"took":"5.534298ms","hash":2725228313,"current-db-size-bytes":2220032,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1560576,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-01T19:51:22.562198Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2725228313,"revision":950,"compact-revision":707}
	{"level":"info","ts":"2024-04-01T19:52:56.932004Z","caller":"traceutil/trace.go:171","msg":"trace[1790144583] linearizableReadLoop","detail":"{readStateIndex:1487; appliedIndex:1486; }","duration":"224.267397ms","start":"2024-04-01T19:52:56.707566Z","end":"2024-04-01T19:52:56.931834Z","steps":["trace[1790144583] 'read index received'  (duration: 205.828833ms)","trace[1790144583] 'applied index is now lower than readState.Index'  (duration: 18.438096ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-01T19:52:56.93282Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.115012ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-01T19:52:56.932958Z","caller":"traceutil/trace.go:171","msg":"trace[71819958] transaction","detail":"{read_only:false; response_revision:1272; number_of_response:1; }","duration":"521.109941ms","start":"2024-04-01T19:52:56.411827Z","end":"2024-04-01T19:52:56.932937Z","steps":["trace[71819958] 'process raft request'  (duration: 501.689617ms)","trace[71819958] 'compare'  (duration: 18.045644ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-01T19:52:56.933125Z","caller":"traceutil/trace.go:171","msg":"trace[1961640237] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1272; }","duration":"225.582373ms","start":"2024-04-01T19:52:56.707529Z","end":"2024-04-01T19:52:56.933112Z","steps":["trace[1961640237] 'agreement among raft nodes before linearized reading'  (duration: 225.073937ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-01T19:52:56.934169Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-01T19:52:56.411813Z","time spent":"521.409612ms","remote":"127.0.0.1:57788","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.61.145\" mod_revision:1263 > success:<request_put:<key:\"/registry/masterleases/192.168.61.145\" value_size:67 lease:613209296345783496 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.145\" > >"}
	
	
	==> kernel <==
	 19:53:13 up 22 min,  0 users,  load average: 0.14, 0.19, 0.17
	Linux default-k8s-diff-port-734648 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [703de106af68a16664634902a8a17d7cab4162929e3bbb227023c02a54aa2ccb] <==
	W0401 19:36:16.963324       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:16.993035       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.000032       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.030114       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.055217       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.069620       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.116766       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.143706       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.296822       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.357634       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.361460       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.411104       1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.422261       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.437300       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.444120       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.482460       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.504736       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.538005       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.549107       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.607183       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.671099       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.833721       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.882769       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:17.973813       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0401 19:36:18.272025       1 logging.go:59] [core] [Channel #4 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [84d63bce9959717a8c3ae86d594587c1e2f33bbff95b4d3e917aa026ef54971b] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 19:49:25.602700       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:49:25.603088       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:49:25.603191       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0401 19:49:25.604670       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:51:24.608434       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:51:24.608591       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0401 19:51:25.608835       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:51:25.608950       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0401 19:51:25.608959       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:51:25.609114       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:51:25.609227       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 19:51:25.610125       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:52:25.610007       1 handler_proxy.go:93] no RequestInfo found in the context
	W0401 19:52:25.610268       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:52:25.610279       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0401 19:52:25.610373       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0401 19:52:25.610443       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 19:52:25.612302       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 19:52:56.934737       1 trace.go:236] Trace[1046799793]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.61.145,type:*v1.Endpoints,resource:apiServerIPInfo (01-Apr-2024 19:52:56.384) (total time: 549ms):
	Trace[1046799793]: ---"Txn call completed" 523ms (19:52:56.934)
	Trace[1046799793]: [549.854525ms] [549.854525ms] END
	
	
	==> kube-controller-manager [a3afe593e5630743da819e3ffa3d83347db0ea26c75f9962f53299aeb7908971] <==
	I0401 19:47:50.230377       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="122.613µs"
	E0401 19:48:11.037255       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:48:11.546652       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:48:41.043383       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:48:41.556071       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:49:11.048773       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:49:11.564827       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:49:41.055305       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:49:41.573729       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:50:11.061596       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:50:11.583426       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:50:41.067690       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:50:41.598012       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:51:11.072757       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:51:11.607170       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:51:41.080523       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:51:41.617480       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:52:11.086833       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:52:11.626674       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:52:41.092669       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:52:41.636667       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0401 19:52:48.228006       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="313.172µs"
	I0401 19:52:59.228030       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="228.901µs"
	E0401 19:53:11.099295       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:53:11.645786       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [169206bebd575d2b244c81fa4c7a04e2731c4120950cb9682db1ac25ecb157eb] <==
	I0401 19:36:43.541719       1 server_others.go:72] "Using iptables proxy"
	I0401 19:36:43.590267       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.61.145"]
	I0401 19:36:43.655565       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0401 19:36:43.655584       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 19:36:43.655599       1 server_others.go:168] "Using iptables Proxier"
	I0401 19:36:43.658649       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 19:36:43.658927       1 server.go:865] "Version info" version="v1.29.3"
	I0401 19:36:43.659006       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 19:36:43.660257       1 config.go:188] "Starting service config controller"
	I0401 19:36:43.660326       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0401 19:36:43.660374       1 config.go:97] "Starting endpoint slice config controller"
	I0401 19:36:43.660391       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0401 19:36:43.661110       1 config.go:315] "Starting node config controller"
	I0401 19:36:43.662230       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0401 19:36:43.761472       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0401 19:36:43.761613       1 shared_informer.go:318] Caches are synced for service config
	I0401 19:36:43.762989       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [2008ec87c0a6ade1d9facfeca980ba3877ca7b17ab2487c9f2ac1a6ae724592f] <==
	W0401 19:36:24.613402       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 19:36:24.613414       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0401 19:36:25.466071       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 19:36:25.466182       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0401 19:36:25.495536       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 19:36:25.496983       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0401 19:36:25.502715       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0401 19:36:25.504178       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0401 19:36:25.580747       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0401 19:36:25.580777       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0401 19:36:25.610091       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 19:36:25.610143       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0401 19:36:25.656230       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 19:36:25.656298       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 19:36:25.723531       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 19:36:25.724523       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0401 19:36:25.785742       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 19:36:25.785837       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0401 19:36:25.810558       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 19:36:25.810618       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0401 19:36:25.856344       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 19:36:25.856369       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0401 19:36:25.880810       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 19:36:25.881105       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0401 19:36:27.599951       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 19:50:47 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:50:47.207094    3764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fj5x5" podUID="e25fa51c-d80e-4ddc-898f-3b9903746537"
	Apr 01 19:51:02 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:51:02.208341    3764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fj5x5" podUID="e25fa51c-d80e-4ddc-898f-3b9903746537"
	Apr 01 19:51:13 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:51:13.207487    3764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fj5x5" podUID="e25fa51c-d80e-4ddc-898f-3b9903746537"
	Apr 01 19:51:28 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:51:28.212697    3764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fj5x5" podUID="e25fa51c-d80e-4ddc-898f-3b9903746537"
	Apr 01 19:51:28 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:51:28.319738    3764 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 19:51:28 default-k8s-diff-port-734648 kubelet[3764]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 19:51:28 default-k8s-diff-port-734648 kubelet[3764]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 19:51:28 default-k8s-diff-port-734648 kubelet[3764]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 19:51:28 default-k8s-diff-port-734648 kubelet[3764]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 19:51:39 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:51:39.207558    3764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fj5x5" podUID="e25fa51c-d80e-4ddc-898f-3b9903746537"
	Apr 01 19:51:51 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:51:51.206952    3764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fj5x5" podUID="e25fa51c-d80e-4ddc-898f-3b9903746537"
	Apr 01 19:52:05 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:52:05.208335    3764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fj5x5" podUID="e25fa51c-d80e-4ddc-898f-3b9903746537"
	Apr 01 19:52:19 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:52:19.208595    3764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fj5x5" podUID="e25fa51c-d80e-4ddc-898f-3b9903746537"
	Apr 01 19:52:28 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:52:28.319918    3764 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 01 19:52:28 default-k8s-diff-port-734648 kubelet[3764]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 19:52:28 default-k8s-diff-port-734648 kubelet[3764]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 19:52:28 default-k8s-diff-port-734648 kubelet[3764]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 19:52:28 default-k8s-diff-port-734648 kubelet[3764]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 19:52:34 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:52:34.220338    3764 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 01 19:52:34 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:52:34.220406    3764 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 01 19:52:34 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:52:34.220601    3764 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4t5ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe
:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessa
gePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-fj5x5_kube-system(e25fa51c-d80e-4ddc-898f-3b9903746537): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Apr 01 19:52:34 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:52:34.220641    3764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-fj5x5" podUID="e25fa51c-d80e-4ddc-898f-3b9903746537"
	Apr 01 19:52:48 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:52:48.206975    3764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fj5x5" podUID="e25fa51c-d80e-4ddc-898f-3b9903746537"
	Apr 01 19:52:59 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:52:59.209133    3764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fj5x5" podUID="e25fa51c-d80e-4ddc-898f-3b9903746537"
	Apr 01 19:53:10 default-k8s-diff-port-734648 kubelet[3764]: E0401 19:53:10.206729    3764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fj5x5" podUID="e25fa51c-d80e-4ddc-898f-3b9903746537"
	
	
	==> storage-provisioner [43690c88cc3da8e174d4465cd9001ba1e623e51cabaadd6e11de58cc57579c5c] <==
	I0401 19:36:43.417923       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0401 19:36:43.527519       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0401 19:36:43.527772       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0401 19:36:43.557206       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0401 19:36:43.557371       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-734648_e60a0bed-0065-437b-ba83-53f20be1a273!
	I0401 19:36:43.558461       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"119f664a-8113-4f88-ae73-d7c294587be6", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-734648_e60a0bed-0065-437b-ba83-53f20be1a273 became leader
	I0401 19:36:43.658487       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-734648_e60a0bed-0065-437b-ba83-53f20be1a273!
	

                                                
                                                
-- /stdout --
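The metrics-server pull failures in the kubelet log above trace back to how the addon was enabled for this profile: the Audit table later in this report records "addons enable metrics-server -p default-k8s-diff-port-734648 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain", so the kubelet is deliberately pointed at an unresolvable registry. A minimal standalone sketch (illustrative only, not part of the test suite; the deployment name metrics-server in kube-system is inferred from the pod name in the logs) for reading back the rewritten image reference:

// Illustrative sketch, not minikube test code: print the image reference the
// metrics-server addon deployment was rewritten to use for this profile.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl",
		"--context", "default-k8s-diff-port-734648",
		"-n", "kube-system",
		"get", "deploy", "metrics-server",
		"-o=jsonpath={.spec.template.spec.containers[0].image}",
	).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	// With the overrides recorded in the Audit table this prints
	// fake.domain/registry.k8s.io/echoserver:1.4, matching the
	// ImagePullBackOff entries in the kubelet log above.
	fmt.Println(string(out))
}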
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-734648 -n default-k8s-diff-port-734648
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-734648 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-fj5x5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-734648 describe pod metrics-server-57f55c9bc5-fj5x5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-734648 describe pod metrics-server-57f55c9bc5-fj5x5: exit status 1 (66.107412ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-fj5x5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-734648 describe pod metrics-server-57f55c9bc5-fj5x5: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (446.04s)
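The post-mortem above (helpers_test.go:261 and :277) reduces to a field-selector query against the profile's kubeconfig context followed by a describe of each non-running pod. A hypothetical reproduction outside the test harness, assuming the same context name and not minikube's actual helper code, could look like:

// Hypothetical sketch mirroring the post-mortem steps above: list pods whose
// phase is not Running, then describe each one. In the failed run the pod
// metrics-server-57f55c9bc5-fj5x5 had already gone away, so describe returned NotFound.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "default-k8s-diff-port-734648" // profile/context under test
	out, err := exec.Command("kubectl", "--context", ctx,
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running",
	).Output()
	if err != nil {
		fmt.Println("listing non-running pods failed:", err)
		return
	}
	for _, pod := range strings.Fields(string(out)) {
		// Like the original helper, describe is run without a namespace flag.
		desc, err := exec.Command("kubectl", "--context", ctx, "describe", "pod", pod).CombinedOutput()
		fmt.Printf("describe %s (err=%v):\n%s\n", pod, err, desc)
	}
}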

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (212.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-472858 -n no-preload-472858
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-01 19:51:26.943296078 +0000 UTC m=+6316.398847260
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-472858 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-472858 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.703µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-472858 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-472858 -n no-preload-472858
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-472858 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-472858 logs -n 25: (1.418152864s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p bridge-408543 sudo                                  | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | systemctl status crio --all                            |                              |         |                |                     |                     |
	|         | --full --no-pager                                      |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo                                  | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo find                             | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo crio                             | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p bridge-408543                                       | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	| delete  | -p                                                     | disable-driver-mounts-580301 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | disable-driver-mounts-580301                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:24 UTC |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-472858             | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-472858                                   | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-882095            | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:24 UTC | 01 Apr 24 19:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-882095                                  | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:24 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-734648  | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC | 01 Apr 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC |                     |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-472858                  | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-472858                                   | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC | 01 Apr 24 19:38 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-163608        | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-882095                 | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-882095                                  | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC | 01 Apr 24 19:36 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-734648       | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:36 UTC |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-163608                              | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-163608             | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-163608                              | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| delete  | -p old-k8s-version-163608                              | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:51 UTC | 01 Apr 24 19:51 UTC |
	| start   | -p newest-cni-705837 --memory=2200 --alsologtostderr   | newest-cni-705837            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:51 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 19:51:22
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 19:51:22.604401   75954 out.go:291] Setting OutFile to fd 1 ...
	I0401 19:51:22.604724   75954 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:51:22.604737   75954 out.go:304] Setting ErrFile to fd 2...
	I0401 19:51:22.604743   75954 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:51:22.605045   75954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 19:51:22.605907   75954 out.go:298] Setting JSON to false
	I0401 19:51:22.607176   75954 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":9235,"bootTime":1711991848,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:51:22.607244   75954 start.go:139] virtualization: kvm guest
	I0401 19:51:22.609753   75954 out.go:177] * [newest-cni-705837] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 19:51:22.611342   75954 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 19:51:22.612692   75954 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:51:22.611406   75954 notify.go:220] Checking for updates...
	I0401 19:51:22.615020   75954 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:51:22.616229   75954 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:51:22.617343   75954 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 19:51:22.618536   75954 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 19:51:22.620250   75954 config.go:182] Loaded profile config "default-k8s-diff-port-734648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:51:22.620371   75954 config.go:182] Loaded profile config "embed-certs-882095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:51:22.620485   75954 config.go:182] Loaded profile config "no-preload-472858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0401 19:51:22.620561   75954 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 19:51:22.659043   75954 out.go:177] * Using the kvm2 driver based on user configuration
	I0401 19:51:22.660461   75954 start.go:297] selected driver: kvm2
	I0401 19:51:22.660480   75954 start.go:901] validating driver "kvm2" against <nil>
	I0401 19:51:22.660509   75954 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 19:51:22.661496   75954 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:51:22.661571   75954 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 19:51:22.677956   75954 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 19:51:22.678000   75954 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0401 19:51:22.678027   75954 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0401 19:51:22.678221   75954 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0401 19:51:22.678285   75954 cni.go:84] Creating CNI manager for ""
	I0401 19:51:22.678302   75954 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:51:22.678312   75954 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0401 19:51:22.678386   75954 start.go:340] cluster config:
	{Name:newest-cni-705837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:newest-cni-705837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:51:22.678508   75954 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:51:22.680755   75954 out.go:177] * Starting "newest-cni-705837" primary control-plane node in "newest-cni-705837" cluster
	I0401 19:51:22.681975   75954 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0401 19:51:22.682019   75954 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0401 19:51:22.682035   75954 cache.go:56] Caching tarball of preloaded images
	I0401 19:51:22.682109   75954 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 19:51:22.682120   75954 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.0 on crio
	I0401 19:51:22.682201   75954 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/config.json ...
	I0401 19:51:22.682224   75954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/newest-cni-705837/config.json: {Name:mk1747df3b632b74720f206e606cf8d8eb1fd247 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:51:22.682355   75954 start.go:360] acquireMachinesLock for newest-cni-705837: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 19:51:22.682402   75954 start.go:364] duration metric: took 34.429µs to acquireMachinesLock for "newest-cni-705837"
	I0401 19:51:22.682423   75954 start.go:93] Provisioning new machine with config: &{Name:newest-cni-705837 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.0-rc.0 ClusterName:newest-cni-705837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:51:22.682476   75954 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.636854440Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712001087636823875,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c7bc04f-5126-471e-8337-43045d93672b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.637818995Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a070a60-3723-42ab-9975-a9d1a9b3b53c name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.637922208Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a070a60-3723-42ab-9975-a9d1a9b3b53c name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.638236942Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e2c4351e647df49ae68dd6f2fa48f97da0f1ed020146f07c9bbdb71c7322f49,PodSandboxId:4701e6ea238d3a457ae5d4bc391b2accae58745f1cf91ea34bcc52cd75572c95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000330615521794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8285w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c450ac4a-974e-4322-9857-fb65792a142b,},Annotations:map[string]string{io.kubernetes.container.hash: dbadeb1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2af3697d6e04bee6abaee3f98618549c39fca9d37eddd9100e9771f5eba7b1,PodSandboxId:3f5d21b0de00d1968a6ebc70f5fb997ad6e4dc10ac8a3026c5fd5168a5cc3c63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000330510043914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wmbsp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 7a73f081-42f4-4854-8785-25e54eb0a391,},Annotations:map[string]string{io.kubernetes.container.hash: 1d4d9764,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ca3d05f0a38f853d588e5798478f86bb56b88053de18a36ea6b1e870765da7,PodSandboxId:cda23238357a2063801c1abff5e6ad8f29637f887f36a5e983eee1fc766fa94b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNIN
G,CreatedAt:1712000330235492797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5dmtl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c243321b-b01a-4fd5-895a-888d18ee8527,},Annotations:map[string]string{io.kubernetes.container.hash: 1e020a5c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9502cf2be2504fa95b6c845c5edbe06eec2770fd8f386b59ff0912c421b5487,PodSandboxId:10ec86b24247f820acd7ac516c02d1aa6ce20c41db4c2edbd2a1132ef78f6beb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171200033028
7549498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 844e010a-3bee-4fd1-942f-10fa50306617,},Annotations:map[string]string{io.kubernetes.container.hash: 270324,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d8c76c0d24fb6089861d6209885e1e72f2160d2a54aa2ae20ee28159bf7d04f,PodSandboxId:7e6e848a4b8f86f422c68afd32a16ed2602dcfcff914090100461fbebee7046f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712000308571586900,Labels:map[
string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b789cd5a965a93fdde5e5001723f860,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72b29fac8ad2c0ec17d315f60a2c02d84311bda4e914417b34f76337547f7e08,PodSandboxId:3192a1acf8fe3aa65d5c638eb83b366935becfb9224ae3954541ddae7e0c414d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712000308553339506,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2756d053566b209913a2136d1c6d31a2,},Annotations:map[string]string{io.kubernetes.container.hash: 99525366,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a23edb2b9de3784d6936a19fdaf8e118994492e2fced9fefada45236cb9557e,PodSandboxId:e74d490fbbd3448b2889e49065c366b4bad295c4c2e353146c37b612926968a1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712000308512361713,Labels:map[string]string{io.kubernetes.container.name: kub
e-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f722d8aca3d6408d9cd66a3365e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf976a2ea41c7979ac65b40414e90e16efe03daee69ec3f9ce96f1244b6438c,PodSandboxId:f1639384d1e8344ca240afa1c5d14eace564211fc2c6c7589db56929dc22cb7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712000308438602838,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0e863cc75ae379be03fd049b1c5a0e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d785418,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a070a60-3723-42ab-9975-a9d1a9b3b53c name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.705337723Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dbfe85bd-e0b9-46e9-bcc9-aa72abf410ce name=/runtime.v1.RuntimeService/Version
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.705477728Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dbfe85bd-e0b9-46e9-bcc9-aa72abf410ce name=/runtime.v1.RuntimeService/Version
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.719544958Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2399066-8d87-459b-a170-0843d7d567ba name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.720450588Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712001087720084553,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2399066-8d87-459b-a170-0843d7d567ba name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.721420158Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef74d27f-be91-4ae3-8e75-e798c61f5ceb name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.721537212Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef74d27f-be91-4ae3-8e75-e798c61f5ceb name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.721802385Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e2c4351e647df49ae68dd6f2fa48f97da0f1ed020146f07c9bbdb71c7322f49,PodSandboxId:4701e6ea238d3a457ae5d4bc391b2accae58745f1cf91ea34bcc52cd75572c95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000330615521794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8285w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c450ac4a-974e-4322-9857-fb65792a142b,},Annotations:map[string]string{io.kubernetes.container.hash: dbadeb1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2af3697d6e04bee6abaee3f98618549c39fca9d37eddd9100e9771f5eba7b1,PodSandboxId:3f5d21b0de00d1968a6ebc70f5fb997ad6e4dc10ac8a3026c5fd5168a5cc3c63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000330510043914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wmbsp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 7a73f081-42f4-4854-8785-25e54eb0a391,},Annotations:map[string]string{io.kubernetes.container.hash: 1d4d9764,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ca3d05f0a38f853d588e5798478f86bb56b88053de18a36ea6b1e870765da7,PodSandboxId:cda23238357a2063801c1abff5e6ad8f29637f887f36a5e983eee1fc766fa94b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNIN
G,CreatedAt:1712000330235492797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5dmtl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c243321b-b01a-4fd5-895a-888d18ee8527,},Annotations:map[string]string{io.kubernetes.container.hash: 1e020a5c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9502cf2be2504fa95b6c845c5edbe06eec2770fd8f386b59ff0912c421b5487,PodSandboxId:10ec86b24247f820acd7ac516c02d1aa6ce20c41db4c2edbd2a1132ef78f6beb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171200033028
7549498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 844e010a-3bee-4fd1-942f-10fa50306617,},Annotations:map[string]string{io.kubernetes.container.hash: 270324,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d8c76c0d24fb6089861d6209885e1e72f2160d2a54aa2ae20ee28159bf7d04f,PodSandboxId:7e6e848a4b8f86f422c68afd32a16ed2602dcfcff914090100461fbebee7046f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712000308571586900,Labels:map[
string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b789cd5a965a93fdde5e5001723f860,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72b29fac8ad2c0ec17d315f60a2c02d84311bda4e914417b34f76337547f7e08,PodSandboxId:3192a1acf8fe3aa65d5c638eb83b366935becfb9224ae3954541ddae7e0c414d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712000308553339506,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2756d053566b209913a2136d1c6d31a2,},Annotations:map[string]string{io.kubernetes.container.hash: 99525366,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a23edb2b9de3784d6936a19fdaf8e118994492e2fced9fefada45236cb9557e,PodSandboxId:e74d490fbbd3448b2889e49065c366b4bad295c4c2e353146c37b612926968a1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712000308512361713,Labels:map[string]string{io.kubernetes.container.name: kub
e-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f722d8aca3d6408d9cd66a3365e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf976a2ea41c7979ac65b40414e90e16efe03daee69ec3f9ce96f1244b6438c,PodSandboxId:f1639384d1e8344ca240afa1c5d14eace564211fc2c6c7589db56929dc22cb7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712000308438602838,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0e863cc75ae379be03fd049b1c5a0e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d785418,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ef74d27f-be91-4ae3-8e75-e798c61f5ceb name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.770655261Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2b3173a-9ba7-442c-ae00-0a0b8cd2d387 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.770731804Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2b3173a-9ba7-442c-ae00-0a0b8cd2d387 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.777386717Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=18b31794-aa32-4a75-848b-074473c5dff1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.778952352Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712001087778922286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=18b31794-aa32-4a75-848b-074473c5dff1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.780006511Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a092805b-e932-4df8-86e0-078f9c1d6628 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.780061286Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a092805b-e932-4df8-86e0-078f9c1d6628 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.780339401Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e2c4351e647df49ae68dd6f2fa48f97da0f1ed020146f07c9bbdb71c7322f49,PodSandboxId:4701e6ea238d3a457ae5d4bc391b2accae58745f1cf91ea34bcc52cd75572c95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000330615521794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8285w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c450ac4a-974e-4322-9857-fb65792a142b,},Annotations:map[string]string{io.kubernetes.container.hash: dbadeb1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2af3697d6e04bee6abaee3f98618549c39fca9d37eddd9100e9771f5eba7b1,PodSandboxId:3f5d21b0de00d1968a6ebc70f5fb997ad6e4dc10ac8a3026c5fd5168a5cc3c63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000330510043914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wmbsp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 7a73f081-42f4-4854-8785-25e54eb0a391,},Annotations:map[string]string{io.kubernetes.container.hash: 1d4d9764,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ca3d05f0a38f853d588e5798478f86bb56b88053de18a36ea6b1e870765da7,PodSandboxId:cda23238357a2063801c1abff5e6ad8f29637f887f36a5e983eee1fc766fa94b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNIN
G,CreatedAt:1712000330235492797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5dmtl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c243321b-b01a-4fd5-895a-888d18ee8527,},Annotations:map[string]string{io.kubernetes.container.hash: 1e020a5c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9502cf2be2504fa95b6c845c5edbe06eec2770fd8f386b59ff0912c421b5487,PodSandboxId:10ec86b24247f820acd7ac516c02d1aa6ce20c41db4c2edbd2a1132ef78f6beb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171200033028
7549498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 844e010a-3bee-4fd1-942f-10fa50306617,},Annotations:map[string]string{io.kubernetes.container.hash: 270324,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d8c76c0d24fb6089861d6209885e1e72f2160d2a54aa2ae20ee28159bf7d04f,PodSandboxId:7e6e848a4b8f86f422c68afd32a16ed2602dcfcff914090100461fbebee7046f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712000308571586900,Labels:map[
string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b789cd5a965a93fdde5e5001723f860,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72b29fac8ad2c0ec17d315f60a2c02d84311bda4e914417b34f76337547f7e08,PodSandboxId:3192a1acf8fe3aa65d5c638eb83b366935becfb9224ae3954541ddae7e0c414d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712000308553339506,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2756d053566b209913a2136d1c6d31a2,},Annotations:map[string]string{io.kubernetes.container.hash: 99525366,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a23edb2b9de3784d6936a19fdaf8e118994492e2fced9fefada45236cb9557e,PodSandboxId:e74d490fbbd3448b2889e49065c366b4bad295c4c2e353146c37b612926968a1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712000308512361713,Labels:map[string]string{io.kubernetes.container.name: kub
e-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f722d8aca3d6408d9cd66a3365e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf976a2ea41c7979ac65b40414e90e16efe03daee69ec3f9ce96f1244b6438c,PodSandboxId:f1639384d1e8344ca240afa1c5d14eace564211fc2c6c7589db56929dc22cb7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712000308438602838,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0e863cc75ae379be03fd049b1c5a0e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d785418,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a092805b-e932-4df8-86e0-078f9c1d6628 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.815766179Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=564fc63e-21f4-4796-b881-bdb3fcfeae3a name=/runtime.v1.RuntimeService/Version
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.815903366Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=564fc63e-21f4-4796-b881-bdb3fcfeae3a name=/runtime.v1.RuntimeService/Version
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.817910571Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c0f5661a-ea6f-403c-b5d4-3f724bdffa79 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.818989902Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712001087818959978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97389,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0f5661a-ea6f-403c-b5d4-3f724bdffa79 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.820107843Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1796b6a3-cac6-439e-a487-1a48676a6f5c name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.820252674Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1796b6a3-cac6-439e-a487-1a48676a6f5c name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:51:27 no-preload-472858 crio[702]: time="2024-04-01 19:51:27.820445767Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0e2c4351e647df49ae68dd6f2fa48f97da0f1ed020146f07c9bbdb71c7322f49,PodSandboxId:4701e6ea238d3a457ae5d4bc391b2accae58745f1cf91ea34bcc52cd75572c95,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000330615521794,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-8285w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c450ac4a-974e-4322-9857-fb65792a142b,},Annotations:map[string]string{io.kubernetes.container.hash: dbadeb1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2af3697d6e04bee6abaee3f98618549c39fca9d37eddd9100e9771f5eba7b1,PodSandboxId:3f5d21b0de00d1968a6ebc70f5fb997ad6e4dc10ac8a3026c5fd5168a5cc3c63,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1712000330510043914,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wmbsp,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 7a73f081-42f4-4854-8785-25e54eb0a391,},Annotations:map[string]string{io.kubernetes.container.hash: 1d4d9764,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ca3d05f0a38f853d588e5798478f86bb56b88053de18a36ea6b1e870765da7,PodSandboxId:cda23238357a2063801c1abff5e6ad8f29637f887f36a5e983eee1fc766fa94b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652,State:CONTAINER_RUNNIN
G,CreatedAt:1712000330235492797,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5dmtl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c243321b-b01a-4fd5-895a-888d18ee8527,},Annotations:map[string]string{io.kubernetes.container.hash: 1e020a5c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9502cf2be2504fa95b6c845c5edbe06eec2770fd8f386b59ff0912c421b5487,PodSandboxId:10ec86b24247f820acd7ac516c02d1aa6ce20c41db4c2edbd2a1132ef78f6beb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171200033028
7549498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 844e010a-3bee-4fd1-942f-10fa50306617,},Annotations:map[string]string{io.kubernetes.container.hash: 270324,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d8c76c0d24fb6089861d6209885e1e72f2160d2a54aa2ae20ee28159bf7d04f,PodSandboxId:7e6e848a4b8f86f422c68afd32a16ed2602dcfcff914090100461fbebee7046f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5,State:CONTAINER_RUNNING,CreatedAt:1712000308571586900,Labels:map[
string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b789cd5a965a93fdde5e5001723f860,},Annotations:map[string]string{io.kubernetes.container.hash: e817c594,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72b29fac8ad2c0ec17d315f60a2c02d84311bda4e914417b34f76337547f7e08,PodSandboxId:3192a1acf8fe3aa65d5c638eb83b366935becfb9224ae3954541ddae7e0c414d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1712000308553339506,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2756d053566b209913a2136d1c6d31a2,},Annotations:map[string]string{io.kubernetes.container.hash: 99525366,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a23edb2b9de3784d6936a19fdaf8e118994492e2fced9fefada45236cb9557e,PodSandboxId:e74d490fbbd3448b2889e49065c366b4bad295c4c2e353146c37b612926968a1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a,State:CONTAINER_RUNNING,CreatedAt:1712000308512361713,Labels:map[string]string{io.kubernetes.container.name: kub
e-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f722d8aca3d6408d9cd66a3365e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: e124cbce,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddf976a2ea41c7979ac65b40414e90e16efe03daee69ec3f9ce96f1244b6438c,PodSandboxId:f1639384d1e8344ca240afa1c5d14eace564211fc2c6c7589db56929dc22cb7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3,State:CONTAINER_RUNNING,CreatedAt:1712000308438602838,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-472858,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0e863cc75ae379be03fd049b1c5a0e,},Annotations:map[string]string{io.kubernetes.container.hash: 2d785418,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1796b6a3-cac6-439e-a487-1a48676a6f5c name=/runtime.v1.RuntimeService/ListContainers
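
The Request/Response pairs above are CRI-O's gRPC interceptor logging at debug level; the repeating Version / ImageFsInfo / ListContainers cycle is most likely the kubelet's periodic runtime polling rather than anything test-specific. If a quieter capture is ever wanted, the runtime's log level can be lowered with a drop-in config (a sketch only, assuming the stock /etc/crio/crio.conf.d directory and crio.service unit present in the minikube guest):

  out/minikube-linux-amd64 -p no-preload-472858 ssh
  # inside the guest:
  printf '[crio.runtime]\nlog_level = "info"\n' | sudo tee /etc/crio/crio.conf.d/10-loglevel.conf
  sudo systemctl restart crio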
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0e2c4351e647d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   12 minutes ago      Running             coredns                   0                   4701e6ea238d3       coredns-7db6d8ff4d-8285w
	bd2af3697d6e0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   12 minutes ago      Running             coredns                   0                   3f5d21b0de00d       coredns-7db6d8ff4d-wmbsp
	f9502cf2be250       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 minutes ago      Running             storage-provisioner       0                   10ec86b24247f       storage-provisioner
	46ca3d05f0a38       33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652   12 minutes ago      Running             kube-proxy                0                   cda23238357a2       kube-proxy-5dmtl
	7d8c76c0d24fb       fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5   12 minutes ago      Running             kube-scheduler            2                   7e6e848a4b8f8       kube-scheduler-no-preload-472858
	72b29fac8ad2c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   12 minutes ago      Running             etcd                      2                   3192a1acf8fe3       etcd-no-preload-472858
	8a23edb2b9de3       ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a   12 minutes ago      Running             kube-controller-manager   3                   e74d490fbbd34       kube-controller-manager-no-preload-472858
	ddf976a2ea41c       e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3   12 minutes ago      Running             kube-apiserver            3                   f1639384d1e83       kube-apiserver-no-preload-472858
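
The table above is the CRI runtime's view of the node at capture time: every expected control-plane and kube-system container is Running, but there is no metrics-server container in the list even though the pod appears in the node description further down. If the no-preload-472858 profile is still up, the same listing can be reproduced over SSH (a sketch; it assumes crictl is on the guest's PATH, as in the standard minikube image, and a kubeconfig context named after the profile, as elsewhere in this report):

  out/minikube-linux-amd64 -p no-preload-472858 ssh -- sudo crictl ps -a
  # or the API-server-side view:
  kubectl --context no-preload-472858 get pods -A -o wide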
	
	
	==> coredns [0e2c4351e647df49ae68dd6f2fa48f97da0f1ed020146f07c9bbdb71c7322f49] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [bd2af3697d6e04bee6abaee3f98618549c39fca9d37eddd9100e9771f5eba7b1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-472858
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-472858
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2
	                    minikube.k8s.io/name=no-preload-472858
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_01T19_38_35_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Apr 2024 19:38:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-472858
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Apr 2024 19:51:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Apr 2024 19:49:07 +0000   Mon, 01 Apr 2024 19:38:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Apr 2024 19:49:07 +0000   Mon, 01 Apr 2024 19:38:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Apr 2024 19:49:07 +0000   Mon, 01 Apr 2024 19:38:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Apr 2024 19:49:07 +0000   Mon, 01 Apr 2024 19:38:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.119
	  Hostname:    no-preload-472858
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f58e7d8dd7b64c348661c23ebbbcfe34
	  System UUID:                f58e7d8d-d7b6-4c34-8661-c23ebbbcfe34
	  Boot ID:                    7413a65d-979c-478f-b26e-c08fd2fd5be2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.0
	  Kube-Proxy Version:         v1.30.0-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-8285w                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-7db6d8ff4d-wmbsp                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-no-preload-472858                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kube-apiserver-no-preload-472858             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-no-preload-472858    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-5dmtl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-no-preload-472858             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-569cc877fc-wj2tt              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node no-preload-472858 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node no-preload-472858 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node no-preload-472858 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m   node-controller  Node no-preload-472858 event: Registered Node no-preload-472858 in Controller
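
The node description and event list above are a one-shot kubectl describe snapshot; if the cluster is still running, a fresh copy (including any newer events) can be pulled the same way, assuming the kubeconfig context is named after the profile as elsewhere in this report:

  kubectl --context no-preload-472858 describe node no-preload-472858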
	
	
	==> dmesg <==
	[  +5.070005] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.515202] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.757516] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr 1 19:32] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.064665] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068468] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.203188] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.132355] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.305124] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[ +17.314964] systemd-fstab-generator[1196]: Ignoring "noauto" option for root device
	[  +0.073112] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.892653] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[ +22.443307] kauditd_printk_skb: 90 callbacks suppressed
	[Apr 1 19:33] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.039978] kauditd_printk_skb: 30 callbacks suppressed
	[ +30.417040] kauditd_printk_skb: 24 callbacks suppressed
	[Apr 1 19:38] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.757296] systemd-fstab-generator[3945]: Ignoring "noauto" option for root device
	[  +7.563746] systemd-fstab-generator[4273]: Ignoring "noauto" option for root device
	[  +0.083322] kauditd_printk_skb: 57 callbacks suppressed
	[ +13.331040] systemd-fstab-generator[4476]: Ignoring "noauto" option for root device
	[  +0.116770] kauditd_printk_skb: 12 callbacks suppressed
	[Apr 1 19:39] kauditd_printk_skb: 76 callbacks suppressed
	
	
	==> etcd [72b29fac8ad2c0ec17d315f60a2c02d84311bda4e914417b34f76337547f7e08] <==
	{"level":"info","ts":"2024-04-01T19:38:28.979009Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-01T19:38:28.979227Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"a39a7858c1cd6fec","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-04-01T19:38:28.982617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a39a7858c1cd6fec switched to configuration voters=(11788867297199615980)"}
	{"level":"info","ts":"2024-04-01T19:38:28.98301Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"807e03d5c68d6646","local-member-id":"a39a7858c1cd6fec","added-peer-id":"a39a7858c1cd6fec","added-peer-peer-urls":["https://192.168.72.119:2380"]}
	{"level":"info","ts":"2024-04-01T19:38:29.926235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a39a7858c1cd6fec is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-01T19:38:29.926338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a39a7858c1cd6fec became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-01T19:38:29.92639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a39a7858c1cd6fec received MsgPreVoteResp from a39a7858c1cd6fec at term 1"}
	{"level":"info","ts":"2024-04-01T19:38:29.926426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a39a7858c1cd6fec became candidate at term 2"}
	{"level":"info","ts":"2024-04-01T19:38:29.92645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a39a7858c1cd6fec received MsgVoteResp from a39a7858c1cd6fec at term 2"}
	{"level":"info","ts":"2024-04-01T19:38:29.926479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a39a7858c1cd6fec became leader at term 2"}
	{"level":"info","ts":"2024-04-01T19:38:29.926508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a39a7858c1cd6fec elected leader a39a7858c1cd6fec at term 2"}
	{"level":"info","ts":"2024-04-01T19:38:29.930492Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"a39a7858c1cd6fec","local-member-attributes":"{Name:no-preload-472858 ClientURLs:[https://192.168.72.119:2379]}","request-path":"/0/members/a39a7858c1cd6fec/attributes","cluster-id":"807e03d5c68d6646","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-01T19:38:29.930888Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:38:29.931245Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T19:38:29.931639Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-01T19:38:29.937775Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.119:2379"}
	{"level":"info","ts":"2024-04-01T19:38:29.939616Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"807e03d5c68d6646","local-member-id":"a39a7858c1cd6fec","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:38:29.939812Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:38:29.941269Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-01T19:38:29.941379Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-01T19:38:29.941422Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-01T19:38:29.942737Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-01T19:48:30.025667Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":677}
	{"level":"info","ts":"2024-04-01T19:48:30.03618Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":677,"took":"9.78763ms","hash":905763652,"current-db-size-bytes":2109440,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2109440,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-04-01T19:48:30.036285Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":905763652,"revision":677,"compact-revision":-1}
	
	
	==> kernel <==
	 19:51:28 up 19 min,  0 users,  load average: 0.11, 0.17, 0.17
	Linux no-preload-472858 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ddf976a2ea41c7979ac65b40414e90e16efe03daee69ec3f9ce96f1244b6438c] <==
	I0401 19:44:32.890524       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:46:32.890043       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:46:32.890557       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0401 19:46:32.890593       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:46:32.890813       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:46:32.890994       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 19:46:32.892607       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:48:31.892779       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:48:31.892990       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0401 19:48:32.893253       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:48:32.893328       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0401 19:48:32.893343       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:48:32.893523       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:48:32.893632       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 19:48:32.894519       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:49:32.894362       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:49:32.894580       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0401 19:49:32.894594       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 19:49:32.894664       1 handler_proxy.go:93] no RequestInfo found in the context
	E0401 19:49:32.894756       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 19:49:32.896699       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
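
Every entry in the apiserver log above points at the same condition: the aggregated v1beta1.metrics.k8s.io API, served by metrics-server, keeps answering 503, so the OpenAPI aggregation controller requeues it indefinitely. That is consistent with the metrics-server container missing from the runtime's container list earlier. A quick way to confirm where it is stuck (a sketch; it assumes the usual profile-named context and the k8s-app=metrics-server label applied by the minikube addon):

  kubectl --context no-preload-472858 get apiservice v1beta1.metrics.k8s.io
  kubectl --context no-preload-472858 -n kube-system describe pod -l k8s-app=metrics-server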
	
	
	==> kube-controller-manager [8a23edb2b9de3784d6936a19fdaf8e118994492e2fced9fefada45236cb9557e] <==
	I0401 19:45:47.861550       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:46:17.299628       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:46:17.870360       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:46:47.305077       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:46:47.878399       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:47:17.311015       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:47:17.887563       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:47:47.318669       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:47:47.896269       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:48:17.325008       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:48:17.911583       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:48:47.331443       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:48:47.922807       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:49:17.337282       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:49:17.931339       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:49:47.342580       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:49:47.941958       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0401 19:49:56.808386       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="165.739µs"
	I0401 19:50:09.807062       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="82.416µs"
	E0401 19:50:17.349008       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:50:17.951462       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:50:47.354794       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:50:47.960785       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0401 19:51:17.363753       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0401 19:51:17.968770       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [46ca3d05f0a38f853d588e5798478f86bb56b88053de18a36ea6b1e870765da7] <==
	I0401 19:38:50.854926       1 server_linux.go:69] "Using iptables proxy"
	I0401 19:38:50.939277       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.119"]
	I0401 19:38:51.062492       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0401 19:38:51.062605       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0401 19:38:51.062720       1 server_linux.go:165] "Using iptables Proxier"
	I0401 19:38:51.066084       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0401 19:38:51.066355       1 server.go:872] "Version info" version="v1.30.0-rc.0"
	I0401 19:38:51.066556       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 19:38:51.067749       1 config.go:192] "Starting service config controller"
	I0401 19:38:51.067809       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 19:38:51.068018       1 config.go:101] "Starting endpoint slice config controller"
	I0401 19:38:51.068049       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 19:38:51.068803       1 config.go:319] "Starting node config controller"
	I0401 19:38:51.068883       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 19:38:51.169017       1 shared_informer.go:320] Caches are synced for node config
	I0401 19:38:51.169108       1 shared_informer.go:320] Caches are synced for service config
	I0401 19:38:51.169408       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7d8c76c0d24fb6089861d6209885e1e72f2160d2a54aa2ae20ee28159bf7d04f] <==
	E0401 19:38:31.916210       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 19:38:31.916354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0401 19:38:32.734357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 19:38:32.734412       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0401 19:38:32.809526       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0401 19:38:32.809619       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0401 19:38:32.840560       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 19:38:32.841290       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0401 19:38:32.872515       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 19:38:32.872609       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0401 19:38:32.920055       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 19:38:32.920243       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0401 19:38:32.985992       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 19:38:32.986063       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0401 19:38:33.025946       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 19:38:33.026003       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0401 19:38:33.031506       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 19:38:33.031555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0401 19:38:33.056988       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 19:38:33.057046       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0401 19:38:33.191520       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 19:38:33.191600       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0401 19:38:33.323205       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 19:38:33.323301       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0401 19:38:35.705232       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 19:48:50 no-preload-472858 kubelet[4280]: E0401 19:48:50.788744    4280 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wj2tt" podUID="5259722c-3d0b-468f-b941-419806e91177"
	Apr 01 19:49:01 no-preload-472858 kubelet[4280]: E0401 19:49:01.789240    4280 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wj2tt" podUID="5259722c-3d0b-468f-b941-419806e91177"
	Apr 01 19:49:15 no-preload-472858 kubelet[4280]: E0401 19:49:15.787479    4280 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wj2tt" podUID="5259722c-3d0b-468f-b941-419806e91177"
	Apr 01 19:49:29 no-preload-472858 kubelet[4280]: E0401 19:49:29.787089    4280 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wj2tt" podUID="5259722c-3d0b-468f-b941-419806e91177"
	Apr 01 19:49:34 no-preload-472858 kubelet[4280]: E0401 19:49:34.843662    4280 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 01 19:49:34 no-preload-472858 kubelet[4280]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 19:49:34 no-preload-472858 kubelet[4280]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 19:49:34 no-preload-472858 kubelet[4280]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 19:49:34 no-preload-472858 kubelet[4280]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 19:49:44 no-preload-472858 kubelet[4280]: E0401 19:49:44.835759    4280 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 01 19:49:44 no-preload-472858 kubelet[4280]: E0401 19:49:44.836841    4280 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 01 19:49:44 no-preload-472858 kubelet[4280]: E0401 19:49:44.837676    4280 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5wsfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-wj2tt_kube-system(5259722c-3d0b-468f-b941-419806e91177): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Apr 01 19:49:44 no-preload-472858 kubelet[4280]: E0401 19:49:44.837931    4280 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-wj2tt" podUID="5259722c-3d0b-468f-b941-419806e91177"
	Apr 01 19:49:56 no-preload-472858 kubelet[4280]: E0401 19:49:56.788049    4280 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wj2tt" podUID="5259722c-3d0b-468f-b941-419806e91177"
	Apr 01 19:50:09 no-preload-472858 kubelet[4280]: E0401 19:50:09.786862    4280 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wj2tt" podUID="5259722c-3d0b-468f-b941-419806e91177"
	Apr 01 19:50:23 no-preload-472858 kubelet[4280]: E0401 19:50:23.787079    4280 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wj2tt" podUID="5259722c-3d0b-468f-b941-419806e91177"
	Apr 01 19:50:34 no-preload-472858 kubelet[4280]: E0401 19:50:34.844469    4280 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 01 19:50:34 no-preload-472858 kubelet[4280]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 01 19:50:34 no-preload-472858 kubelet[4280]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 01 19:50:34 no-preload-472858 kubelet[4280]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 01 19:50:34 no-preload-472858 kubelet[4280]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 01 19:50:37 no-preload-472858 kubelet[4280]: E0401 19:50:37.787254    4280 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wj2tt" podUID="5259722c-3d0b-468f-b941-419806e91177"
	Apr 01 19:50:52 no-preload-472858 kubelet[4280]: E0401 19:50:52.786812    4280 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wj2tt" podUID="5259722c-3d0b-468f-b941-419806e91177"
	Apr 01 19:51:04 no-preload-472858 kubelet[4280]: E0401 19:51:04.786887    4280 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wj2tt" podUID="5259722c-3d0b-468f-b941-419806e91177"
	Apr 01 19:51:17 no-preload-472858 kubelet[4280]: E0401 19:51:17.787603    4280 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-wj2tt" podUID="5259722c-3d0b-468f-b941-419806e91177"
	
	
	==> storage-provisioner [f9502cf2be2504fa95b6c845c5edbe06eec2770fd8f386b59ff0912c421b5487] <==
	I0401 19:38:50.540773       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0401 19:38:50.571915       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0401 19:38:50.571995       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0401 19:38:50.600199       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0401 19:38:50.600417       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-472858_8e8fb087-282f-403f-822d-4406a8190986!
	I0401 19:38:50.601184       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"13e18572-9570-426f-89a4-6efeed51df99", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-472858_8e8fb087-282f-403f-822d-4406a8190986 became leader
	I0401 19:38:50.727383       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-472858_8e8fb087-282f-403f-822d-4406a8190986!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-472858 -n no-preload-472858
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-472858 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-wj2tt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-472858 describe pod metrics-server-569cc877fc-wj2tt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-472858 describe pod metrics-server-569cc877fc-wj2tt: exit status 1 (76.495322ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-wj2tt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-472858 describe pod metrics-server-569cc877fc-wj2tt: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (212.95s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (144.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
E0401 19:49:16.857239   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
E0401 19:49:45.495696   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/calico-408543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
E0401 19:50:06.173111   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/custom-flannel-408543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
E0401 19:50:58.744378   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/enable-default-cni-408543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.106:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.106:8443: connect: connection refused
(identical warning repeated 19 times; the connection was still being refused when the wait timed out)
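The warnings above all trace back to the same failed request: the kube-apiserver at 192.168.50.106:8443 is refusing connections, so the pod-list poll can never match the selector regardless of whether the dashboard pod exists. A minimal sketch of the equivalent manual check, assuming the same profile and kubeconfig context the test uses:

	kubectl --context old-k8s-version-163608 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# if this also reports "connection refused", check the apiserver component first:
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-163608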
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-163608 -n old-k8s-version-163608
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-163608 -n old-k8s-version-163608: exit status 2 (256.25942ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-163608" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-163608 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-163608 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.402µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-163608 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
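Once the apiserver is reachable again, the image check that failed here could be reproduced by hand. A sketch, using the same dashboard-metrics-scraper deployment the test tries to describe above (the jsonpath query is illustrative, not part of the test):

	kubectl --context old-k8s-version-163608 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'
	# the assertion expects this output to contain: registry.k8s.io/echoserver:1.4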
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163608 -n old-k8s-version-163608
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163608 -n old-k8s-version-163608: exit status 2 (237.21349ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-163608 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-163608 logs -n 25: (1.621093359s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p bridge-408543 sudo cat                              | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo                                  | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | containerd config dump                                 |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo                                  | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | systemctl status crio --all                            |                              |         |                |                     |                     |
	|         | --full --no-pager                                      |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo                                  | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo find                             | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |                |                     |                     |
	| ssh     | -p bridge-408543 sudo crio                             | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | config                                                 |                              |         |                |                     |                     |
	| delete  | -p bridge-408543                                       | bridge-408543                | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	| delete  | -p                                                     | disable-driver-mounts-580301 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | disable-driver-mounts-580301                           |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:24 UTC |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p no-preload-472858             | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC | 01 Apr 24 19:23 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p no-preload-472858                                   | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:23 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-882095            | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:24 UTC | 01 Apr 24 19:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-882095                                  | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:24 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-734648  | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC | 01 Apr 24 19:25 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC |                     |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-472858                  | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:25 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p no-preload-472858                                   | no-preload-472858            | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC | 01 Apr 24 19:38 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0                      |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-163608        | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-882095                 | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-882095                                  | embed-certs-882095           | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:26 UTC | 01 Apr 24 19:36 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-734648       | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-734648 | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:36 UTC |
	|         | default-k8s-diff-port-734648                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-163608                              | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-163608             | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC | 01 Apr 24 19:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-163608                              | old-k8s-version-163608       | jenkins | v1.33.0-beta.0 | 01 Apr 24 19:27 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 19:27:52
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 19:27:52.967684   71168 out.go:291] Setting OutFile to fd 1 ...
	I0401 19:27:52.967904   71168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:27:52.967912   71168 out.go:304] Setting ErrFile to fd 2...
	I0401 19:27:52.967916   71168 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:27:52.968071   71168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 19:27:52.968601   71168 out.go:298] Setting JSON to false
	I0401 19:27:52.969458   71168 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7825,"bootTime":1711991848,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:27:52.969511   71168 start.go:139] virtualization: kvm guest
	I0401 19:27:52.972337   71168 out.go:177] * [old-k8s-version-163608] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 19:27:52.973728   71168 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 19:27:52.973774   71168 notify.go:220] Checking for updates...
	I0401 19:27:52.975050   71168 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:27:52.976498   71168 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:27:52.977880   71168 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:27:52.979140   71168 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 19:27:52.980397   71168 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 19:27:52.982116   71168 config.go:182] Loaded profile config "old-k8s-version-163608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 19:27:52.982478   71168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:27:52.982569   71168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:27:52.996903   71168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44083
	I0401 19:27:52.997230   71168 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:27:52.997702   71168 main.go:141] libmachine: Using API Version  1
	I0401 19:27:52.997724   71168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:27:52.998082   71168 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:27:52.998286   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:27:53.000287   71168 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0401 19:27:53.001714   71168 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 19:27:53.001993   71168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:27:53.002030   71168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:27:53.016155   71168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0401 19:27:53.016524   71168 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:27:53.016981   71168 main.go:141] libmachine: Using API Version  1
	I0401 19:27:53.017003   71168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:27:53.017352   71168 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:27:53.017550   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:27:53.051163   71168 out.go:177] * Using the kvm2 driver based on existing profile
	I0401 19:27:53.052475   71168 start.go:297] selected driver: kvm2
	I0401 19:27:53.052488   71168 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:27:53.052621   71168 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 19:27:53.053266   71168 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:27:53.053349   71168 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 19:27:53.067629   71168 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 19:27:53.067994   71168 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:27:53.068065   71168 cni.go:84] Creating CNI manager for ""
	I0401 19:27:53.068083   71168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:27:53.068130   71168 start.go:340] cluster config:
	{Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:27:53.068640   71168 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:27:53.070506   71168 out.go:177] * Starting "old-k8s-version-163608" primary control-plane node in "old-k8s-version-163608" cluster
	I0401 19:27:53.071686   71168 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 19:27:53.071716   71168 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0401 19:27:53.071726   71168 cache.go:56] Caching tarball of preloaded images
	I0401 19:27:53.071807   71168 preload.go:173] Found /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 19:27:53.071818   71168 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0401 19:27:53.071904   71168 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/config.json ...
	I0401 19:27:53.072076   71168 start.go:360] acquireMachinesLock for old-k8s-version-163608: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 19:27:57.821850   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:00.893934   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:06.973950   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:10.045903   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:16.125969   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:19.197902   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:25.277903   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:28.349963   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:34.429888   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:37.501886   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:43.581910   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:46.653871   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:52.733856   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:28:55.805957   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:01.885878   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:04.957919   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:11.037896   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:14.109854   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:20.189885   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:23.261848   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:29.341931   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:32.414013   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:38.493870   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:41.565912   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:47.645887   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:50.717882   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:56.797886   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:29:59.869824   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:05.949894   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:09.021905   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:15.101943   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:18.173911   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:24.253875   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:27.325874   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:33.405945   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:36.477889   70284 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.119:22: connect: no route to host
	I0401 19:30:39.482773   70687 start.go:364] duration metric: took 3m52.901392005s to acquireMachinesLock for "embed-certs-882095"
	I0401 19:30:39.482825   70687 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:30:39.482831   70687 fix.go:54] fixHost starting: 
	I0401 19:30:39.483206   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:30:39.483272   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:30:39.498155   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0401 19:30:39.498587   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:30:39.499013   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:30:39.499032   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:30:39.499400   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:30:39.499572   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:30:39.499760   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:30:39.501361   70687 fix.go:112] recreateIfNeeded on embed-certs-882095: state=Stopped err=<nil>
	I0401 19:30:39.501398   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	W0401 19:30:39.501552   70687 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:30:39.504183   70687 out.go:177] * Restarting existing kvm2 VM for "embed-certs-882095" ...
	I0401 19:30:39.505410   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Start
	I0401 19:30:39.505549   70687 main.go:141] libmachine: (embed-certs-882095) Ensuring networks are active...
	I0401 19:30:39.506257   70687 main.go:141] libmachine: (embed-certs-882095) Ensuring network default is active
	I0401 19:30:39.506533   70687 main.go:141] libmachine: (embed-certs-882095) Ensuring network mk-embed-certs-882095 is active
	I0401 19:30:39.506892   70687 main.go:141] libmachine: (embed-certs-882095) Getting domain xml...
	I0401 19:30:39.507632   70687 main.go:141] libmachine: (embed-certs-882095) Creating domain...
	I0401 19:30:40.693316   70687 main.go:141] libmachine: (embed-certs-882095) Waiting to get IP...
	I0401 19:30:40.694095   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:40.694551   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:40.694597   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:40.694519   71595 retry.go:31] will retry after 283.185096ms: waiting for machine to come up
	I0401 19:30:40.979028   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:40.979500   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:40.979523   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:40.979452   71595 retry.go:31] will retry after 297.637907ms: waiting for machine to come up
	I0401 19:30:41.279111   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:41.279457   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:41.279479   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:41.279411   71595 retry.go:31] will retry after 366.625363ms: waiting for machine to come up
	I0401 19:30:39.480214   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:30:39.480252   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:30:39.480557   70284 buildroot.go:166] provisioning hostname "no-preload-472858"
	I0401 19:30:39.480583   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:30:39.480787   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:30:39.482626   70284 machine.go:97] duration metric: took 4m37.415031648s to provisionDockerMachine
	I0401 19:30:39.482666   70284 fix.go:56] duration metric: took 4m37.43830515s for fixHost
	I0401 19:30:39.482676   70284 start.go:83] releasing machines lock for "no-preload-472858", held for 4m37.438344965s
	W0401 19:30:39.482704   70284 start.go:713] error starting host: provision: host is not running
	W0401 19:30:39.482794   70284 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0401 19:30:39.482805   70284 start.go:728] Will try again in 5 seconds ...
	I0401 19:30:41.647682   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:41.648045   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:41.648097   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:41.648026   71595 retry.go:31] will retry after 373.762437ms: waiting for machine to come up
	I0401 19:30:42.023500   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:42.023868   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:42.023904   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:42.023836   71595 retry.go:31] will retry after 461.430639ms: waiting for machine to come up
	I0401 19:30:42.486384   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:42.486836   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:42.486863   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:42.486784   71595 retry.go:31] will retry after 718.511667ms: waiting for machine to come up
	I0401 19:30:43.206555   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:43.206983   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:43.207006   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:43.206939   71595 retry.go:31] will retry after 907.934415ms: waiting for machine to come up
	I0401 19:30:44.115840   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:44.116223   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:44.116259   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:44.116173   71595 retry.go:31] will retry after 1.178492069s: waiting for machine to come up
	I0401 19:30:45.295704   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:45.296117   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:45.296146   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:45.296071   71595 retry.go:31] will retry after 1.188920707s: waiting for machine to come up
	I0401 19:30:44.484802   70284 start.go:360] acquireMachinesLock for no-preload-472858: {Name:mk6b7472209a8db5f40be4c2f0565da7e0094c19 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0401 19:30:46.486217   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:46.486777   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:46.486816   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:46.486740   71595 retry.go:31] will retry after 2.12728618s: waiting for machine to come up
	I0401 19:30:48.617124   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:48.617521   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:48.617553   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:48.617468   71595 retry.go:31] will retry after 2.867613028s: waiting for machine to come up
	I0401 19:30:51.488009   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:51.491502   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:51.491533   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:51.488532   71595 retry.go:31] will retry after 3.42206094s: waiting for machine to come up
	I0401 19:30:54.911723   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:54.912098   70687 main.go:141] libmachine: (embed-certs-882095) DBG | unable to find current IP address of domain embed-certs-882095 in network mk-embed-certs-882095
	I0401 19:30:54.912127   70687 main.go:141] libmachine: (embed-certs-882095) DBG | I0401 19:30:54.912059   71595 retry.go:31] will retry after 4.263880792s: waiting for machine to come up
	I0401 19:31:00.450770   70962 start.go:364] duration metric: took 3m22.921307899s to acquireMachinesLock for "default-k8s-diff-port-734648"
	I0401 19:31:00.450836   70962 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:31:00.450854   70962 fix.go:54] fixHost starting: 
	I0401 19:31:00.451364   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:31:00.451401   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:31:00.467219   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45255
	I0401 19:31:00.467579   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:31:00.467998   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:31:00.468021   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:31:00.468368   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:31:00.468567   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:00.468740   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:31:00.470224   70962 fix.go:112] recreateIfNeeded on default-k8s-diff-port-734648: state=Stopped err=<nil>
	I0401 19:31:00.470251   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	W0401 19:31:00.470396   70962 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:31:00.472906   70962 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-734648" ...
	I0401 19:30:59.180302   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.180756   70687 main.go:141] libmachine: (embed-certs-882095) Found IP for machine: 192.168.39.190
	I0401 19:30:59.180778   70687 main.go:141] libmachine: (embed-certs-882095) Reserving static IP address...
	I0401 19:30:59.180794   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has current primary IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.181269   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "embed-certs-882095", mac: "52:54:00:8c:f1:a7", ip: "192.168.39.190"} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.181300   70687 main.go:141] libmachine: (embed-certs-882095) DBG | skip adding static IP to network mk-embed-certs-882095 - found existing host DHCP lease matching {name: "embed-certs-882095", mac: "52:54:00:8c:f1:a7", ip: "192.168.39.190"}
	I0401 19:30:59.181311   70687 main.go:141] libmachine: (embed-certs-882095) Reserved static IP address: 192.168.39.190
	I0401 19:30:59.181324   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Getting to WaitForSSH function...
	I0401 19:30:59.181331   70687 main.go:141] libmachine: (embed-certs-882095) Waiting for SSH to be available...
	I0401 19:30:59.183293   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.183599   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.183630   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.183756   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Using SSH client type: external
	I0401 19:30:59.183784   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa (-rw-------)
	I0401 19:30:59.183837   70687 main.go:141] libmachine: (embed-certs-882095) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:30:59.183863   70687 main.go:141] libmachine: (embed-certs-882095) DBG | About to run SSH command:
	I0401 19:30:59.183924   70687 main.go:141] libmachine: (embed-certs-882095) DBG | exit 0
	I0401 19:30:59.305707   70687 main.go:141] libmachine: (embed-certs-882095) DBG | SSH cmd err, output: <nil>: 
	I0401 19:30:59.306036   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetConfigRaw
	I0401 19:30:59.306679   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetIP
	I0401 19:30:59.309266   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.309680   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.309711   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.309938   70687 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/config.json ...
	I0401 19:30:59.310193   70687 machine.go:94] provisionDockerMachine start ...
	I0401 19:30:59.310219   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:30:59.310435   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.312549   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.312908   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.312930   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.313088   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.313247   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.313385   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.313502   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.313721   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:30:59.313894   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:30:59.313904   70687 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:30:59.418216   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:30:59.418244   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetMachineName
	I0401 19:30:59.418506   70687 buildroot.go:166] provisioning hostname "embed-certs-882095"
	I0401 19:30:59.418537   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetMachineName
	I0401 19:30:59.418703   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.421075   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.421411   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.421453   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.421534   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.421721   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.421867   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.421978   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.422122   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:30:59.422317   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:30:59.422332   70687 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-882095 && echo "embed-certs-882095" | sudo tee /etc/hostname
	I0401 19:30:59.541974   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-882095
	
	I0401 19:30:59.542006   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.544628   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.544992   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.545025   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.545193   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.545403   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.545566   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.545720   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.545906   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:30:59.546060   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:30:59.546077   70687 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-882095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-882095/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-882095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:30:59.660103   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:30:59.660134   70687 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:30:59.660161   70687 buildroot.go:174] setting up certificates
	I0401 19:30:59.660172   70687 provision.go:84] configureAuth start
	I0401 19:30:59.660193   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetMachineName
	I0401 19:30:59.660465   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetIP
	I0401 19:30:59.662943   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.663260   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.663302   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.663413   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.665390   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.665688   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.665719   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.665821   70687 provision.go:143] copyHostCerts
	I0401 19:30:59.665879   70687 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:30:59.665892   70687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:30:59.665956   70687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:30:59.666041   70687 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:30:59.666048   70687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:30:59.666071   70687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:30:59.666121   70687 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:30:59.666128   70687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:30:59.666148   70687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:30:59.666193   70687 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.embed-certs-882095 san=[127.0.0.1 192.168.39.190 embed-certs-882095 localhost minikube]
	I0401 19:30:59.761975   70687 provision.go:177] copyRemoteCerts
	I0401 19:30:59.762033   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:30:59.762058   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.764277   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.764601   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.764626   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.764832   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.765006   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.765155   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.765250   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:30:59.848158   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 19:30:59.875879   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:30:59.902573   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 19:30:59.928757   70687 provision.go:87] duration metric: took 268.570153ms to configureAuth
	I0401 19:30:59.928781   70687 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:30:59.928924   70687 config.go:182] Loaded profile config "embed-certs-882095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:30:59.928988   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:30:59.931187   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.931571   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:30:59.931600   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:30:59.931755   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:30:59.931914   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.932067   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:30:59.932176   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:30:59.932325   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:30:59.932506   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:30:59.932530   70687 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:31:00.214527   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:31:00.214552   70687 machine.go:97] duration metric: took 904.342981ms to provisionDockerMachine
	I0401 19:31:00.214563   70687 start.go:293] postStartSetup for "embed-certs-882095" (driver="kvm2")
	I0401 19:31:00.214574   70687 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:31:00.214587   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.214892   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:31:00.214920   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:31:00.217289   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.217580   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.217608   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.217828   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:31:00.218014   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.218137   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:31:00.218267   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:31:00.301379   70687 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:31:00.306211   70687 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:31:00.306231   70687 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:31:00.306284   70687 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:31:00.306377   70687 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:31:00.306459   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:31:00.316524   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:00.342848   70687 start.go:296] duration metric: took 128.272743ms for postStartSetup
	I0401 19:31:00.342887   70687 fix.go:56] duration metric: took 20.860054972s for fixHost
	I0401 19:31:00.342910   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:31:00.345429   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.345883   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.345915   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.346060   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:31:00.346288   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.346504   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.346656   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:31:00.346806   70687 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:00.346961   70687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0401 19:31:00.346972   70687 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 19:31:00.450606   70687 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999860.420567604
	
	I0401 19:31:00.450627   70687 fix.go:216] guest clock: 1711999860.420567604
	I0401 19:31:00.450635   70687 fix.go:229] Guest: 2024-04-01 19:31:00.420567604 +0000 UTC Remote: 2024-04-01 19:31:00.34289204 +0000 UTC m=+253.905703085 (delta=77.675564ms)
	I0401 19:31:00.450683   70687 fix.go:200] guest clock delta is within tolerance: 77.675564ms
	I0401 19:31:00.450693   70687 start.go:83] releasing machines lock for "embed-certs-882095", held for 20.967887876s
	I0401 19:31:00.450725   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.451011   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetIP
	I0401 19:31:00.453581   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.453959   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.453990   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.454112   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.454613   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.454788   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:31:00.454844   70687 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:31:00.454886   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:31:00.454997   70687 ssh_runner.go:195] Run: cat /version.json
	I0401 19:31:00.455019   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:31:00.457540   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.457811   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.457846   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.457878   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.458053   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:31:00.458141   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:00.458173   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:00.458217   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.458295   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:31:00.458387   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:31:00.458471   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:31:00.458556   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:31:00.458602   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:31:00.458741   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:31:00.569039   70687 ssh_runner.go:195] Run: systemctl --version
	I0401 19:31:00.575452   70687 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:31:00.728549   70687 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:31:00.735559   70687 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:31:00.735642   70687 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:31:00.756640   70687 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:31:00.756669   70687 start.go:494] detecting cgroup driver to use...
	I0401 19:31:00.756743   70687 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:31:00.776638   70687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:31:00.793006   70687 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:31:00.793063   70687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:31:00.809240   70687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:31:00.825245   70687 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:31:00.952595   70687 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:31:01.109771   70687 docker.go:233] disabling docker service ...
	I0401 19:31:01.109841   70687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:31:01.126814   70687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:31:01.141976   70687 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:31:01.301634   70687 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:31:01.440350   70687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:31:01.458083   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:31:01.479653   70687 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:31:01.479730   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.492598   70687 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:31:01.492677   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.506469   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.521981   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.534406   70687 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:31:01.546817   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.558857   70687 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.578922   70687 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:01.593381   70687 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:31:01.605265   70687 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:31:01.605341   70687 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:31:01.621681   70687 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:31:01.633336   70687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:01.770373   70687 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:31:01.927892   70687 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:31:01.927952   70687 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:31:01.935046   70687 start.go:562] Will wait 60s for crictl version
	I0401 19:31:01.935101   70687 ssh_runner.go:195] Run: which crictl
	I0401 19:31:01.940563   70687 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:31:01.986956   70687 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:31:01.987030   70687 ssh_runner.go:195] Run: crio --version
	I0401 19:31:02.018567   70687 ssh_runner.go:195] Run: crio --version
	I0401 19:31:02.059077   70687 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 19:31:00.474118   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Start
	I0401 19:31:00.474275   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Ensuring networks are active...
	I0401 19:31:00.474896   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Ensuring network default is active
	I0401 19:31:00.475289   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Ensuring network mk-default-k8s-diff-port-734648 is active
	I0401 19:31:00.475650   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Getting domain xml...
	I0401 19:31:00.476263   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Creating domain...
	I0401 19:31:01.736646   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting to get IP...
	I0401 19:31:01.737490   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:01.737889   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:01.737939   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:01.737867   71724 retry.go:31] will retry after 198.445345ms: waiting for machine to come up
	I0401 19:31:01.938446   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:01.938981   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:01.939012   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:01.938936   71724 retry.go:31] will retry after 320.128802ms: waiting for machine to come up
	I0401 19:31:02.260257   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:02.260673   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:02.260703   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:02.260633   71724 retry.go:31] will retry after 357.316906ms: waiting for machine to come up
	I0401 19:31:02.060343   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetIP
	I0401 19:31:02.063382   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:02.063775   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:31:02.063808   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:31:02.064047   70687 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0401 19:31:02.069227   70687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:02.085344   70687 kubeadm.go:877] updating cluster {Name:embed-certs-882095 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-882095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:31:02.085451   70687 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 19:31:02.085490   70687 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:02.139383   70687 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0401 19:31:02.139454   70687 ssh_runner.go:195] Run: which lz4
	I0401 19:31:02.144331   70687 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0401 19:31:02.149534   70687 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:31:02.149561   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0401 19:31:03.954448   70687 crio.go:462] duration metric: took 1.810143668s to copy over tarball
	I0401 19:31:03.954523   70687 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 19:31:06.445735   70687 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.491184732s)
	I0401 19:31:06.445759   70687 crio.go:469] duration metric: took 2.491285648s to extract the tarball
	I0401 19:31:06.445765   70687 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 19:31:02.620250   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:02.620729   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:02.620760   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:02.620666   71724 retry.go:31] will retry after 520.509423ms: waiting for machine to come up
	I0401 19:31:03.142471   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:03.142902   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:03.142930   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:03.142864   71724 retry.go:31] will retry after 714.309176ms: waiting for machine to come up
	I0401 19:31:03.858594   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:03.859071   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:03.859104   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:03.859035   71724 retry.go:31] will retry after 620.601084ms: waiting for machine to come up
	I0401 19:31:04.480923   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:04.481350   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:04.481381   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:04.481313   71724 retry.go:31] will retry after 1.00716549s: waiting for machine to come up
	I0401 19:31:05.489788   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:05.490243   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:05.490273   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:05.490186   71724 retry.go:31] will retry after 1.158564029s: waiting for machine to come up
	I0401 19:31:06.650440   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:06.650969   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:06.650997   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:06.650915   71724 retry.go:31] will retry after 1.172294728s: waiting for machine to come up
	I0401 19:31:06.485475   70687 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:06.532426   70687 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:31:06.532448   70687 cache_images.go:84] Images are preloaded, skipping loading
	I0401 19:31:06.532455   70687 kubeadm.go:928] updating node { 192.168.39.190 8443 v1.29.3 crio true true} ...
	I0401 19:31:06.532544   70687 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-882095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-882095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:31:06.532611   70687 ssh_runner.go:195] Run: crio config
	I0401 19:31:06.585119   70687 cni.go:84] Creating CNI manager for ""
	I0401 19:31:06.585144   70687 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:06.585158   70687 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:31:06.585185   70687 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.190 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-882095 NodeName:embed-certs-882095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:31:06.585374   70687 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.190
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-882095"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.190
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.190"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:31:06.585473   70687 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 19:31:06.596747   70687 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:31:06.596818   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:31:06.606959   70687 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0401 19:31:06.628202   70687 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:31:06.649043   70687 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0401 19:31:06.668400   70687 ssh_runner.go:195] Run: grep 192.168.39.190	control-plane.minikube.internal$ /etc/hosts
	I0401 19:31:06.672469   70687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:06.685666   70687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:06.806186   70687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:31:06.823315   70687 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095 for IP: 192.168.39.190
	I0401 19:31:06.823355   70687 certs.go:194] generating shared ca certs ...
	I0401 19:31:06.823376   70687 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:06.823569   70687 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:31:06.823645   70687 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:31:06.823659   70687 certs.go:256] generating profile certs ...
	I0401 19:31:06.823764   70687 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/client.key
	I0401 19:31:06.823872   70687 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/apiserver.key.c07921ce
	I0401 19:31:06.823945   70687 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/proxy-client.key
	I0401 19:31:06.824092   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:31:06.824132   70687 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:31:06.824145   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:31:06.824183   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:31:06.824223   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:31:06.824254   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:31:06.824309   70687 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:06.824942   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:31:06.867274   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:31:06.907288   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:31:06.948328   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:31:06.975058   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 19:31:07.003183   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 19:31:07.032030   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:31:07.061612   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/embed-certs-882095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:31:07.090149   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:31:07.116885   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:31:07.143296   70687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:31:07.169420   70687 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:31:07.188908   70687 ssh_runner.go:195] Run: openssl version
	I0401 19:31:07.195591   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:31:07.211583   70687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:31:07.217049   70687 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:31:07.217110   70687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:31:07.223751   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:31:07.237393   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:31:07.250523   70687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:07.255928   70687 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:07.255981   70687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:07.262373   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:31:07.275174   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:31:07.288039   70687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:31:07.293339   70687 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:31:07.293392   70687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:31:07.299983   70687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
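The openssl x509 -hash and ln -fs pairs above install each trusted CA under /etc/ssl/certs by its OpenSSL subject-hash name (hash.0), which is how OpenSSL-linked clients look the CA up at handshake time. A minimal Go sketch of that convention follows; the cert path is taken from the log as an example, and this is only an illustration, not minikube's certs.go.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert links a CA certificate into /etc/ssl/certs under its
// OpenSSL subject-hash name (<hash>.0), mirroring the repeated
// "openssl x509 -hash" plus "ln -fs" commands recorded above.
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Drop any stale link first so the symlink always points at this cert.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	// Example path from the log; writing under /etc/ssl/certs requires root.
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}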
	I0401 19:31:07.313120   70687 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:31:07.318425   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:31:07.325172   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:31:07.331674   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:31:07.338299   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:31:07.344896   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:31:07.351424   70687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
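Each openssl x509 -checkend 86400 run above asks whether the named certificate expires within the next 24 hours (a non-zero exit would mean it does and needs regeneration). The same question can be answered natively with crypto/x509; the sketch below assumes one of the paths from the log and is not the code behind these lines.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given window, the same check as "openssl x509 -checkend 86400".
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// One of the certificates checked in the log above.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}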
	I0401 19:31:07.357898   70687 kubeadm.go:391] StartCluster: {Name:embed-certs-882095 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-882095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:31:07.357995   70687 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:31:07.358047   70687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:07.401268   70687 cri.go:89] found id: ""
	I0401 19:31:07.401326   70687 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:31:07.414232   70687 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:31:07.414255   70687 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:31:07.414262   70687 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:31:07.414308   70687 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:31:07.425972   70687 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:31:07.426977   70687 kubeconfig.go:125] found "embed-certs-882095" server: "https://192.168.39.190:8443"
	I0401 19:31:07.428767   70687 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:31:07.440164   70687 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.190
	I0401 19:31:07.440191   70687 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:31:07.440201   70687 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:31:07.440244   70687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:07.484303   70687 cri.go:89] found id: ""
	I0401 19:31:07.484407   70687 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:31:07.505186   70687 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:31:07.518316   70687 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:31:07.518342   70687 kubeadm.go:156] found existing configuration files:
	
	I0401 19:31:07.518393   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:31:07.530759   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:31:07.530832   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:31:07.542799   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:31:07.553972   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:31:07.554031   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:31:07.565324   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:31:07.576244   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:31:07.576318   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:31:07.588874   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:31:07.600440   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:31:07.600526   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:31:07.611963   70687 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:31:07.623225   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:07.740800   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:09.050887   70687 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.310046744s)
	I0401 19:31:09.050920   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:09.266170   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:09.336585   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:09.422513   70687 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:31:09.422594   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:09.923709   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:10.422822   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:10.922892   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:10.946590   70687 api_server.go:72] duration metric: took 1.524076694s to wait for apiserver process to appear ...
	I0401 19:31:10.946627   70687 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:31:10.946650   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:07.825239   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:07.825629   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:07.825676   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:07.825586   71724 retry.go:31] will retry after 1.412332675s: waiting for machine to come up
	I0401 19:31:09.240010   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:09.240385   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:09.240416   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:09.240327   71724 retry.go:31] will retry after 2.601344034s: waiting for machine to come up
	I0401 19:31:11.843464   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:11.843948   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:11.843976   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:11.843900   71724 retry.go:31] will retry after 3.297720076s: waiting for machine to come up
	I0401 19:31:13.350274   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:31:13.350309   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:31:13.350325   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:13.383494   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:13.383543   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:13.447744   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:13.452796   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:13.452852   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:13.946971   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:13.951522   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:13.951554   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:14.447104   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:14.455165   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:14.455204   70687 api_server.go:103] status: https://192.168.39.190:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:14.947278   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:31:14.951487   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I0401 19:31:14.958647   70687 api_server.go:141] control plane version: v1.29.3
	I0401 19:31:14.958670   70687 api_server.go:131] duration metric: took 4.012036456s to wait for apiserver health ...
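The healthz loop above keeps polling https://192.168.39.190:8443/healthz, treating the 403 and 500 responses (with their post-start-hook breakdown) as "not yet ready" until a plain 200 ok comes back. A rough Go sketch of such a wait loop, assuming verification of the cluster's self-signed certificate is skipped; the real api_server.go logic differs in detail.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver healthz endpoint until it answers 200
// or the deadline passes, printing non-200 bodies much like the lines above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The test cluster's certificate is self-signed, so verification
		// is skipped in this illustration.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.190:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}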
	I0401 19:31:14.958687   70687 cni.go:84] Creating CNI manager for ""
	I0401 19:31:14.958693   70687 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:14.960494   70687 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:31:14.961899   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:31:14.973709   70687 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0401 19:31:14.998105   70687 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:31:15.008481   70687 system_pods.go:59] 8 kube-system pods found
	I0401 19:31:15.008525   70687 system_pods.go:61] "coredns-76f75df574-nvcq4" [663bd69b-6da8-4a66-b20f-ea1eb507096a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:31:15.008536   70687 system_pods.go:61] "etcd-embed-certs-882095" [2b56dddc-b309-4965-811e-459c59b86dac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0401 19:31:15.008551   70687 system_pods.go:61] "kube-apiserver-embed-certs-882095" [2e376ce4-504c-441a-baf8-0184a17e5bf4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0401 19:31:15.008561   70687 system_pods.go:61] "kube-controller-manager-embed-certs-882095" [e6bf3b2f-289b-4719-86f7-43e873fe8d85] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0401 19:31:15.008571   70687 system_pods.go:61] "kube-proxy-td6jk" [275536ff-4ec0-4d2c-8658-57aadda367b2] Running
	I0401 19:31:15.008580   70687 system_pods.go:61] "kube-scheduler-embed-certs-882095" [4551eb2a-9560-4d4f-aac0-9cfe6c790649] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0401 19:31:15.008591   70687 system_pods.go:61] "metrics-server-57f55c9bc5-g6z6c" [dc8aee6a-f101-4109-a259-351fddbddd44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:31:15.008599   70687 system_pods.go:61] "storage-provisioner" [82a76833-c874-45d8-8ba7-1a483c15a997] Running
	I0401 19:31:15.008609   70687 system_pods.go:74] duration metric: took 10.480741ms to wait for pod list to return data ...
	I0401 19:31:15.008622   70687 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:31:15.012256   70687 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:31:15.012289   70687 node_conditions.go:123] node cpu capacity is 2
	I0401 19:31:15.012303   70687 node_conditions.go:105] duration metric: took 3.672159ms to run NodePressure ...
	I0401 19:31:15.012327   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:15.288861   70687 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0401 19:31:15.293731   70687 kubeadm.go:733] kubelet initialised
	I0401 19:31:15.293750   70687 kubeadm.go:734] duration metric: took 4.868595ms waiting for restarted kubelet to initialise ...
	I0401 19:31:15.293758   70687 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:31:15.298657   70687 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-nvcq4" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.304795   70687 pod_ready.go:97] node "embed-certs-882095" hosting pod "coredns-76f75df574-nvcq4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.304813   70687 pod_ready.go:81] duration metric: took 6.134849ms for pod "coredns-76f75df574-nvcq4" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:15.304822   70687 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-882095" hosting pod "coredns-76f75df574-nvcq4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.304827   70687 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.309184   70687 pod_ready.go:97] node "embed-certs-882095" hosting pod "etcd-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.309204   70687 pod_ready.go:81] duration metric: took 4.369325ms for pod "etcd-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:15.309213   70687 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-882095" hosting pod "etcd-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.309221   70687 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.313737   70687 pod_ready.go:97] node "embed-certs-882095" hosting pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.313755   70687 pod_ready.go:81] duration metric: took 4.525801ms for pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:15.313764   70687 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-882095" hosting pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.313771   70687 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.401827   70687 pod_ready.go:97] node "embed-certs-882095" hosting pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.401857   70687 pod_ready.go:81] duration metric: took 88.077915ms for pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:15.401871   70687 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-882095" hosting pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-882095" has status "Ready":"False"
	I0401 19:31:15.401878   70687 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-td6jk" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.802462   70687 pod_ready.go:92] pod "kube-proxy-td6jk" in "kube-system" namespace has status "Ready":"True"
	I0401 19:31:15.802484   70687 pod_ready.go:81] duration metric: took 400.599194ms for pod "kube-proxy-td6jk" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.802494   70687 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:15.142653   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:15.143000   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | unable to find current IP address of domain default-k8s-diff-port-734648 in network mk-default-k8s-diff-port-734648
	I0401 19:31:15.143062   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | I0401 19:31:15.142972   71724 retry.go:31] will retry after 3.764823961s: waiting for machine to come up
	I0401 19:31:20.350903   71168 start.go:364] duration metric: took 3m27.278785625s to acquireMachinesLock for "old-k8s-version-163608"
	I0401 19:31:20.350993   71168 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:31:20.351010   71168 fix.go:54] fixHost starting: 
	I0401 19:31:20.351490   71168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:31:20.351571   71168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:31:20.368575   71168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38247
	I0401 19:31:20.368936   71168 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:31:20.369448   71168 main.go:141] libmachine: Using API Version  1
	I0401 19:31:20.369469   71168 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:31:20.369822   71168 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:31:20.370033   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:20.370195   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetState
	I0401 19:31:20.371625   71168 fix.go:112] recreateIfNeeded on old-k8s-version-163608: state=Stopped err=<nil>
	I0401 19:31:20.371681   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	W0401 19:31:20.371842   71168 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:31:20.374328   71168 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-163608" ...
	I0401 19:31:17.809256   70687 pod_ready.go:102] pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:19.809947   70687 pod_ready.go:102] pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:20.818455   70687 pod_ready.go:92] pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:31:20.818481   70687 pod_ready.go:81] duration metric: took 5.015979611s for pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:20.818493   70687 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace to be "Ready" ...
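The pod_ready waits above poll each system-critical pod until its Ready condition reports True, skipping pods whose node is itself not Ready. A simplified client-go sketch of that kind of wait; the kubeconfig path and pod name are examples, and the node-readiness special cases visible in the log are omitted.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls one pod until its Ready condition is True or the
// timeout elapses, a stripped-down version of the waits logged above.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	// Example kubeconfig path; the report's runs use per-profile configs.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "kube-scheduler-embed-certs-882095", 4*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}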
	I0401 19:31:18.910798   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:18.911231   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Found IP for machine: 192.168.61.145
	I0401 19:31:18.911266   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has current primary IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:18.911277   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Reserving static IP address...
	I0401 19:31:18.911761   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-734648", mac: "52:54:00:49:dc:50", ip: "192.168.61.145"} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:18.911795   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | skip adding static IP to network mk-default-k8s-diff-port-734648 - found existing host DHCP lease matching {name: "default-k8s-diff-port-734648", mac: "52:54:00:49:dc:50", ip: "192.168.61.145"}
	I0401 19:31:18.911819   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Reserved static IP address: 192.168.61.145
	I0401 19:31:18.911835   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Waiting for SSH to be available...
	I0401 19:31:18.911869   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Getting to WaitForSSH function...
	I0401 19:31:18.913767   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:18.914054   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:18.914082   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:18.914207   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Using SSH client type: external
	I0401 19:31:18.914236   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa (-rw-------)
	I0401 19:31:18.914278   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.145 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:31:18.914300   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | About to run SSH command:
	I0401 19:31:18.914313   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | exit 0
	I0401 19:31:19.037713   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | SSH cmd err, output: <nil>: 
	I0401 19:31:19.038080   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetConfigRaw
	I0401 19:31:19.038767   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetIP
	I0401 19:31:19.042390   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.043249   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.043311   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.043949   70962 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/config.json ...
	I0401 19:31:19.044504   70962 machine.go:94] provisionDockerMachine start ...
	I0401 19:31:19.044554   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:19.044916   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.047637   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.047908   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.047941   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.048088   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.048265   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.048408   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.048522   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.048636   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:19.048790   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:19.048800   70962 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:31:19.154415   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:31:19.154444   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetMachineName
	I0401 19:31:19.154683   70962 buildroot.go:166] provisioning hostname "default-k8s-diff-port-734648"
	I0401 19:31:19.154713   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetMachineName
	I0401 19:31:19.154887   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.157442   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.157867   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.157896   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.158041   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.158237   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.158402   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.158540   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.158713   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:19.158905   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:19.158920   70962 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-734648 && echo "default-k8s-diff-port-734648" | sudo tee /etc/hostname
	I0401 19:31:19.276129   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-734648
	
	I0401 19:31:19.276160   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.278657   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.278918   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.278940   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.279158   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.279353   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.279523   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.279671   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.279831   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:19.280057   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:19.280082   70962 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-734648' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-734648/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-734648' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:31:19.395730   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
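All of the provisioning above (hostname, /etc/hostname, the /etc/hosts entry) runs as one-off commands over SSH against the VM, authenticated with the machine's private key. A compact sketch of running a single command that way with golang.org/x/crypto/ssh; the address, user and key path are copied from the log as examples, and this is not libmachine's SSH runner.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH executes one command on the VM, the same way the provisioning
// steps above run "hostname", "tee /etc/hostname", and so on.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// The test VM's host key is not pinned in this illustration.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.61.145:22", "docker",
		"/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa",
		"hostname")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(out)
}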
	I0401 19:31:19.395755   70962 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:31:19.395779   70962 buildroot.go:174] setting up certificates
	I0401 19:31:19.395788   70962 provision.go:84] configureAuth start
	I0401 19:31:19.395798   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetMachineName
	I0401 19:31:19.396046   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetIP
	I0401 19:31:19.398668   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.399036   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.399065   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.399219   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.401309   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.401611   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.401656   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.401750   70962 provision.go:143] copyHostCerts
	I0401 19:31:19.401812   70962 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:31:19.401822   70962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:31:19.401876   70962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:31:19.401978   70962 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:31:19.401988   70962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:31:19.402015   70962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:31:19.402121   70962 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:31:19.402129   70962 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:31:19.402147   70962 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
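
The copyHostCerts step above deletes any stale ca.pem, cert.pem and key.pem under .minikube and re-copies them from .minikube/certs. A minimal sketch of that remove-then-copy pattern, assuming a local filesystem and illustrative paths (this is not the minikube source, just the same idea):

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// copyHostCert removes any existing copy of the certificate at dst and
// replaces it with the contents of src, mirroring the
// "found ..., removing ..." / "cp: ..." sequence in the log above.
func copyHostCert(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return fmt.Errorf("rm %s: %w", dst, err)
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o600)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Hypothetical base directory; the run above uses
	// /home/jenkins/minikube-integration/18233-10493/.minikube.
	base := os.ExpandEnv("$HOME/.minikube")
	for _, name := range []string{"ca.pem", "cert.pem", "key.pem"} {
		if err := copyHostCert(filepath.Join(base, "certs", name), filepath.Join(base, name)); err != nil {
			fmt.Fprintln(os.Stderr, "copy failed:", err)
		}
	}
}
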
	I0401 19:31:19.402205   70962 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-734648 san=[127.0.0.1 192.168.61.145 default-k8s-diff-port-734648 localhost minikube]
	I0401 19:31:19.655203   70962 provision.go:177] copyRemoteCerts
	I0401 19:31:19.655256   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:31:19.655281   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.658194   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.658512   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.658540   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.658693   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.658896   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.659039   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.659187   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:31:19.743131   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:31:19.771327   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0401 19:31:19.797350   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 19:31:19.824244   70962 provision.go:87] duration metric: took 428.444366ms to configureAuth
	I0401 19:31:19.824274   70962 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:31:19.824473   70962 config.go:182] Loaded profile config "default-k8s-diff-port-734648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:31:19.824563   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:19.827376   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.827798   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:19.827838   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:19.827984   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:19.828184   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.828352   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:19.828496   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:19.828653   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:19.828827   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:19.828865   70962 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:31:20.107291   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:31:20.107320   70962 machine.go:97] duration metric: took 1.062788118s to provisionDockerMachine
	I0401 19:31:20.107333   70962 start.go:293] postStartSetup for "default-k8s-diff-port-734648" (driver="kvm2")
	I0401 19:31:20.107347   70962 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:31:20.107369   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.107671   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:31:20.107693   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:20.110380   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.110739   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.110780   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.110895   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:20.111075   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.111218   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:20.111353   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:31:20.193908   70962 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:31:20.198544   70962 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:31:20.198572   70962 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:31:20.198639   70962 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:31:20.198704   70962 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:31:20.198788   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:31:20.209866   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:20.240362   70962 start.go:296] duration metric: took 133.016405ms for postStartSetup
	I0401 19:31:20.240399   70962 fix.go:56] duration metric: took 19.789546756s for fixHost
	I0401 19:31:20.240418   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:20.243069   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.243448   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.243479   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.243657   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:20.243865   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.244061   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.244209   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:20.244399   70962 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:20.244600   70962 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.145 22 <nil> <nil>}
	I0401 19:31:20.244616   70962 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0401 19:31:20.350752   70962 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999880.326440079
	
	I0401 19:31:20.350779   70962 fix.go:216] guest clock: 1711999880.326440079
	I0401 19:31:20.350789   70962 fix.go:229] Guest: 2024-04-01 19:31:20.326440079 +0000 UTC Remote: 2024-04-01 19:31:20.240403038 +0000 UTC m=+222.858311555 (delta=86.037041ms)
	I0401 19:31:20.350808   70962 fix.go:200] guest clock delta is within tolerance: 86.037041ms
	I0401 19:31:20.350812   70962 start.go:83] releasing machines lock for "default-k8s-diff-port-734648", held for 19.899997669s
	I0401 19:31:20.350838   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.351118   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetIP
	I0401 19:31:20.354040   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.354395   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.354413   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.354595   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.355068   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.355238   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:31:20.355317   70962 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:31:20.355356   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:20.355530   70962 ssh_runner.go:195] Run: cat /version.json
	I0401 19:31:20.355557   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:31:20.357970   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.358372   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.358405   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.358430   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.358585   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:20.358766   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.358807   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:20.358834   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:20.358957   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:20.359013   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:31:20.359150   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:31:20.359203   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:31:20.359292   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:31:20.359439   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:31:20.466422   70962 ssh_runner.go:195] Run: systemctl --version
	I0401 19:31:20.472949   70962 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:31:20.626069   70962 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:31:20.633425   70962 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:31:20.633497   70962 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:31:20.658883   70962 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:31:20.658910   70962 start.go:494] detecting cgroup driver to use...
	I0401 19:31:20.658979   70962 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:31:20.686302   70962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:31:20.704507   70962 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:31:20.704583   70962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:31:20.725216   70962 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:31:20.740635   70962 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:31:20.864184   70962 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:31:21.010752   70962 docker.go:233] disabling docker service ...
	I0401 19:31:21.010821   70962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:31:21.030718   70962 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:31:21.047787   70962 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:31:21.194455   70962 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:31:21.337547   70962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:31:21.357144   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:31:21.381709   70962 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:31:21.381782   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.393160   70962 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:31:21.393229   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.405047   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.416810   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.428947   70962 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:31:21.440886   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.452872   70962 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.473096   70962 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:21.484427   70962 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:31:21.494121   70962 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:31:21.494190   70962 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:31:21.509859   70962 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
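
The sequence above probes the net.bridge.bridge-nf-call-iptables sysctl, loads br_netfilter when the sysctl is absent, and then enables IPv4 forwarding. A sketch of the same probe-then-load flow, assuming root privileges (the log uses sudo) and simplified error handling:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureNetfilter mirrors the log: if the bridge-nf-call-iptables sysctl is
// missing, load br_netfilter (which creates it), then enable IPv4 forwarding.
func ensureNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Sysctl not present yet; loading the module is what creates it.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
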
	I0401 19:31:21.520329   70962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:21.671075   70962 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:31:21.818822   70962 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:31:21.818892   70962 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:31:21.825189   70962 start.go:562] Will wait 60s for crictl version
	I0401 19:31:21.825260   70962 ssh_runner.go:195] Run: which crictl
	I0401 19:31:21.830058   70962 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:31:21.869617   70962 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:31:21.869721   70962 ssh_runner.go:195] Run: crio --version
	I0401 19:31:21.906091   70962 ssh_runner.go:195] Run: crio --version
	I0401 19:31:21.946240   70962 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0401 19:31:21.947653   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetIP
	I0401 19:31:21.950691   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:21.951156   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:31:21.951201   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:31:21.951445   70962 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0401 19:31:21.959376   70962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:21.974226   70962 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-734648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-734648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.145 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:31:21.974348   70962 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0401 19:31:21.974426   70962 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:22.011856   70962 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0401 19:31:22.011930   70962 ssh_runner.go:195] Run: which lz4
	I0401 19:31:22.016672   70962 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 19:31:22.021864   70962 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:31:22.021893   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
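
The stat check above fails, so the ~403 MB preload tarball is copied to the guest before being extracted with tar further down. A sketch of that check-then-upload decision, with the remote path taken from the log and the actual transfer stubbed out (the real flow runs the stat and scp over SSH):

package main

import (
	"fmt"
	"os"
)

// needsPreloadUpload mirrors the existence check in the log: the tarball is
// only uploaded when stat on the remote path fails.
func needsPreloadUpload(remotePath string) bool {
	_, err := os.Stat(remotePath) // in the real flow this stat runs on the guest, over SSH
	return err != nil
}

func main() {
	const remote = "/preloaded.tar.lz4"
	if needsPreloadUpload(remote) {
		// The run above then scp's
		// .minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
		fmt.Println("would upload preload tarball to", remote)
	} else {
		fmt.Println("preload tarball already present, skipping upload")
	}
}
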
	I0401 19:31:20.375755   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .Start
	I0401 19:31:20.375932   71168 main.go:141] libmachine: (old-k8s-version-163608) Ensuring networks are active...
	I0401 19:31:20.376713   71168 main.go:141] libmachine: (old-k8s-version-163608) Ensuring network default is active
	I0401 19:31:20.377858   71168 main.go:141] libmachine: (old-k8s-version-163608) Ensuring network mk-old-k8s-version-163608 is active
	I0401 19:31:20.378278   71168 main.go:141] libmachine: (old-k8s-version-163608) Getting domain xml...
	I0401 19:31:20.378972   71168 main.go:141] libmachine: (old-k8s-version-163608) Creating domain...
	I0401 19:31:21.643237   71168 main.go:141] libmachine: (old-k8s-version-163608) Waiting to get IP...
	I0401 19:31:21.644082   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:21.644468   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:21.644535   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:21.644446   71902 retry.go:31] will retry after 208.251344ms: waiting for machine to come up
	I0401 19:31:21.854070   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:21.854545   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:21.854593   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:21.854527   71902 retry.go:31] will retry after 240.466964ms: waiting for machine to come up
	I0401 19:31:22.096940   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:22.097447   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:22.097470   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:22.097405   71902 retry.go:31] will retry after 480.217755ms: waiting for machine to come up
	I0401 19:31:22.579111   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:22.579596   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:22.579628   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:22.579518   71902 retry.go:31] will retry after 581.713487ms: waiting for machine to come up
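
While the old-k8s-version-163608 machine boots, the retry.go lines above poll for its DHCP lease with growing, slightly randomised delays. A minimal sketch of such a retry helper; the base delay, growth factor and jitter are assumptions for illustration, only the "will retry after ..." shape comes from the log:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or maxAttempts is
// reached, sleeping a growing, jittered delay between attempts.
func retryWithBackoff(fn func() error, maxAttempts int) error {
	delay := 200 * time.Millisecond
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err := fn(); err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	return errors.New("machine never reported an IP address")
}

func main() {
	attempts := 0
	_ = retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("no IP yet")
		}
		return nil
	}, 10)
	fmt.Println("machine came up after", attempts, "attempts")
}
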
	I0401 19:31:22.826723   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:25.326165   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:23.813558   70962 crio.go:462] duration metric: took 1.796902191s to copy over tarball
	I0401 19:31:23.813619   70962 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 19:31:26.447802   70962 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.634145928s)
	I0401 19:31:26.447840   70962 crio.go:469] duration metric: took 2.634257029s to extract the tarball
	I0401 19:31:26.447849   70962 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 19:31:26.488228   70962 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:26.535741   70962 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:31:26.535770   70962 cache_images.go:84] Images are preloaded, skipping loading
	I0401 19:31:26.535780   70962 kubeadm.go:928] updating node { 192.168.61.145 8444 v1.29.3 crio true true} ...
	I0401 19:31:26.535931   70962 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-734648 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.145
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-734648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:31:26.536019   70962 ssh_runner.go:195] Run: crio config
	I0401 19:31:26.590211   70962 cni.go:84] Creating CNI manager for ""
	I0401 19:31:26.590239   70962 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:26.590254   70962 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:31:26.590282   70962 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.145 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-734648 NodeName:default-k8s-diff-port-734648 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.145"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.145 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:31:26.590459   70962 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.145
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-734648"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.145
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.145"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:31:26.590533   70962 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0401 19:31:26.602186   70962 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:31:26.602264   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:31:26.616193   70962 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0401 19:31:26.636634   70962 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:31:26.660339   70962 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0401 19:31:26.687935   70962 ssh_runner.go:195] Run: grep 192.168.61.145	control-plane.minikube.internal$ /etc/hosts
	I0401 19:31:26.693966   70962 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.145	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:26.709876   70962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:26.854990   70962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:31:26.877303   70962 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648 for IP: 192.168.61.145
	I0401 19:31:26.877327   70962 certs.go:194] generating shared ca certs ...
	I0401 19:31:26.877350   70962 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:26.877578   70962 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:31:26.877621   70962 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:31:26.877637   70962 certs.go:256] generating profile certs ...
	I0401 19:31:26.877777   70962 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/client.key
	I0401 19:31:26.877864   70962 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/apiserver.key.e4671486
	I0401 19:31:26.877909   70962 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/proxy-client.key
	I0401 19:31:26.878007   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:31:26.878049   70962 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:31:26.878062   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:31:26.878094   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:31:26.878128   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:31:26.878153   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:31:26.878203   70962 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:26.879101   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:31:26.917600   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:31:26.968606   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:31:27.012527   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:31:27.078525   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 19:31:27.125195   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:31:27.157190   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:31:27.185434   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/default-k8s-diff-port-734648/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:31:27.215215   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:31:27.246938   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:31:27.277210   70962 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:31:27.307099   70962 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:31:27.326664   70962 ssh_runner.go:195] Run: openssl version
	I0401 19:31:27.333292   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:31:27.344724   70962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:27.350096   70962 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:27.350146   70962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:27.356421   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:31:27.368124   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:31:27.379331   70962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:31:27.384465   70962 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:31:27.384518   70962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:31:27.391192   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:31:27.403898   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:31:27.418676   70962 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:31:27.424254   70962 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:31:27.424308   70962 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:31:23.163331   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:23.163803   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:23.163838   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:23.163770   71902 retry.go:31] will retry after 737.12898ms: waiting for machine to come up
	I0401 19:31:23.902739   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:23.903192   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:23.903222   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:23.903139   71902 retry.go:31] will retry after 718.826495ms: waiting for machine to come up
	I0401 19:31:24.624169   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:24.624620   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:24.624648   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:24.624574   71902 retry.go:31] will retry after 1.020701715s: waiting for machine to come up
	I0401 19:31:25.647470   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:25.647957   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:25.647988   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:25.647921   71902 retry.go:31] will retry after 1.318891306s: waiting for machine to come up
	I0401 19:31:26.968134   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:26.968588   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:26.968613   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:26.968535   71902 retry.go:31] will retry after 1.465864517s: waiting for machine to come up
	I0401 19:31:27.752110   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:29.827324   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:27.431798   70962 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:31:27.749367   70962 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:31:27.757123   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:31:27.768626   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:31:27.778119   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:31:27.786893   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:31:27.797129   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:31:27.804804   70962 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
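
Each `openssl x509 -checkend 86400` call above verifies that a control-plane certificate remains valid for at least another 24 hours; only then does minikube skip regenerating it. An equivalent check in Go with crypto/x509, where the file path in main is a placeholder for the certs listed above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d, i.e. the condition that `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
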
	I0401 19:31:27.813194   70962 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-734648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-734648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.145 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:31:27.813274   70962 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:31:27.813325   70962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:27.864565   70962 cri.go:89] found id: ""
	I0401 19:31:27.864637   70962 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:31:27.876745   70962 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:31:27.876789   70962 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:31:27.876797   70962 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:31:27.876862   70962 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:31:27.887494   70962 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:31:27.888632   70962 kubeconfig.go:125] found "default-k8s-diff-port-734648" server: "https://192.168.61.145:8444"
	I0401 19:31:27.890729   70962 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:31:27.900847   70962 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.145
	I0401 19:31:27.900877   70962 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:31:27.900889   70962 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:31:27.900936   70962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:27.952874   70962 cri.go:89] found id: ""
	I0401 19:31:27.952954   70962 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:31:27.971647   70962 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:31:27.982541   70962 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:31:27.982576   70962 kubeadm.go:156] found existing configuration files:
	
	I0401 19:31:27.982612   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0401 19:31:27.992341   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:31:27.992414   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:31:28.002685   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0401 19:31:28.012599   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:31:28.012658   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:31:28.022731   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0401 19:31:28.033584   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:31:28.033661   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:31:28.044940   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0401 19:31:28.055832   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:31:28.055886   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:31:28.066919   70962 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:31:28.078715   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:28.212251   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:29.214190   70962 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.001904972s)
	I0401 19:31:29.214224   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:29.444484   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:29.536112   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:29.664087   70962 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:31:29.664201   70962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:30.165117   70962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:30.664872   70962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:30.707251   70962 api_server.go:72] duration metric: took 1.04316448s to wait for apiserver process to appear ...
	I0401 19:31:30.707280   70962 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:31:30.707297   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:30.707881   70962 api_server.go:269] stopped: https://192.168.61.145:8444/healthz: Get "https://192.168.61.145:8444/healthz": dial tcp 192.168.61.145:8444: connect: connection refused
	I0401 19:31:31.207434   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:28.435890   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:28.436304   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:28.436334   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:28.436255   71902 retry.go:31] will retry after 2.062597688s: waiting for machine to come up
	I0401 19:31:30.500523   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:30.500999   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:30.501027   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:30.500954   71902 retry.go:31] will retry after 2.068480339s: waiting for machine to come up
	I0401 19:31:32.571229   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:32.571603   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:32.571635   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:32.571550   71902 retry.go:31] will retry after 3.355965883s: waiting for machine to come up
	I0401 19:31:33.707613   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:31:33.707647   70962 api_server.go:103] status: https://192.168.61.145:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:31:33.707663   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:33.728509   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:31:33.728582   70962 api_server.go:103] status: https://192.168.61.145:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:31:34.208163   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:34.212754   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:34.212784   70962 api_server.go:103] status: https://192.168.61.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:34.708282   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:34.715268   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0401 19:31:34.715294   70962 api_server.go:103] status: https://192.168.61.145:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0401 19:31:35.207460   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:31:35.212542   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 200:
	ok
	I0401 19:31:35.219264   70962 api_server.go:141] control plane version: v1.29.3
	I0401 19:31:35.219287   70962 api_server.go:131] duration metric: took 4.512000334s to wait for apiserver health ...
	I0401 19:31:35.219294   70962 cni.go:84] Creating CNI manager for ""
	I0401 19:31:35.219309   70962 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:35.221080   70962 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:31:31.828694   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:34.325740   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:35.222800   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:31:35.238787   70962 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0401 19:31:35.286002   70962 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:31:35.302379   70962 system_pods.go:59] 8 kube-system pods found
	I0401 19:31:35.302420   70962 system_pods.go:61] "coredns-76f75df574-tdwrh" [c1d3b591-fa81-46dd-847c-ffdfc22937fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:31:35.302437   70962 system_pods.go:61] "etcd-default-k8s-diff-port-734648" [e977793d-ec92-40b8-a0fe-1b2400fb1af6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0401 19:31:35.302447   70962 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-734648" [2d0eae31-35c3-40aa-9d28-a2f51849c15d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0401 19:31:35.302469   70962 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-734648" [cded1171-2e1b-4d70-9f26-d1d3a6558da1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0401 19:31:35.302483   70962 system_pods.go:61] "kube-proxy-mn546" [f9b6366f-7095-418c-ba24-529c0555f438] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:31:35.302493   70962 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-734648" [c1518ece-8cbf-49fe-9091-15b38dc1bd62] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0401 19:31:35.302504   70962 system_pods.go:61] "metrics-server-57f55c9bc5-g7mg2" [d1ede79a-a7e6-42bd-a799-197ffc7c7939] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:31:35.302519   70962 system_pods.go:61] "storage-provisioner" [bd55f9c8-580c-4eb1-adbc-020d5bbedce9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:31:35.302532   70962 system_pods.go:74] duration metric: took 16.508651ms to wait for pod list to return data ...
	I0401 19:31:35.302545   70962 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:31:35.305826   70962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:31:35.305862   70962 node_conditions.go:123] node cpu capacity is 2
	I0401 19:31:35.305876   70962 node_conditions.go:105] duration metric: took 3.322577ms to run NodePressure ...
	I0401 19:31:35.305895   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:35.603225   70962 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0401 19:31:35.608584   70962 kubeadm.go:733] kubelet initialised
	I0401 19:31:35.608611   70962 kubeadm.go:734] duration metric: took 5.361549ms waiting for restarted kubelet to initialise ...
	I0401 19:31:35.608620   70962 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:31:35.615252   70962 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-tdwrh" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.620605   70962 pod_ready.go:97] node "default-k8s-diff-port-734648" hosting pod "coredns-76f75df574-tdwrh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.620627   70962 pod_ready.go:81] duration metric: took 5.353257ms for pod "coredns-76f75df574-tdwrh" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:35.620634   70962 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-734648" hosting pod "coredns-76f75df574-tdwrh" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.620641   70962 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.625280   70962 pod_ready.go:97] node "default-k8s-diff-port-734648" hosting pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.625297   70962 pod_ready.go:81] duration metric: took 4.646748ms for pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:35.625311   70962 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-734648" hosting pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.625325   70962 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.630150   70962 pod_ready.go:97] node "default-k8s-diff-port-734648" hosting pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.630170   70962 pod_ready.go:81] duration metric: took 4.83409ms for pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:35.630178   70962 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-734648" hosting pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.630184   70962 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.693865   70962 pod_ready.go:97] node "default-k8s-diff-port-734648" hosting pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.693890   70962 pod_ready.go:81] duration metric: took 63.697397ms for pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	E0401 19:31:35.693901   70962 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-734648" hosting pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-734648" has status "Ready":"False"
	I0401 19:31:35.693908   70962 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mn546" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:36.090904   70962 pod_ready.go:92] pod "kube-proxy-mn546" in "kube-system" namespace has status "Ready":"True"
	I0401 19:31:36.090928   70962 pod_ready.go:81] duration metric: took 397.013717ms for pod "kube-proxy-mn546" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:36.090938   70962 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:35.929498   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:35.930010   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | unable to find current IP address of domain old-k8s-version-163608 in network mk-old-k8s-version-163608
	I0401 19:31:35.930042   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | I0401 19:31:35.929963   71902 retry.go:31] will retry after 3.806123644s: waiting for machine to come up
	I0401 19:31:41.203538   70284 start.go:364] duration metric: took 56.718693538s to acquireMachinesLock for "no-preload-472858"
	I0401 19:31:41.203592   70284 start.go:96] Skipping create...Using existing machine configuration
	I0401 19:31:41.203607   70284 fix.go:54] fixHost starting: 
	I0401 19:31:41.204096   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:31:41.204143   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:31:41.221574   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42471
	I0401 19:31:41.222045   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:31:41.222527   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:31:41.222547   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:31:41.222856   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:31:41.223051   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:31:41.223209   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:31:41.224801   70284 fix.go:112] recreateIfNeeded on no-preload-472858: state=Stopped err=<nil>
	I0401 19:31:41.224827   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	W0401 19:31:41.224979   70284 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 19:31:41.226937   70284 out.go:177] * Restarting existing kvm2 VM for "no-preload-472858" ...
	I0401 19:31:36.824790   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:38.824976   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:40.827269   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:41.228315   70284 main.go:141] libmachine: (no-preload-472858) Calling .Start
	I0401 19:31:41.228509   70284 main.go:141] libmachine: (no-preload-472858) Ensuring networks are active...
	I0401 19:31:41.229206   70284 main.go:141] libmachine: (no-preload-472858) Ensuring network default is active
	I0401 19:31:41.229603   70284 main.go:141] libmachine: (no-preload-472858) Ensuring network mk-no-preload-472858 is active
	I0401 19:31:41.229999   70284 main.go:141] libmachine: (no-preload-472858) Getting domain xml...
	I0401 19:31:41.230682   70284 main.go:141] libmachine: (no-preload-472858) Creating domain...
	I0401 19:31:38.097417   70962 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:40.098187   70962 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:42.099891   70962 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:39.739700   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.740313   71168 main.go:141] libmachine: (old-k8s-version-163608) Found IP for machine: 192.168.50.106
	I0401 19:31:39.740369   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has current primary IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.740386   71168 main.go:141] libmachine: (old-k8s-version-163608) Reserving static IP address...
	I0401 19:31:39.740767   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "old-k8s-version-163608", mac: "52:54:00:fe:1b:e7", ip: "192.168.50.106"} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.740798   71168 main.go:141] libmachine: (old-k8s-version-163608) Reserved static IP address: 192.168.50.106
	I0401 19:31:39.740818   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | skip adding static IP to network mk-old-k8s-version-163608 - found existing host DHCP lease matching {name: "old-k8s-version-163608", mac: "52:54:00:fe:1b:e7", ip: "192.168.50.106"}
	I0401 19:31:39.740839   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | Getting to WaitForSSH function...
	I0401 19:31:39.740857   71168 main.go:141] libmachine: (old-k8s-version-163608) Waiting for SSH to be available...
	I0401 19:31:39.743023   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.743417   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.743447   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.743589   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | Using SSH client type: external
	I0401 19:31:39.743614   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa (-rw-------)
	I0401 19:31:39.743648   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:31:39.743662   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | About to run SSH command:
	I0401 19:31:39.743676   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | exit 0
	I0401 19:31:39.877699   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | SSH cmd err, output: <nil>: 
	I0401 19:31:39.878044   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetConfigRaw
	I0401 19:31:39.878611   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:39.880733   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.881074   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.881107   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.881352   71168 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/config.json ...
	I0401 19:31:39.881510   71168 machine.go:94] provisionDockerMachine start ...
	I0401 19:31:39.881529   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:39.881766   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:39.883980   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.884318   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.884360   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.884483   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:39.884675   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.884877   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.885029   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:39.885175   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:39.885339   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:39.885349   71168 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:31:39.994935   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:31:39.994971   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:31:39.995213   71168 buildroot.go:166] provisioning hostname "old-k8s-version-163608"
	I0401 19:31:39.995241   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:31:39.995472   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:39.998179   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.998490   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:39.998525   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:39.998656   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:39.998805   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.998949   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:39.999054   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:39.999183   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:39.999372   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:39.999390   71168 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-163608 && echo "old-k8s-version-163608" | sudo tee /etc/hostname
	I0401 19:31:40.128852   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-163608
	
	I0401 19:31:40.128880   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.131508   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.131817   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.131874   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.131987   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.132188   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.132365   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.132503   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.132693   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:40.132890   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:40.132908   71168 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-163608' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-163608/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-163608' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:31:40.252693   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:31:40.252727   71168 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:31:40.252749   71168 buildroot.go:174] setting up certificates
	I0401 19:31:40.252759   71168 provision.go:84] configureAuth start
	I0401 19:31:40.252767   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetMachineName
	I0401 19:31:40.253030   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:40.255827   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.256183   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.256210   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.256418   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.259041   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.259388   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.259418   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.259540   71168 provision.go:143] copyHostCerts
	I0401 19:31:40.259592   71168 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:31:40.259602   71168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:31:40.259654   71168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:31:40.259745   71168 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:31:40.259754   71168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:31:40.259773   71168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:31:40.259822   71168 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:31:40.259830   71168 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:31:40.259846   71168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:31:40.259891   71168 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-163608 san=[127.0.0.1 192.168.50.106 localhost minikube old-k8s-version-163608]
	I0401 19:31:40.465177   71168 provision.go:177] copyRemoteCerts
	I0401 19:31:40.465241   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:31:40.465265   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.467676   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.468040   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.468070   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.468272   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.468456   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.468622   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.468767   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:40.557764   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:31:40.585326   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0401 19:31:40.611671   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 19:31:40.639265   71168 provision.go:87] duration metric: took 386.497023ms to configureAuth
	I0401 19:31:40.639296   71168 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:31:40.639521   71168 config.go:182] Loaded profile config "old-k8s-version-163608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 19:31:40.639590   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.642321   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.642733   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.642762   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.642921   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.643122   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.643294   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.643442   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.643647   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:40.643802   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:40.643819   71168 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:31:40.940619   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:31:40.940647   71168 machine.go:97] duration metric: took 1.059122816s to provisionDockerMachine
	I0401 19:31:40.940661   71168 start.go:293] postStartSetup for "old-k8s-version-163608" (driver="kvm2")
	I0401 19:31:40.940672   71168 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:31:40.940687   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:40.940955   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:31:40.940981   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:40.943787   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.944159   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:40.944197   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:40.944347   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:40.944556   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:40.944700   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:40.944834   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:41.035824   71168 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:31:41.040975   71168 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:31:41.041007   71168 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:31:41.041085   71168 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:31:41.041165   71168 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:31:41.041255   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:31:41.052356   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:41.080699   71168 start.go:296] duration metric: took 140.024653ms for postStartSetup
	I0401 19:31:41.080737   71168 fix.go:56] duration metric: took 20.729726297s for fixHost
	I0401 19:31:41.080759   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:41.083664   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.084045   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.084075   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.084202   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:41.084405   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.084599   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.084796   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:41.084971   71168 main.go:141] libmachine: Using SSH client type: native
	I0401 19:31:41.085169   71168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.106 22 <nil> <nil>}
	I0401 19:31:41.085180   71168 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 19:31:41.203392   71168 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999901.182365994
	
	I0401 19:31:41.203412   71168 fix.go:216] guest clock: 1711999901.182365994
	I0401 19:31:41.203419   71168 fix.go:229] Guest: 2024-04-01 19:31:41.182365994 +0000 UTC Remote: 2024-04-01 19:31:41.080741553 +0000 UTC m=+228.159955492 (delta=101.624441ms)
	I0401 19:31:41.203437   71168 fix.go:200] guest clock delta is within tolerance: 101.624441ms
	I0401 19:31:41.203442   71168 start.go:83] releasing machines lock for "old-k8s-version-163608", held for 20.852486097s
	I0401 19:31:41.203462   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.203744   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:41.206582   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.206952   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.206973   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.207151   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.207701   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.207891   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .DriverName
	I0401 19:31:41.207954   71168 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:31:41.207996   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:41.208096   71168 ssh_runner.go:195] Run: cat /version.json
	I0401 19:31:41.208127   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHHostname
	I0401 19:31:41.210731   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.210928   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.211107   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.211132   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.211317   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:41.211446   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:41.211488   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:41.211491   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.211636   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:41.211692   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHPort
	I0401 19:31:41.211783   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:41.211891   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHKeyPath
	I0401 19:31:41.212031   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetSSHUsername
	I0401 19:31:41.212187   71168 sshutil.go:53] new ssh client: &{IP:192.168.50.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/old-k8s-version-163608/id_rsa Username:docker}
	I0401 19:31:41.296330   71168 ssh_runner.go:195] Run: systemctl --version
	I0401 19:31:41.326247   71168 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:31:41.479411   71168 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:31:41.486996   71168 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:31:41.487063   71168 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:31:41.507840   71168 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:31:41.507870   71168 start.go:494] detecting cgroup driver to use...
	I0401 19:31:41.507942   71168 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:31:41.533063   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:31:41.551699   71168 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:31:41.551754   71168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:31:41.568078   71168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:31:41.584278   71168 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:31:41.726884   71168 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:31:41.882514   71168 docker.go:233] disabling docker service ...
	I0401 19:31:41.882587   71168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:31:41.901235   71168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:31:41.919787   71168 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:31:42.082420   71168 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:31:42.248527   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:31:42.266610   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:31:42.295677   71168 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0401 19:31:42.295740   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.313855   71168 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:31:42.313920   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.327176   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.339527   71168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:31:42.351220   71168 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:31:42.363716   71168 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:31:42.379911   71168 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:31:42.379971   71168 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:31:42.395282   71168 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 19:31:42.407713   71168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:42.579648   71168 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:31:42.764748   71168 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:31:42.764858   71168 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:31:42.771038   71168 start.go:562] Will wait 60s for crictl version
	I0401 19:31:42.771125   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:42.775871   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:31:42.823135   71168 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:31:42.823218   71168 ssh_runner.go:195] Run: crio --version
	I0401 19:31:42.863748   71168 ssh_runner.go:195] Run: crio --version
	I0401 19:31:42.900263   71168 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0401 19:31:42.901631   71168 main.go:141] libmachine: (old-k8s-version-163608) Calling .GetIP
	I0401 19:31:42.904464   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:42.904773   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:1b:e7", ip: ""} in network mk-old-k8s-version-163608: {Iface:virbr2 ExpiryTime:2024-04-01 20:31:33 +0000 UTC Type:0 Mac:52:54:00:fe:1b:e7 Iaid: IPaddr:192.168.50.106 Prefix:24 Hostname:old-k8s-version-163608 Clientid:01:52:54:00:fe:1b:e7}
	I0401 19:31:42.904812   71168 main.go:141] libmachine: (old-k8s-version-163608) DBG | domain old-k8s-version-163608 has defined IP address 192.168.50.106 and MAC address 52:54:00:fe:1b:e7 in network mk-old-k8s-version-163608
	I0401 19:31:42.905048   71168 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0401 19:31:42.910117   71168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:42.925313   71168 kubeadm.go:877] updating cluster {Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:31:42.925475   71168 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 19:31:42.925542   71168 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:42.828772   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:44.829527   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:42.553437   70284 main.go:141] libmachine: (no-preload-472858) Waiting to get IP...
	I0401 19:31:42.554422   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:42.554810   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:42.554907   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:42.554806   72041 retry.go:31] will retry after 237.823736ms: waiting for machine to come up
	I0401 19:31:42.794546   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:42.795159   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:42.795205   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:42.795117   72041 retry.go:31] will retry after 326.387674ms: waiting for machine to come up
	I0401 19:31:43.123632   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:43.124306   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:43.124342   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:43.124244   72041 retry.go:31] will retry after 455.262949ms: waiting for machine to come up
	I0401 19:31:43.580752   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:43.581420   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:43.581440   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:43.581375   72041 retry.go:31] will retry after 520.307316ms: waiting for machine to come up
	I0401 19:31:44.103924   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:44.104407   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:44.104431   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:44.104361   72041 retry.go:31] will retry after 491.638031ms: waiting for machine to come up
	I0401 19:31:44.598440   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:44.598990   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:44.599015   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:44.598901   72041 retry.go:31] will retry after 652.234963ms: waiting for machine to come up
	I0401 19:31:45.252362   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:45.252901   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:45.252933   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:45.252853   72041 retry.go:31] will retry after 1.047335678s: waiting for machine to come up
	I0401 19:31:46.301894   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:46.302324   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:46.302349   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:46.302281   72041 retry.go:31] will retry after 1.303326069s: waiting for machine to come up
	I0401 19:31:44.101042   70962 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:46.099803   70962 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:31:46.099828   70962 pod_ready.go:81] duration metric: took 10.008882274s for pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:46.099843   70962 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace to be "Ready" ...
	I0401 19:31:42.974220   71168 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 19:31:42.974307   71168 ssh_runner.go:195] Run: which lz4
	I0401 19:31:42.979179   71168 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0401 19:31:42.984204   71168 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 19:31:42.984236   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0401 19:31:45.108131   71168 crio.go:462] duration metric: took 2.128988098s to copy over tarball
	I0401 19:31:45.108232   71168 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 19:31:47.328534   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:49.827306   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:47.606907   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:47.607392   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:47.607419   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:47.607356   72041 retry.go:31] will retry after 1.729010443s: waiting for machine to come up
	I0401 19:31:49.338200   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:49.338722   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:49.338751   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:49.338667   72041 retry.go:31] will retry after 2.069036941s: waiting for machine to come up
	I0401 19:31:51.409458   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:51.409945   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:51.409976   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:51.409894   72041 retry.go:31] will retry after 2.405834741s: waiting for machine to come up
	I0401 19:31:48.108234   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:50.607720   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:48.581824   71168 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.473552916s)
	I0401 19:31:48.581871   71168 crio.go:469] duration metric: took 3.473700991s to extract the tarball
	I0401 19:31:48.581881   71168 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 19:31:48.630609   71168 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:31:48.673027   71168 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 19:31:48.673048   71168 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 19:31:48.673085   71168 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:31:48.673129   71168 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:48.673155   71168 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:48.673190   71168 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:48.673133   71168 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.673273   71168 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0401 19:31:48.673143   71168 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0401 19:31:48.673336   71168 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:48.675068   71168 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:31:48.675073   71168 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:48.675068   71168 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:48.675093   71168 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0401 19:31:48.675072   71168 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0401 19:31:48.675073   71168 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:48.675115   71168 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:48.675096   71168 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.827947   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.846025   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:48.848769   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:48.858366   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0401 19:31:48.858613   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0401 19:31:48.859241   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:48.862047   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:48.912299   71168 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0401 19:31:48.912346   71168 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:48.912399   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.030117   71168 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0401 19:31:49.030357   71168 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:49.030122   71168 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0401 19:31:49.030433   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.030460   71168 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:49.030526   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.062211   71168 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0401 19:31:49.062327   71168 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0401 19:31:49.062234   71168 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0401 19:31:49.062415   71168 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0401 19:31:49.062396   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.062461   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.078249   71168 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0401 19:31:49.078308   71168 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:49.078323   71168 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0401 19:31:49.078358   71168 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:49.078379   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 19:31:49.078398   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.078426   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 19:31:49.078440   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 19:31:49.078362   71168 ssh_runner.go:195] Run: which crictl
	I0401 19:31:49.078466   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 19:31:49.078494   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 19:31:49.225060   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 19:31:49.225137   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0401 19:31:49.225160   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0401 19:31:49.225199   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0401 19:31:49.225250   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0401 19:31:49.225252   71168 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 19:31:49.225326   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0401 19:31:49.280782   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0401 19:31:49.281709   71168 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0401 19:31:49.299218   71168 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:31:49.465497   71168 cache_images.go:92] duration metric: took 792.432136ms to LoadCachedImages
	W0401 19:31:49.465595   71168 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0401 19:31:49.465613   71168 kubeadm.go:928] updating node { 192.168.50.106 8443 v1.20.0 crio true true} ...
	I0401 19:31:49.465768   71168 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-163608 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:31:49.465862   71168 ssh_runner.go:195] Run: crio config
	I0401 19:31:49.529730   71168 cni.go:84] Creating CNI manager for ""
	I0401 19:31:49.529757   71168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:31:49.529771   71168 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:31:49.529799   71168 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.106 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-163608 NodeName:old-k8s-version-163608 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0401 19:31:49.529969   71168 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-163608"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 19:31:49.530037   71168 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0401 19:31:49.542642   71168 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:31:49.542724   71168 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:31:49.557001   71168 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0401 19:31:49.579568   71168 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:31:49.599692   71168 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0401 19:31:49.619780   71168 ssh_runner.go:195] Run: grep 192.168.50.106	control-plane.minikube.internal$ /etc/hosts
	I0401 19:31:49.625597   71168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:31:49.643862   71168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:31:49.791391   71168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:31:49.814470   71168 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608 for IP: 192.168.50.106
	I0401 19:31:49.814497   71168 certs.go:194] generating shared ca certs ...
	I0401 19:31:49.814516   71168 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:49.814680   71168 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:31:49.814736   71168 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:31:49.814745   71168 certs.go:256] generating profile certs ...
	I0401 19:31:49.814852   71168 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/client.key
	I0401 19:31:49.814916   71168 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.key.f2de0982
	I0401 19:31:49.814964   71168 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.key
	I0401 19:31:49.815119   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:31:49.815178   71168 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:31:49.815195   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:31:49.815224   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:31:49.815266   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:31:49.815299   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:31:49.815362   71168 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:31:49.816196   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:31:49.866842   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:31:49.913788   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:31:49.953223   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:31:50.004313   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 19:31:50.046972   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:31:50.086990   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:31:50.134907   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/old-k8s-version-163608/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 19:31:50.163395   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:31:50.191901   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:31:50.221196   71168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:31:50.253024   71168 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:31:50.275781   71168 ssh_runner.go:195] Run: openssl version
	I0401 19:31:50.282795   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:31:50.296952   71168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:31:50.303868   71168 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:31:50.303950   71168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:31:50.312249   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:31:50.328985   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:31:50.345917   71168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:50.352041   71168 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:50.352103   71168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:31:50.358752   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:31:50.371702   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:31:50.384633   71168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:31:50.391229   71168 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:31:50.391277   71168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:31:50.397980   71168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:31:50.412674   71168 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:31:50.418084   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:31:50.425102   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:31:50.431949   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:31:50.438665   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:31:50.446633   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:31:50.454688   71168 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 19:31:50.462805   71168 kubeadm.go:391] StartCluster: {Name:old-k8s-version-163608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-163608 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.106 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:31:50.462922   71168 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:31:50.462956   71168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:50.505702   71168 cri.go:89] found id: ""
	I0401 19:31:50.505788   71168 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:31:50.517916   71168 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:31:50.517934   71168 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:31:50.517940   71168 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:31:50.517995   71168 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:31:50.529459   71168 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:31:50.530408   71168 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-163608" does not appear in /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:31:50.531055   71168 kubeconfig.go:62] /home/jenkins/minikube-integration/18233-10493/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-163608" cluster setting kubeconfig missing "old-k8s-version-163608" context setting]
	I0401 19:31:50.532369   71168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:31:50.534578   71168 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:31:50.546275   71168 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.106
	I0401 19:31:50.546309   71168 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:31:50.546328   71168 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:31:50.546371   71168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:31:50.588826   71168 cri.go:89] found id: ""
	I0401 19:31:50.588881   71168 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:31:50.610933   71168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:31:50.622201   71168 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:31:50.622221   71168 kubeadm.go:156] found existing configuration files:
	
	I0401 19:31:50.622266   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:31:50.634006   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:31:50.634071   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:31:50.647891   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:31:50.662548   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:31:50.662596   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:31:50.674627   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:31:50.686739   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:31:50.686825   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:31:50.700400   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:31:50.712952   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:31:50.713014   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:31:50.725616   71168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:31:50.739130   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:50.874552   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:51.568640   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:51.850288   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:52.009607   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:31:52.122887   71168 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:31:52.122962   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:52.623084   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:51.827968   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:54.325686   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:56.325892   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:53.817748   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:53.818158   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:53.818184   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:53.818122   72041 retry.go:31] will retry after 2.747390243s: waiting for machine to come up
	I0401 19:31:56.567288   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:31:56.567711   70284 main.go:141] libmachine: (no-preload-472858) DBG | unable to find current IP address of domain no-preload-472858 in network mk-no-preload-472858
	I0401 19:31:56.567742   70284 main.go:141] libmachine: (no-preload-472858) DBG | I0401 19:31:56.567657   72041 retry.go:31] will retry after 3.904473051s: waiting for machine to come up
	I0401 19:31:53.107786   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:55.108974   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:53.123783   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:53.623248   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:54.124004   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:54.623873   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:55.123458   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:55.623923   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:56.123441   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:56.623192   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:57.123012   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:57.624010   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:58.325934   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:00.825343   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:00.476692   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.477192   70284 main.go:141] libmachine: (no-preload-472858) Found IP for machine: 192.168.72.119
	I0401 19:32:00.477217   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has current primary IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.477223   70284 main.go:141] libmachine: (no-preload-472858) Reserving static IP address...
	I0401 19:32:00.477672   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "no-preload-472858", mac: "52:54:00:0a:2e:03", ip: "192.168.72.119"} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.477708   70284 main.go:141] libmachine: (no-preload-472858) DBG | skip adding static IP to network mk-no-preload-472858 - found existing host DHCP lease matching {name: "no-preload-472858", mac: "52:54:00:0a:2e:03", ip: "192.168.72.119"}
	I0401 19:32:00.477726   70284 main.go:141] libmachine: (no-preload-472858) Reserved static IP address: 192.168.72.119
	I0401 19:32:00.477742   70284 main.go:141] libmachine: (no-preload-472858) Waiting for SSH to be available...
	I0401 19:32:00.477770   70284 main.go:141] libmachine: (no-preload-472858) DBG | Getting to WaitForSSH function...
	I0401 19:32:00.479949   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.480306   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.480334   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.480475   70284 main.go:141] libmachine: (no-preload-472858) DBG | Using SSH client type: external
	I0401 19:32:00.480508   70284 main.go:141] libmachine: (no-preload-472858) DBG | Using SSH private key: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa (-rw-------)
	I0401 19:32:00.480538   70284 main.go:141] libmachine: (no-preload-472858) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0401 19:32:00.480554   70284 main.go:141] libmachine: (no-preload-472858) DBG | About to run SSH command:
	I0401 19:32:00.480566   70284 main.go:141] libmachine: (no-preload-472858) DBG | exit 0
	I0401 19:32:00.610108   70284 main.go:141] libmachine: (no-preload-472858) DBG | SSH cmd err, output: <nil>: 
	I0401 19:32:00.610458   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetConfigRaw
	I0401 19:32:00.611059   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetIP
	I0401 19:32:00.613496   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.613872   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.613906   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.614179   70284 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/config.json ...
	I0401 19:32:00.614363   70284 machine.go:94] provisionDockerMachine start ...
	I0401 19:32:00.614382   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:00.614593   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:00.617019   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.617404   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.617430   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.617585   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:00.617780   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.617953   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.618098   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:00.618260   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:00.618451   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:00.618462   70284 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:32:00.730438   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0401 19:32:00.730473   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:32:00.730725   70284 buildroot.go:166] provisioning hostname "no-preload-472858"
	I0401 19:32:00.730754   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:32:00.730994   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:00.733932   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.734274   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.734308   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.734419   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:00.734591   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.734752   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.734918   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:00.735092   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:00.735296   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:00.735313   70284 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-472858 && echo "no-preload-472858" | sudo tee /etc/hostname
	I0401 19:32:00.865664   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-472858
	
	I0401 19:32:00.865702   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:00.868247   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.868619   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.868649   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.868845   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:00.869037   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.869244   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:00.869420   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:00.869671   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:00.869840   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:00.869859   70284 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-472858' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-472858/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-472858' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:32:00.991430   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
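Note: the SSH command shown above is the provisioner's idempotent /etc/hosts fixup: it only touches the file when no entry already ends in the new hostname, and it rewrites an existing 127.0.1.1 line rather than appending a duplicate. A minimal standalone sketch of the same pattern, assuming the GNU grep/sed available in the buildroot guest (the NEW_HOSTNAME variable is illustrative, not part of the log):

  #!/bin/bash
  # Hypothetical re-creation of the hostname fixup minikube ran above.
  NEW_HOSTNAME=no-preload-472858
  if ! grep -q "[[:space:]]${NEW_HOSTNAME}\$" /etc/hosts; then      # nothing maps to the new name yet
    if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then          # reuse the existing loopback alias line
      sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NEW_HOSTNAME}/" /etc/hosts
    else                                                            # otherwise append a fresh entry
      echo "127.0.1.1 ${NEW_HOSTNAME}" | sudo tee -a /etc/hosts
    fi
  fi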
	I0401 19:32:00.991460   70284 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18233-10493/.minikube CaCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18233-10493/.minikube}
	I0401 19:32:00.991484   70284 buildroot.go:174] setting up certificates
	I0401 19:32:00.991493   70284 provision.go:84] configureAuth start
	I0401 19:32:00.991504   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetMachineName
	I0401 19:32:00.991748   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetIP
	I0401 19:32:00.994239   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.994566   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.994596   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.994722   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:00.996735   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.997064   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:00.997090   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:00.997212   70284 provision.go:143] copyHostCerts
	I0401 19:32:00.997265   70284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem, removing ...
	I0401 19:32:00.997281   70284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem
	I0401 19:32:00.997346   70284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/ca.pem (1082 bytes)
	I0401 19:32:00.997493   70284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem, removing ...
	I0401 19:32:00.997507   70284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem
	I0401 19:32:00.997533   70284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/cert.pem (1123 bytes)
	I0401 19:32:00.997619   70284 exec_runner.go:144] found /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem, removing ...
	I0401 19:32:00.997629   70284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem
	I0401 19:32:00.997667   70284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18233-10493/.minikube/key.pem (1679 bytes)
	I0401 19:32:00.997733   70284 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem org=jenkins.no-preload-472858 san=[127.0.0.1 192.168.72.119 localhost minikube no-preload-472858]
	I0401 19:32:01.212397   70284 provision.go:177] copyRemoteCerts
	I0401 19:32:01.212453   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:32:01.212473   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.214810   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.215170   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.215198   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.215398   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.215603   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.215761   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.215903   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:32:01.303113   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 19:32:01.331807   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 19:32:01.358429   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0401 19:32:01.384521   70284 provision.go:87] duration metric: took 393.005717ms to configureAuth
	I0401 19:32:01.384559   70284 buildroot.go:189] setting minikube options for container-runtime
	I0401 19:32:01.384748   70284 config.go:182] Loaded profile config "no-preload-472858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0401 19:32:01.384862   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.387446   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.387828   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.387866   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.387966   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.388168   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.388356   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.388509   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.388663   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:01.388847   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:01.388867   70284 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:32:01.692586   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 19:32:01.692615   70284 machine.go:97] duration metric: took 1.078237975s to provisionDockerMachine
	I0401 19:32:01.692628   70284 start.go:293] postStartSetup for "no-preload-472858" (driver="kvm2")
	I0401 19:32:01.692644   70284 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:32:01.692668   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.692988   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:32:01.693012   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.696033   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.696405   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.696450   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.696603   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.696763   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.696901   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.697089   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:32:01.786626   70284 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:32:01.791703   70284 info.go:137] Remote host: Buildroot 2023.02.9
	I0401 19:32:01.791726   70284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/addons for local assets ...
	I0401 19:32:01.791802   70284 filesync.go:126] Scanning /home/jenkins/minikube-integration/18233-10493/.minikube/files for local assets ...
	I0401 19:32:01.791901   70284 filesync.go:149] local asset: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem -> 177512.pem in /etc/ssl/certs
	I0401 19:32:01.791991   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 19:32:01.803733   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:32:01.831768   70284 start.go:296] duration metric: took 139.126077ms for postStartSetup
	I0401 19:32:01.831804   70284 fix.go:56] duration metric: took 20.628199635s for fixHost
	I0401 19:32:01.831823   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.834218   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.834548   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.834574   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.834725   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.834901   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.835066   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.835188   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.835327   70284 main.go:141] libmachine: Using SSH client type: native
	I0401 19:32:01.835544   70284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.119 22 <nil> <nil>}
	I0401 19:32:01.835558   70284 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0401 19:31:57.607923   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:59.608857   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:02.106942   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:31:58.123200   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:58.624028   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:59.123026   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:31:59.623993   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:00.123039   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:00.623632   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:01.123204   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:01.623162   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:02.123264   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:02.623788   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:01.947198   70284 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711999921.892647753
	
	I0401 19:32:01.947267   70284 fix.go:216] guest clock: 1711999921.892647753
	I0401 19:32:01.947279   70284 fix.go:229] Guest: 2024-04-01 19:32:01.892647753 +0000 UTC Remote: 2024-04-01 19:32:01.831808507 +0000 UTC m=+359.938807685 (delta=60.839246ms)
	I0401 19:32:01.947305   70284 fix.go:200] guest clock delta is within tolerance: 60.839246ms
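Note: the command logged at 19:32:01.835 as "date +%!s(MISSING).%!N(MISSING)" is the format string mangled by the logger; judging by the "1711999921.892647753" output, what runs on the guest is presumably date +%s.%N, which fix.go parses as the guest clock and compares against the host clock, accepting the ~60ms delta seen above. A rough manual equivalent of that comparison (key path, user, IP, and the use of bc are illustrative):

  # Compare guest and host clocks roughly the way fix.go does, over SSH.
  guest=$(ssh -i ~/.minikube/machines/no-preload-472858/id_rsa docker@192.168.72.119 'date +%s.%N')
  host=$(date +%s.%N)
  # Delta in seconds; minikube only resynchronizes when this exceeds its tolerance.
  echo "delta: $(echo "$host - $guest" | bc) s"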
	I0401 19:32:01.947317   70284 start.go:83] releasing machines lock for "no-preload-472858", held for 20.743748352s
	I0401 19:32:01.947347   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.947621   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetIP
	I0401 19:32:01.950387   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.950719   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.950750   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.950940   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.951438   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.951631   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:32:01.951681   70284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:32:01.951737   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.951854   70284 ssh_runner.go:195] Run: cat /version.json
	I0401 19:32:01.951881   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:32:01.954468   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.954603   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.954780   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.954815   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.954932   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:01.954960   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:01.954984   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.955193   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.955230   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:32:01.955341   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.955388   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:32:01.955510   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:32:01.955501   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:32:01.955670   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:32:02.035332   70284 ssh_runner.go:195] Run: systemctl --version
	I0401 19:32:02.061178   70284 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:32:02.220309   70284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0401 19:32:02.227811   70284 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0401 19:32:02.227885   70284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:32:02.247605   70284 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0401 19:32:02.247634   70284 start.go:494] detecting cgroup driver to use...
	I0401 19:32:02.247690   70284 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:32:02.265463   70284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:32:02.280175   70284 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:32:02.280246   70284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:32:02.295003   70284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:32:02.315072   70284 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:32:02.449108   70284 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:32:02.627772   70284 docker.go:233] disabling docker service ...
	I0401 19:32:02.627850   70284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:32:02.642924   70284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:32:02.657038   70284 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:32:02.787085   70284 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:32:02.918355   70284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 19:32:02.934828   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:32:02.955495   70284 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0401 19:32:02.955548   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:02.966690   70284 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:32:02.966754   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:02.977812   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:02.989329   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:03.000727   70284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:32:03.012341   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:03.023305   70284 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:32:03.044213   70284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
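Note: the run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the expected pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and allows unprivileged low ports. A quick way to confirm the net effect inside the guest; the expected values are reconstructed from the commands above, not captured from the VM:

  # Verify the effect of the sed edits on the CRI-O drop-in (run inside the guest).
  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
  # Expected (approximately):
  #   pause_image = "registry.k8s.io/pause:3.9"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",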
	I0401 19:32:03.055614   70284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:32:03.065880   70284 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:32:03.065927   70284 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:32:03.080514   70284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
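Note: the sysctl probe above exits with status 255 simply because the net.bridge.* keys do not exist until the br_netfilter module is loaded, which is why the very next steps are modprobe br_netfilter and enabling IPv4 forwarding. The same check-then-fix sequence, condensed:

  # Reproduce the bridge-netfilter check/fix sequence from the log (run inside the guest).
  sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter   # key only exists once the module is loaded
  sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'                            # pod networking needs forwarding enabled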
	I0401 19:32:03.090798   70284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:32:03.224199   70284 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:32:03.389414   70284 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:32:03.389482   70284 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:32:03.395493   70284 start.go:562] Will wait 60s for crictl version
	I0401 19:32:03.395539   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.399739   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:32:03.441020   70284 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0401 19:32:03.441114   70284 ssh_runner.go:195] Run: crio --version
	I0401 19:32:03.474572   70284 ssh_runner.go:195] Run: crio --version
	I0401 19:32:03.511681   70284 out.go:177] * Preparing Kubernetes v1.30.0-rc.0 on CRI-O 1.29.1 ...
	I0401 19:32:02.825628   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:04.825973   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:03.513067   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetIP
	I0401 19:32:03.515901   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:03.516281   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:32:03.516315   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:32:03.516523   70284 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0401 19:32:03.521197   70284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:32:03.536333   70284 kubeadm.go:877] updating cluster {Name:no-preload-472858 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-472858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:32:03.536459   70284 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.0 and runtime crio
	I0401 19:32:03.536507   70284 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:32:03.582858   70284 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.0". assuming images are not preloaded.
	I0401 19:32:03.582887   70284 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.0 registry.k8s.io/kube-controller-manager:v1.30.0-rc.0 registry.k8s.io/kube-scheduler:v1.30.0-rc.0 registry.k8s.io/kube-proxy:v1.30.0-rc.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 19:32:03.582970   70284 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:03.583026   70284 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:03.583032   70284 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.583071   70284 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.583161   70284 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.582997   70284 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:03.583238   70284 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0401 19:32:03.583388   70284 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:03.584618   70284 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.584626   70284 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:03.584630   70284 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:03.584619   70284 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:03.584640   70284 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0401 19:32:03.584626   70284 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:03.584701   70284 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.584856   70284 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.730086   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.752217   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.765621   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.766526   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:03.770748   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0401 19:32:03.777614   70284 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0401 19:32:03.777672   70284 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.777699   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.840814   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:03.852416   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:03.869889   70284 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.0" does not exist at hash "e840fbdc464ca4dc4404dc42a6cd48601001cbf15f11fbfafe6980127b2da4b3" in container runtime
	I0401 19:32:03.869929   70284 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.869979   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.874654   70284 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.0" does not exist at hash "ff908ab55cece12bd6dc022580f7f3d1f6c3fe296c80225f4f4327f5c000e99a" in container runtime
	I0401 19:32:03.874693   70284 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.874737   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.899207   70284 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:03.906139   70284 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.0" does not exist at hash "fcfa8f01023265988284655c0c6e073c44cce782e77560e76c44bcd480fd35f5" in container runtime
	I0401 19:32:03.906182   70284 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:03.906227   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.996916   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0401 19:32:03.996987   70284 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.0" does not exist at hash "33c8c4837aeafa60657bc3e64d4d4c75c99239311b8437b65ba9a95fb7db6652" in container runtime
	I0401 19:32:03.997022   70284 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0401 19:32:03.997045   70284 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:03.997053   70284 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:03.997054   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.0
	I0401 19:32:03.997089   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.997128   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.0
	I0401 19:32:03.997142   70284 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0401 19:32:03.997090   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.997164   70284 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:03.997194   70284 ssh_runner.go:195] Run: which crictl
	I0401 19:32:03.997211   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.0
	I0401 19:32:04.090272   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0401 19:32:04.090548   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0401 19:32:04.090639   70284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0401 19:32:04.102041   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.0
	I0401 19:32:04.102130   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0
	I0401 19:32:04.102168   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0
	I0401 19:32:04.102226   70284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0401 19:32:04.102241   70284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0401 19:32:04.102278   70284 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:32:04.108100   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0
	I0401 19:32:04.108192   70284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0401 19:32:04.182707   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0401 19:32:04.182747   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0401 19:32:04.182759   70284 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0401 19:32:04.182815   70284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0401 19:32:04.182820   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0401 19:32:04.182883   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0
	I0401 19:32:04.182988   70284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0401 19:32:04.186135   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0 (exists)
	I0401 19:32:04.186175   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0 (exists)
	I0401 19:32:04.186221   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0 (exists)
	I0401 19:32:04.186242   70284 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0401 19:32:04.186324   70284 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0401 19:32:06.352362   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.169442796s)
	I0401 19:32:06.352398   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0401 19:32:06.352419   70284 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0401 19:32:06.352416   70284 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: (2.16957379s)
	I0401 19:32:06.352443   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0401 19:32:06.352465   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0
	I0401 19:32:06.352465   70284 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0: (2.16945688s)
	I0401 19:32:06.352479   70284 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.166139431s)
	I0401 19:32:06.352490   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0401 19:32:06.352491   70284 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0 (exists)
	I0401 19:32:04.109989   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:06.294038   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:03.123452   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:03.623784   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:04.123649   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:04.623076   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:05.123822   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:05.623487   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:06.123635   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:06.623689   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:07.123919   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:07.623237   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:06.826244   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:09.326937   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:09.261547   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.0: (2.909056315s)
	I0401 19:32:09.261572   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.0 from cache
	I0401 19:32:09.261600   70284 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0401 19:32:09.261668   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0
	I0401 19:32:11.739636   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.0: (2.477945807s)
	I0401 19:32:11.739667   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.0 from cache
	I0401 19:32:11.739702   70284 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0401 19:32:11.739761   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0
	I0401 19:32:08.609901   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:11.114752   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:08.123689   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:08.623160   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:09.124002   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:09.623090   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:10.123049   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:10.623111   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:11.123042   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:11.623980   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:12.123074   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:12.623530   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:11.826409   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:13.828437   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:16.326097   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:13.195232   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.0: (1.455440816s)
	I0401 19:32:13.195267   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.0 from cache
	I0401 19:32:13.195299   70284 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0401 19:32:13.195350   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0401 19:32:13.607042   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:16.107993   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:13.123428   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:13.623899   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:14.123324   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:14.623889   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:15.123496   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:15.623779   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:16.124012   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:16.623620   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:17.123867   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:17.623014   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:18.326127   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:20.326575   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:17.202247   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.006869591s)
	I0401 19:32:17.202284   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0401 19:32:17.202315   70284 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0401 19:32:17.202364   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0401 19:32:17.962735   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0401 19:32:17.962785   70284 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0401 19:32:17.962850   70284 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0
	I0401 19:32:20.235136   70284 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.0: (2.272262595s)
	I0401 19:32:20.235161   70284 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18233-10493/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.0 from cache
	I0401 19:32:20.235193   70284 cache_images.go:123] Successfully loaded all cached images
	I0401 19:32:20.235197   70284 cache_images.go:92] duration metric: took 16.652290938s to LoadCachedImages
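Note: because no preload tarball exists for v1.30.0-rc.0 on crio, each required image is checked in the guest cache (the scp steps are skipped above since the tarballs already exist under /var/lib/minikube/images) and then imported one at a time with podman load, which is why LoadCachedImages takes ~16.7s. A condensed sketch of the per-image loop that ran, in the order seen above (illustrative, run inside the guest):

  # Condensed form of the image-load sequence from the log.
  for tar in coredns_v1.11.1 kube-apiserver_v1.30.0-rc.0 kube-controller-manager_v1.30.0-rc.0 \
             kube-scheduler_v1.30.0-rc.0 etcd_3.5.12-0 storage-provisioner_v5 kube-proxy_v1.30.0-rc.0; do
    sudo podman load -i "/var/lib/minikube/images/${tar}"
  done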
	I0401 19:32:20.235205   70284 kubeadm.go:928] updating node { 192.168.72.119 8443 v1.30.0-rc.0 crio true true} ...
	I0401 19:32:20.235332   70284 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-472858 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-472858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:32:20.235402   70284 ssh_runner.go:195] Run: crio config
	I0401 19:32:20.296015   70284 cni.go:84] Creating CNI manager for ""
	I0401 19:32:20.296039   70284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:32:20.296050   70284 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:32:20.296074   70284 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.119 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-472858 NodeName:no-preload-472858 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:32:20.296217   70284 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-472858"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
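The rendered KubeletConfiguration above deliberately disables disk-based eviction (imageGCHighThresholdPercent: 100 and every evictionHard threshold at "0%"), per the embedded comment. A minimal Go sketch, assuming the rendered file at /var/tmp/minikube/kubeadm.yaml and the gopkg.in/yaml.v3 package (neither is how minikube itself consumes the file), that pulls those fields back out of the multi-document YAML:

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // The file holds several YAML documents separated by "---";
        // decode them one by one.
        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                log.Fatal(err)
            }
            // Only the kubelet document carries the eviction settings.
            if doc["kind"] == "KubeletConfiguration" {
                fmt.Println("evictionHard:", doc["evictionHard"])
                fmt.Println("imageGCHighThresholdPercent:", doc["imageGCHighThresholdPercent"])
            }
        }
    }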
	I0401 19:32:20.296275   70284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.0
	I0401 19:32:20.307937   70284 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:32:20.308009   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:32:20.318571   70284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0401 19:32:20.339284   70284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0401 19:32:20.358601   70284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0401 19:32:20.379394   70284 ssh_runner.go:195] Run: grep 192.168.72.119	control-plane.minikube.internal$ /etc/hosts
	I0401 19:32:20.383948   70284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:32:20.397559   70284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:32:20.549147   70284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:32:20.568027   70284 certs.go:68] Setting up /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858 for IP: 192.168.72.119
	I0401 19:32:20.568051   70284 certs.go:194] generating shared ca certs ...
	I0401 19:32:20.568070   70284 certs.go:226] acquiring lock for ca certs: {Name:mk348b3e250c104b662139cd7212c6c6dfda3180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:32:20.568273   70284 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key
	I0401 19:32:20.568337   70284 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key
	I0401 19:32:20.568352   70284 certs.go:256] generating profile certs ...
	I0401 19:32:20.568453   70284 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/client.key
	I0401 19:32:20.568534   70284 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/apiserver.key.bfc8ff8f
	I0401 19:32:20.568586   70284 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/proxy-client.key
	I0401 19:32:20.568691   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem (1338 bytes)
	W0401 19:32:20.568718   70284 certs.go:480] ignoring /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751_empty.pem, impossibly tiny 0 bytes
	I0401 19:32:20.568728   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca-key.pem (1675 bytes)
	I0401 19:32:20.568747   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/ca.pem (1082 bytes)
	I0401 19:32:20.568773   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:32:20.568795   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/certs/key.pem (1679 bytes)
	I0401 19:32:20.568830   70284 certs.go:484] found cert: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem (1708 bytes)
	I0401 19:32:20.569519   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:32:20.605218   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:32:20.650321   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:32:20.676884   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:32:20.705378   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 19:32:20.733068   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:32:20.767387   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:32:20.793543   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/no-preload-472858/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 19:32:20.820843   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/ssl/certs/177512.pem --> /usr/share/ca-certificates/177512.pem (1708 bytes)
	I0401 19:32:20.848364   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:32:20.877551   70284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18233-10493/.minikube/certs/17751.pem --> /usr/share/ca-certificates/17751.pem (1338 bytes)
	I0401 19:32:20.904650   70284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0401 19:32:20.922876   70284 ssh_runner.go:195] Run: openssl version
	I0401 19:32:20.929441   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:32:20.942496   70284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:32:20.948011   70284 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 18:07 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:32:20.948080   70284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:32:20.954320   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 19:32:20.968060   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17751.pem && ln -fs /usr/share/ca-certificates/17751.pem /etc/ssl/certs/17751.pem"
	I0401 19:32:20.981591   70284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17751.pem
	I0401 19:32:20.986660   70284 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 18:15 /usr/share/ca-certificates/17751.pem
	I0401 19:32:20.986706   70284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17751.pem
	I0401 19:32:20.993394   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17751.pem /etc/ssl/certs/51391683.0"
	I0401 19:32:21.006530   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/177512.pem && ln -fs /usr/share/ca-certificates/177512.pem /etc/ssl/certs/177512.pem"
	I0401 19:32:21.020014   70284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/177512.pem
	I0401 19:32:21.025507   70284 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 18:15 /usr/share/ca-certificates/177512.pem
	I0401 19:32:21.025560   70284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/177512.pem
	I0401 19:32:21.032433   70284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/177512.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 19:32:21.047002   70284 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:32:21.052551   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 19:32:21.059875   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 19:32:21.067243   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 19:32:21.074304   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 19:32:21.080978   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 19:32:21.088051   70284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
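The `openssl x509 -checkend 86400` runs above verify that each existing control-plane certificate stays valid for at least another 24 hours before the restart path reuses it. A minimal Go sketch of an equivalent check with crypto/x509 (illustrative only, not minikube's implementation; the path below is one of the certificates checked in the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // validFor reports whether the first certificate in the PEM file is
    // still valid "d" from now, mirroring `openssl x509 -checkend`.
    func validFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("valid for another 24h:", ok)
    }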
	I0401 19:32:21.095219   70284 kubeadm.go:391] StartCluster: {Name:no-preload-472858 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.0 ClusterName:no-preload-472858 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:32:21.095325   70284 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:32:21.095403   70284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:32:21.144103   70284 cri.go:89] found id: ""
	I0401 19:32:21.144187   70284 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0401 19:32:21.157222   70284 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0401 19:32:21.157241   70284 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0401 19:32:21.157246   70284 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0401 19:32:21.157290   70284 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 19:32:21.169027   70284 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 19:32:21.170123   70284 kubeconfig.go:125] found "no-preload-472858" server: "https://192.168.72.119:8443"
	I0401 19:32:21.172523   70284 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 19:32:21.183801   70284 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.119
	I0401 19:32:21.183838   70284 kubeadm.go:1154] stopping kube-system containers ...
	I0401 19:32:21.183847   70284 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0401 19:32:21.183892   70284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:32:21.229279   70284 cri.go:89] found id: ""
	I0401 19:32:21.229357   70284 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0401 19:32:21.249719   70284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:32:21.261894   70284 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:32:21.261929   70284 kubeadm.go:156] found existing configuration files:
	
	I0401 19:32:21.261984   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:32:21.273961   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:32:21.274026   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:32:21.286746   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:32:21.297920   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:32:21.297986   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:32:21.308793   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:32:21.319612   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:32:21.319658   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:32:21.332730   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:32:21.344752   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:32:21.344810   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:32:21.355821   70284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:32:21.366649   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:21.482208   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:18.607685   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:20.607824   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:18.123795   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:18.623529   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:19.123446   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:19.623223   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:20.123133   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:20.623058   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:21.123302   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:21.623115   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:22.123810   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:22.623878   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:22.826056   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:24.826357   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:22.312148   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:22.533156   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:32:22.620390   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
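Rather than re-running a full `kubeadm init`, the restart path above regenerates state piecewise with individual `kubeadm init phase` subcommands: certs, kubeconfigs, kubelet-start, the static control-plane manifests, and local etcd. A simplified Go sketch of that sequence (assumes direct execution on the node; minikube actually drives it over SSH with the `sudo env PATH=...` wrapper shown in the log):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // The same restart sequence seen in the log, in order; error
        // handling is simplified to a fatal exit on the first failure.
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("/var/lib/minikube/binaries/v1.30.0-rc.0/kubeadm", args...)
            out, err := cmd.CombinedOutput()
            fmt.Printf("kubeadm %v\n%s\n", args, out)
            if err != nil {
                log.Fatal(err)
            }
        }
    }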
	I0401 19:32:22.704948   70284 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:32:22.705039   70284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.205114   70284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.706000   70284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.725209   70284 api_server.go:72] duration metric: took 1.020261742s to wait for apiserver process to appear ...
	I0401 19:32:23.725243   70284 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:32:23.725264   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:23.725749   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": dial tcp 192.168.72.119:8443: connect: connection refused
	I0401 19:32:24.226383   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:23.107450   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:25.109899   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:23.123507   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:23.623244   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:24.123444   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:24.623346   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:25.123834   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:25.623814   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:26.124028   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:26.623428   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:27.123592   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:27.623451   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:27.327961   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:29.826272   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:29.226831   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:29.226876   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:27.607575   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:29.608427   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:32.106668   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:28.123454   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:28.623502   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:29.123265   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:29.623449   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:30.123525   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:30.623634   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:31.123972   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:31.623023   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:32.123346   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:32.623839   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:32.325638   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:34.325777   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:36.326510   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:34.227668   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:34.227723   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:34.606929   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:36.607515   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:33.123673   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:33.623088   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:34.123230   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:34.623967   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:35.123420   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:35.623499   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:36.123152   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:36.623963   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:37.123682   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:37.623536   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:38.829585   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:41.325607   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:39.228117   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:39.228164   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:39.107473   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:41.607043   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:38.123238   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:38.623831   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:39.123180   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:39.623801   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:40.123478   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:40.623651   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:41.123687   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:41.624016   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:42.123891   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:42.623493   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:43.326457   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:45.827310   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:44.228934   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:44.228982   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:44.259601   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": read tcp 192.168.72.1:37026->192.168.72.119:8443: read: connection reset by peer
	I0401 19:32:44.726186   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:44.726759   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": dial tcp 192.168.72.119:8443: connect: connection refused
	I0401 19:32:45.226347   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:43.607936   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:46.106775   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:43.123504   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:43.623527   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:44.124016   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:44.623931   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:45.123188   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:45.623649   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:46.123570   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:46.623179   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:47.123273   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:47.623842   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:48.325252   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:50.327365   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:50.226859   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:50.226907   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:48.109152   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:50.607327   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:48.123759   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:48.623092   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:49.123174   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:49.623986   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:50.123301   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:50.623694   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:51.123466   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:51.623618   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:52.123073   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:32:52.123172   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:32:52.164635   71168 cri.go:89] found id: ""
	I0401 19:32:52.164656   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.164663   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:32:52.164669   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:32:52.164738   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:32:52.202531   71168 cri.go:89] found id: ""
	I0401 19:32:52.202560   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.202572   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:32:52.202580   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:32:52.202653   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:32:52.247667   71168 cri.go:89] found id: ""
	I0401 19:32:52.247693   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.247703   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:32:52.247714   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:32:52.247774   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:32:52.289029   71168 cri.go:89] found id: ""
	I0401 19:32:52.289054   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.289062   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:32:52.289068   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:32:52.289114   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:32:52.326820   71168 cri.go:89] found id: ""
	I0401 19:32:52.326864   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.326875   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:32:52.326882   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:32:52.326944   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:32:52.362793   71168 cri.go:89] found id: ""
	I0401 19:32:52.362827   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.362838   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:32:52.362845   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:32:52.362950   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:32:52.400174   71168 cri.go:89] found id: ""
	I0401 19:32:52.400204   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.400215   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:32:52.400222   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:32:52.400282   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:32:52.436027   71168 cri.go:89] found id: ""
	I0401 19:32:52.436056   71168 logs.go:276] 0 containers: []
	W0401 19:32:52.436066   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:32:52.436085   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:32:52.436099   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:32:52.477246   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:32:52.477272   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:32:52.529215   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:32:52.529247   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:32:52.544695   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:32:52.544724   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:32:52.677816   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:32:52.677849   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:32:52.677877   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:32:52.825288   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:54.826043   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:55.228105   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:32:55.228139   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:53.106774   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:55.107668   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:55.241224   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:55.256975   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:32:55.257045   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:32:55.298280   71168 cri.go:89] found id: ""
	I0401 19:32:55.298307   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.298319   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:32:55.298326   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:32:55.298397   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:32:55.337707   71168 cri.go:89] found id: ""
	I0401 19:32:55.337732   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.337739   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:32:55.337745   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:32:55.337791   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:32:55.381455   71168 cri.go:89] found id: ""
	I0401 19:32:55.381479   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.381490   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:32:55.381496   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:32:55.381557   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:32:55.420715   71168 cri.go:89] found id: ""
	I0401 19:32:55.420739   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.420749   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:32:55.420756   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:32:55.420820   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:32:55.459546   71168 cri.go:89] found id: ""
	I0401 19:32:55.459575   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.459583   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:32:55.459588   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:32:55.459634   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:32:55.504240   71168 cri.go:89] found id: ""
	I0401 19:32:55.504267   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.504277   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:32:55.504285   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:32:55.504368   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:32:55.539399   71168 cri.go:89] found id: ""
	I0401 19:32:55.539426   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.539437   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:32:55.539443   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:32:55.539509   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:32:55.583823   71168 cri.go:89] found id: ""
	I0401 19:32:55.583861   71168 logs.go:276] 0 containers: []
	W0401 19:32:55.583872   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:32:55.583881   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:32:55.583895   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:32:55.645489   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:32:55.645523   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:32:55.712883   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:32:55.712920   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:32:55.734890   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:32:55.734923   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:32:55.853068   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:32:55.853089   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:32:55.853102   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:32:57.325965   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:59.827753   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:00.228533   70284 api_server.go:269] stopped: https://192.168.72.119:8443/healthz: Get "https://192.168.72.119:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0401 19:33:00.228582   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:32:57.607203   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:59.610732   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:02.108676   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:32:58.435925   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:32:58.450910   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:32:58.450980   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:32:58.487470   71168 cri.go:89] found id: ""
	I0401 19:32:58.487495   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.487506   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:32:58.487514   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:32:58.487562   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:32:58.529513   71168 cri.go:89] found id: ""
	I0401 19:32:58.529534   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.529543   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:32:58.529547   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:32:58.529592   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:32:58.574170   71168 cri.go:89] found id: ""
	I0401 19:32:58.574197   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.574205   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:32:58.574211   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:32:58.574258   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:32:58.615379   71168 cri.go:89] found id: ""
	I0401 19:32:58.615405   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.615414   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:32:58.615419   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:32:58.615468   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:32:58.655496   71168 cri.go:89] found id: ""
	I0401 19:32:58.655523   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.655534   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:32:58.655542   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:32:58.655593   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:32:58.697199   71168 cri.go:89] found id: ""
	I0401 19:32:58.697229   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.697238   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:32:58.697246   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:32:58.697312   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:32:58.735618   71168 cri.go:89] found id: ""
	I0401 19:32:58.735643   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.735651   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:32:58.735656   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:32:58.735701   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:32:58.780583   71168 cri.go:89] found id: ""
	I0401 19:32:58.780613   71168 logs.go:276] 0 containers: []
	W0401 19:32:58.780624   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:32:58.780635   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:32:58.780649   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:32:58.829717   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:32:58.829743   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:32:58.844836   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:32:58.844866   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:32:58.923138   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:32:58.923157   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:32:58.923172   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:32:58.993680   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:32:58.993713   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:01.538920   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:01.556943   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:01.557017   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:01.608397   71168 cri.go:89] found id: ""
	I0401 19:33:01.608417   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.608425   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:01.608430   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:01.608490   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:01.666573   71168 cri.go:89] found id: ""
	I0401 19:33:01.666599   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.666609   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:01.666615   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:01.666674   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:01.726308   71168 cri.go:89] found id: ""
	I0401 19:33:01.726331   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.726341   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:01.726347   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:01.726412   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:01.773095   71168 cri.go:89] found id: ""
	I0401 19:33:01.773118   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.773125   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:01.773131   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:01.773189   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:01.813011   71168 cri.go:89] found id: ""
	I0401 19:33:01.813034   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.813042   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:01.813048   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:01.813096   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:01.859124   71168 cri.go:89] found id: ""
	I0401 19:33:01.859151   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.859161   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:01.859169   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:01.859228   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:01.904491   71168 cri.go:89] found id: ""
	I0401 19:33:01.904519   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.904530   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:01.904537   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:01.904596   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:01.946768   71168 cri.go:89] found id: ""
	I0401 19:33:01.946794   71168 logs.go:276] 0 containers: []
	W0401 19:33:01.946804   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:01.946815   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:01.946829   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:02.026315   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:02.026362   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:02.072861   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:02.072893   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:02.132064   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:02.132105   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:02.151545   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:02.151575   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:02.234059   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:02.325806   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:04.327258   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:03.215901   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:33:03.215933   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:33:03.215947   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:03.264913   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:33:03.264946   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:33:03.264961   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:03.272548   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0401 19:33:03.272580   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0401 19:33:03.726254   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:03.731022   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:03.731050   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:04.225595   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:04.237757   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:04.237783   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:04.725330   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:04.734019   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:04.734047   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:05.225303   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:05.242774   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:05.242811   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:05.726350   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:05.730775   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:05.730838   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:06.225345   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:06.229749   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:06.229793   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:06.725687   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:06.730607   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:06.730640   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:04.112109   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:06.606160   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:04.734559   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:04.755071   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:04.755130   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:04.798316   71168 cri.go:89] found id: ""
	I0401 19:33:04.798345   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.798358   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:04.798366   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:04.798426   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:04.840011   71168 cri.go:89] found id: ""
	I0401 19:33:04.840032   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.840043   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:04.840050   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:04.840106   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:04.883686   71168 cri.go:89] found id: ""
	I0401 19:33:04.883713   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.883725   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:04.883733   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:04.883795   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:04.933810   71168 cri.go:89] found id: ""
	I0401 19:33:04.933844   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.933855   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:04.933863   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:04.933925   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:04.983118   71168 cri.go:89] found id: ""
	I0401 19:33:04.983139   71168 logs.go:276] 0 containers: []
	W0401 19:33:04.983146   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:04.983151   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:04.983207   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:05.036146   71168 cri.go:89] found id: ""
	I0401 19:33:05.036169   71168 logs.go:276] 0 containers: []
	W0401 19:33:05.036179   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:05.036186   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:05.036242   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:05.086269   71168 cri.go:89] found id: ""
	I0401 19:33:05.086296   71168 logs.go:276] 0 containers: []
	W0401 19:33:05.086308   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:05.086315   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:05.086378   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:05.140893   71168 cri.go:89] found id: ""
	I0401 19:33:05.140914   71168 logs.go:276] 0 containers: []
	W0401 19:33:05.140922   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:05.140931   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:05.140946   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:05.161222   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:05.161249   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:05.262254   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:05.262276   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:05.262289   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:05.352880   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:05.352908   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:05.400720   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:05.400748   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:07.954227   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:07.225774   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:07.230656   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:07.230684   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:07.726299   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:07.731793   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:07.731830   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:08.225362   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:08.229716   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0401 19:33:08.229755   70284 api_server.go:103] status: https://192.168.72.119:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0401 19:33:08.725315   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:33:08.733428   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 200:
	ok
	I0401 19:33:08.739761   70284 api_server.go:141] control plane version: v1.30.0-rc.0
	I0401 19:33:08.739788   70284 api_server.go:131] duration metric: took 45.014537527s to wait for apiserver health ...
	I0401 19:33:08.739796   70284 cni.go:84] Creating CNI manager for ""
	I0401 19:33:08.739802   70284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:33:08.741701   70284 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:33:06.825165   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:08.829987   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:11.327172   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:08.743011   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:33:08.758184   70284 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0401 19:33:08.778975   70284 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:33:08.789725   70284 system_pods.go:59] 8 kube-system pods found
	I0401 19:33:08.789763   70284 system_pods.go:61] "coredns-7db6d8ff4d-gdml5" [039c8887-dff0-40e5-b8b5-00ef2f4a21cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:33:08.789771   70284 system_pods.go:61] "etcd-no-preload-472858" [09086659-e20f-40da-b01f-3690e110ffeb] Running
	I0401 19:33:08.789781   70284 system_pods.go:61] "kube-apiserver-no-preload-472858" [5139434c-3d23-4736-86ad-28253c89f7da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0401 19:33:08.789794   70284 system_pods.go:61] "kube-controller-manager-no-preload-472858" [965d600a-612e-4625-b883-7105f9166503] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0401 19:33:08.789806   70284 system_pods.go:61] "kube-proxy-7c22p" [903412f5-252c-41f3-81ac-1ae47522b403] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:33:08.789820   70284 system_pods.go:61] "kube-scheduler-no-preload-472858" [936981be-fc5e-4865-811c-936fab59f37b] Running
	I0401 19:33:08.789832   70284 system_pods.go:61] "metrics-server-569cc877fc-wlr7k" [14010e9a-9662-46c9-bc46-cc6d19c0cddf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:33:08.789839   70284 system_pods.go:61] "storage-provisioner" [2e5d9f78-e74c-4b3b-8878-e4bd8ce34108] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:33:08.789861   70284 system_pods.go:74] duration metric: took 10.868458ms to wait for pod list to return data ...
	I0401 19:33:08.789874   70284 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:33:08.793853   70284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:33:08.793883   70284 node_conditions.go:123] node cpu capacity is 2
	I0401 19:33:08.793897   70284 node_conditions.go:105] duration metric: took 4.016996ms to run NodePressure ...
	I0401 19:33:08.793916   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0401 19:33:09.081698   70284 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0401 19:33:09.085681   70284 kubeadm.go:733] kubelet initialised
	I0401 19:33:09.085699   70284 kubeadm.go:734] duration metric: took 3.976973ms waiting for restarted kubelet to initialise ...
	I0401 19:33:09.085705   70284 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:33:09.090647   70284 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:11.102738   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:08.608194   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:11.109659   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:07.970794   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:07.970850   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:08.013694   71168 cri.go:89] found id: ""
	I0401 19:33:08.013719   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.013729   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:08.013737   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:08.013810   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:08.050810   71168 cri.go:89] found id: ""
	I0401 19:33:08.050849   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.050861   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:08.050868   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:08.050932   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:08.092056   71168 cri.go:89] found id: ""
	I0401 19:33:08.092086   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.092096   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:08.092102   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:08.092157   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:08.133171   71168 cri.go:89] found id: ""
	I0401 19:33:08.133195   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.133205   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:08.133212   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:08.133271   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:08.173997   71168 cri.go:89] found id: ""
	I0401 19:33:08.174023   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.174034   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:08.174041   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:08.174102   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:08.212740   71168 cri.go:89] found id: ""
	I0401 19:33:08.212768   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.212778   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:08.212785   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:08.212831   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:08.254815   71168 cri.go:89] found id: ""
	I0401 19:33:08.254837   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.254847   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:08.254854   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:08.254909   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:08.295347   71168 cri.go:89] found id: ""
	I0401 19:33:08.295375   71168 logs.go:276] 0 containers: []
	W0401 19:33:08.295382   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:08.295390   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:08.295402   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:08.311574   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:08.311600   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:08.405437   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:08.405455   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:08.405470   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:08.483687   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:08.483722   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:08.526132   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:08.526158   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:11.076590   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:11.093846   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:11.093983   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:11.146046   71168 cri.go:89] found id: ""
	I0401 19:33:11.146073   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.146083   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:11.146088   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:11.146146   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:11.193751   71168 cri.go:89] found id: ""
	I0401 19:33:11.193782   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.193793   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:11.193801   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:11.193873   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:11.242150   71168 cri.go:89] found id: ""
	I0401 19:33:11.242178   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.242189   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:11.242197   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:11.242271   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:11.294063   71168 cri.go:89] found id: ""
	I0401 19:33:11.294092   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.294103   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:11.294110   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:11.294175   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:11.334764   71168 cri.go:89] found id: ""
	I0401 19:33:11.334784   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.334791   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:11.334797   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:11.334846   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:11.372770   71168 cri.go:89] found id: ""
	I0401 19:33:11.372789   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.372795   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:11.372806   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:11.372871   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:11.413233   71168 cri.go:89] found id: ""
	I0401 19:33:11.413261   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.413271   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:11.413278   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:11.413337   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:11.456044   71168 cri.go:89] found id: ""
	I0401 19:33:11.456073   71168 logs.go:276] 0 containers: []
	W0401 19:33:11.456084   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:11.456093   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:11.456103   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:11.471157   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:11.471183   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:11.550489   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:11.550508   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:11.550523   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:11.635360   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:11.635389   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:11.680683   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:11.680713   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:13.827425   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:16.325563   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:13.104812   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:15.602114   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:13.607926   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:16.107219   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:14.235295   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:14.251513   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:14.251590   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:14.291688   71168 cri.go:89] found id: ""
	I0401 19:33:14.291715   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.291725   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:14.291732   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:14.291792   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:14.332030   71168 cri.go:89] found id: ""
	I0401 19:33:14.332051   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.332060   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:14.332068   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:14.332132   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:14.372098   71168 cri.go:89] found id: ""
	I0401 19:33:14.372122   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.372130   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:14.372137   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:14.372183   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:14.410529   71168 cri.go:89] found id: ""
	I0401 19:33:14.410554   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.410563   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:14.410570   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:14.410624   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:14.451198   71168 cri.go:89] found id: ""
	I0401 19:33:14.451226   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.451238   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:14.451246   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:14.451306   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:14.494588   71168 cri.go:89] found id: ""
	I0401 19:33:14.494616   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.494627   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:14.494635   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:14.494689   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:14.537561   71168 cri.go:89] found id: ""
	I0401 19:33:14.537583   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.537590   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:14.537597   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:14.537674   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:14.580624   71168 cri.go:89] found id: ""
	I0401 19:33:14.580651   71168 logs.go:276] 0 containers: []
	W0401 19:33:14.580662   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:14.580672   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:14.580688   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:14.635769   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:14.635798   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:14.650275   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:14.650304   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:14.742355   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:14.742378   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:14.742394   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:14.827839   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:14.827869   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:17.373408   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:17.390110   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:17.390185   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:17.432355   71168 cri.go:89] found id: ""
	I0401 19:33:17.432384   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.432396   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:17.432409   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:17.432471   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:17.476458   71168 cri.go:89] found id: ""
	I0401 19:33:17.476484   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.476495   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:17.476502   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:17.476587   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:17.519657   71168 cri.go:89] found id: ""
	I0401 19:33:17.519686   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.519694   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:17.519699   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:17.519751   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:17.559962   71168 cri.go:89] found id: ""
	I0401 19:33:17.559985   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.559992   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:17.559997   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:17.560054   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:17.608924   71168 cri.go:89] found id: ""
	I0401 19:33:17.608995   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.609009   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:17.609016   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:17.609075   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:17.648371   71168 cri.go:89] found id: ""
	I0401 19:33:17.648394   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.648401   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:17.648406   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:17.648462   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:17.689217   71168 cri.go:89] found id: ""
	I0401 19:33:17.689239   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.689246   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:17.689252   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:17.689312   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:17.741738   71168 cri.go:89] found id: ""
	I0401 19:33:17.741768   71168 logs.go:276] 0 containers: []
	W0401 19:33:17.741779   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:17.741790   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:17.741805   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:17.839857   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:17.839887   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:17.888684   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:17.888716   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:17.944268   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:17.944298   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:17.959305   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:17.959334   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 19:33:18.327388   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:20.826627   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:18.100065   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:20.100714   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:18.107770   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:20.108880   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	W0401 19:33:18.040820   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:20.541980   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:20.558198   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:20.558270   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:20.596329   71168 cri.go:89] found id: ""
	I0401 19:33:20.596357   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.596366   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:20.596373   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:20.596431   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:20.638611   71168 cri.go:89] found id: ""
	I0401 19:33:20.638639   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.638664   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:20.638672   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:20.638729   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:20.677984   71168 cri.go:89] found id: ""
	I0401 19:33:20.678014   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.678024   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:20.678032   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:20.678080   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:20.718491   71168 cri.go:89] found id: ""
	I0401 19:33:20.718520   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.718530   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:20.718537   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:20.718597   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:20.772147   71168 cri.go:89] found id: ""
	I0401 19:33:20.772174   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.772185   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:20.772199   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:20.772258   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:20.823339   71168 cri.go:89] found id: ""
	I0401 19:33:20.823361   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.823372   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:20.823380   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:20.823463   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:20.884081   71168 cri.go:89] found id: ""
	I0401 19:33:20.884106   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.884117   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:20.884124   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:20.884185   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:20.931679   71168 cri.go:89] found id: ""
	I0401 19:33:20.931703   71168 logs.go:276] 0 containers: []
	W0401 19:33:20.931713   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:20.931722   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:20.931736   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:21.016766   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:21.016797   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:21.067600   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:21.067632   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:21.136989   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:21.137045   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:21.152673   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:21.152706   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:21.250186   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:23.325222   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:25.326919   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:22.597922   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:24.602701   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:22.606659   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:24.606811   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:26.608185   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:23.750565   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:23.768458   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:23.768534   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:23.814489   71168 cri.go:89] found id: ""
	I0401 19:33:23.814534   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.814555   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:23.814565   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:23.814632   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:23.854954   71168 cri.go:89] found id: ""
	I0401 19:33:23.854981   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.854989   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:23.854995   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:23.855060   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:23.896115   71168 cri.go:89] found id: ""
	I0401 19:33:23.896148   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.896159   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:23.896169   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:23.896231   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:23.941300   71168 cri.go:89] found id: ""
	I0401 19:33:23.941324   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.941337   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:23.941344   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:23.941390   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:23.983955   71168 cri.go:89] found id: ""
	I0401 19:33:23.983982   71168 logs.go:276] 0 containers: []
	W0401 19:33:23.983991   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:23.983997   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:23.984056   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:24.020756   71168 cri.go:89] found id: ""
	I0401 19:33:24.020777   71168 logs.go:276] 0 containers: []
	W0401 19:33:24.020784   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:24.020789   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:24.020835   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:24.063426   71168 cri.go:89] found id: ""
	I0401 19:33:24.063454   71168 logs.go:276] 0 containers: []
	W0401 19:33:24.063462   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:24.063467   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:24.063529   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:24.110924   71168 cri.go:89] found id: ""
	I0401 19:33:24.110945   71168 logs.go:276] 0 containers: []
	W0401 19:33:24.110952   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:24.110960   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:24.110969   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:24.179200   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:24.179240   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:24.194880   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:24.194909   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:24.280555   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:24.280588   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:24.280603   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:24.359502   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:24.359534   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:26.909147   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:26.925961   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:26.926028   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:26.969502   71168 cri.go:89] found id: ""
	I0401 19:33:26.969525   71168 logs.go:276] 0 containers: []
	W0401 19:33:26.969536   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:26.969543   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:26.969604   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:27.015205   71168 cri.go:89] found id: ""
	I0401 19:33:27.015232   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.015241   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:27.015246   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:27.015296   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:27.055943   71168 cri.go:89] found id: ""
	I0401 19:33:27.055968   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.055977   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:27.055983   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:27.056039   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:27.095447   71168 cri.go:89] found id: ""
	I0401 19:33:27.095474   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.095485   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:27.095497   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:27.095558   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:27.137912   71168 cri.go:89] found id: ""
	I0401 19:33:27.137941   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.137948   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:27.137954   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:27.138008   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:27.183303   71168 cri.go:89] found id: ""
	I0401 19:33:27.183325   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.183335   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:27.183344   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:27.183403   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:27.225780   71168 cri.go:89] found id: ""
	I0401 19:33:27.225804   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.225814   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:27.225822   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:27.225880   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:27.268136   71168 cri.go:89] found id: ""
	I0401 19:33:27.268159   71168 logs.go:276] 0 containers: []
	W0401 19:33:27.268168   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:27.268191   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:27.268215   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:27.325527   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:27.325557   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:27.341727   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:27.341763   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:27.432369   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:27.432389   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:27.432403   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:27.523104   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:27.523135   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:27.826804   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:30.326279   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:27.099509   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:29.597830   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:31.598325   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:29.107400   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:31.107514   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:30.066147   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:30.079999   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:30.080062   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:30.121887   71168 cri.go:89] found id: ""
	I0401 19:33:30.121911   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.121920   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:30.121929   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:30.121986   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:30.163939   71168 cri.go:89] found id: ""
	I0401 19:33:30.163967   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.163978   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:30.163986   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:30.164051   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:30.203924   71168 cri.go:89] found id: ""
	I0401 19:33:30.203965   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.203977   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:30.203985   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:30.204048   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:30.243771   71168 cri.go:89] found id: ""
	I0401 19:33:30.243798   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.243809   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:30.243816   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:30.243888   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:30.284039   71168 cri.go:89] found id: ""
	I0401 19:33:30.284066   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.284074   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:30.284079   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:30.284127   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:30.327549   71168 cri.go:89] found id: ""
	I0401 19:33:30.327570   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.327577   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:30.327583   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:30.327630   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:30.365258   71168 cri.go:89] found id: ""
	I0401 19:33:30.365281   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.365291   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:30.365297   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:30.365352   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:30.405959   71168 cri.go:89] found id: ""
	I0401 19:33:30.405984   71168 logs.go:276] 0 containers: []
	W0401 19:33:30.405992   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:30.405999   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:30.406011   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:30.480668   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:30.480692   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:30.480706   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:30.566042   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:30.566077   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:30.629250   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:30.629285   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:30.682185   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:30.682213   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:32.824844   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:34.826598   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:33.600555   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:36.100194   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:33.608315   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:36.106573   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:33.199466   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:33.213557   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:33.213630   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:33.255038   71168 cri.go:89] found id: ""
	I0401 19:33:33.255062   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.255072   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:33.255079   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:33.255143   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:33.297724   71168 cri.go:89] found id: ""
	I0401 19:33:33.297751   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.297761   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:33.297767   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:33.297836   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:33.340694   71168 cri.go:89] found id: ""
	I0401 19:33:33.340718   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.340727   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:33.340735   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:33.340794   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:33.388857   71168 cri.go:89] found id: ""
	I0401 19:33:33.388883   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.388891   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:33.388896   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:33.388940   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:33.430875   71168 cri.go:89] found id: ""
	I0401 19:33:33.430899   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.430906   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:33.430911   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:33.430966   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:33.479877   71168 cri.go:89] found id: ""
	I0401 19:33:33.479905   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.479917   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:33.479923   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:33.479968   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:33.522635   71168 cri.go:89] found id: ""
	I0401 19:33:33.522662   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.522672   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:33.522680   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:33.522737   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:33.560497   71168 cri.go:89] found id: ""
	I0401 19:33:33.560519   71168 logs.go:276] 0 containers: []
	W0401 19:33:33.560527   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:33.560534   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:33.560549   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:33.612141   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:33.612170   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:33.665142   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:33.665170   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:33.681076   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:33.681100   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:33.755938   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:33.755966   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:33.755983   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:36.341957   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:36.359519   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:36.359586   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:36.416339   71168 cri.go:89] found id: ""
	I0401 19:33:36.416362   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.416373   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:36.416381   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:36.416442   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:36.473883   71168 cri.go:89] found id: ""
	I0401 19:33:36.473906   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.473918   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:36.473925   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:36.473988   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:36.521532   71168 cri.go:89] found id: ""
	I0401 19:33:36.521558   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.521568   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:36.521575   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:36.521639   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:36.563420   71168 cri.go:89] found id: ""
	I0401 19:33:36.563446   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.563454   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:36.563459   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:36.563520   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:36.605658   71168 cri.go:89] found id: ""
	I0401 19:33:36.605678   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.605689   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:36.605697   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:36.605759   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:36.645611   71168 cri.go:89] found id: ""
	I0401 19:33:36.645631   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.645638   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:36.645656   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:36.645715   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:36.685994   71168 cri.go:89] found id: ""
	I0401 19:33:36.686022   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.686033   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:36.686041   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:36.686099   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:36.725573   71168 cri.go:89] found id: ""
	I0401 19:33:36.725598   71168 logs.go:276] 0 containers: []
	W0401 19:33:36.725608   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:36.725618   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:36.725630   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:36.778854   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:36.778885   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:36.795003   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:36.795036   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:36.872648   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:36.872666   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:36.872678   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:36.956648   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:36.956683   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:36.827745   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:38.830544   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:41.326012   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:38.597991   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:41.097044   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:38.107961   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:40.606475   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:39.502868   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:39.519090   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:39.519161   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:39.562347   71168 cri.go:89] found id: ""
	I0401 19:33:39.562371   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.562379   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:39.562384   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:39.562442   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:39.607250   71168 cri.go:89] found id: ""
	I0401 19:33:39.607276   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.607286   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:39.607293   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:39.607343   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:39.650683   71168 cri.go:89] found id: ""
	I0401 19:33:39.650704   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.650712   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:39.650717   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:39.650764   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:39.694676   71168 cri.go:89] found id: ""
	I0401 19:33:39.694706   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.694718   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:39.694724   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:39.694783   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:39.733873   71168 cri.go:89] found id: ""
	I0401 19:33:39.733901   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.733911   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:39.733919   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:39.733980   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:39.773625   71168 cri.go:89] found id: ""
	I0401 19:33:39.773668   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.773679   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:39.773686   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:39.773735   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:39.815020   71168 cri.go:89] found id: ""
	I0401 19:33:39.815053   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.815064   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:39.815071   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:39.815134   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:39.855575   71168 cri.go:89] found id: ""
	I0401 19:33:39.855606   71168 logs.go:276] 0 containers: []
	W0401 19:33:39.855615   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:39.855626   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:39.855641   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:39.873827   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:39.873857   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:39.948487   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:39.948507   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:39.948521   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:40.034026   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:40.034062   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:40.077798   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:40.077828   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:42.637999   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:42.654991   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:42.655063   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:42.695920   71168 cri.go:89] found id: ""
	I0401 19:33:42.695953   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.695964   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:42.695971   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:42.696030   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:42.737303   71168 cri.go:89] found id: ""
	I0401 19:33:42.737325   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.737333   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:42.737341   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:42.737393   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:42.777922   71168 cri.go:89] found id: ""
	I0401 19:33:42.777953   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.777965   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:42.777972   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:42.778036   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:42.818339   71168 cri.go:89] found id: ""
	I0401 19:33:42.818364   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.818372   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:42.818379   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:42.818435   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:42.859470   71168 cri.go:89] found id: ""
	I0401 19:33:42.859494   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.859502   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:42.859507   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:42.859556   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:42.901950   71168 cri.go:89] found id: ""
	I0401 19:33:42.901980   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.901989   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:42.901996   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:42.902063   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:42.947230   71168 cri.go:89] found id: ""
	I0401 19:33:42.947258   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.947268   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:42.947275   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:42.947351   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:43.827204   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:46.325749   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:43.098252   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:45.098316   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:42.607590   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:44.607666   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:47.107837   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:42.988997   71168 cri.go:89] found id: ""
	I0401 19:33:42.989022   71168 logs.go:276] 0 containers: []
	W0401 19:33:42.989032   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:42.989049   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:42.989066   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:43.075323   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:43.075352   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:43.075363   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:43.164445   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:43.164479   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:43.215852   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:43.215885   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:43.271301   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:43.271334   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
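	The cycle above is minikube's periodic probe while the control plane is down: it checks each expected component with crictl, finds no containers (the repeated `found id: ""` lines), and then falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal sketch of the same probe sequence run by hand on the node follows; the shell loop itself is an illustration, but every individual command is taken verbatim from the log lines above:

	    # Probe for control-plane containers, the same queries the log gatherer issues;
	    # an empty result corresponds to the 'found id: ""' lines above.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	      echo "== $c =="
	      sudo crictl ps -a --quiet --name="$c"
	    done
	    # Fallback log sources collected when no containers are found.
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo crictl ps -a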
	I0401 19:33:45.786705   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:45.804389   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:45.804445   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:45.849838   71168 cri.go:89] found id: ""
	I0401 19:33:45.849872   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.849883   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:45.849891   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:45.849950   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:45.890603   71168 cri.go:89] found id: ""
	I0401 19:33:45.890625   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.890635   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:45.890642   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:45.890703   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:45.929189   71168 cri.go:89] found id: ""
	I0401 19:33:45.929210   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.929218   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:45.929223   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:45.929268   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:45.968266   71168 cri.go:89] found id: ""
	I0401 19:33:45.968292   71168 logs.go:276] 0 containers: []
	W0401 19:33:45.968303   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:45.968310   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:45.968365   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:46.007114   71168 cri.go:89] found id: ""
	I0401 19:33:46.007135   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.007143   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:46.007148   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:46.007195   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:46.046067   71168 cri.go:89] found id: ""
	I0401 19:33:46.046088   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.046095   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:46.046101   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:46.046186   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:46.083604   71168 cri.go:89] found id: ""
	I0401 19:33:46.083630   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.083644   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:46.083651   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:46.083709   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:46.125435   71168 cri.go:89] found id: ""
	I0401 19:33:46.125457   71168 logs.go:276] 0 containers: []
	W0401 19:33:46.125464   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:46.125472   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:46.125483   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:46.179060   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:46.179092   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:46.195139   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:46.195179   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:46.275876   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:46.275903   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:46.275914   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:46.365430   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:46.365465   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:48.825540   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:50.827204   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:47.099197   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:49.105260   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:51.597808   70284 pod_ready.go:102] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:49.108344   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:51.607079   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:48.908390   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:48.924357   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:48.924416   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:48.969325   71168 cri.go:89] found id: ""
	I0401 19:33:48.969351   71168 logs.go:276] 0 containers: []
	W0401 19:33:48.969359   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:48.969364   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:48.969421   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:49.006702   71168 cri.go:89] found id: ""
	I0401 19:33:49.006724   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.006731   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:49.006736   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:49.006785   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:49.051196   71168 cri.go:89] found id: ""
	I0401 19:33:49.051229   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.051241   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:49.051260   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:49.051336   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:49.098123   71168 cri.go:89] found id: ""
	I0401 19:33:49.098150   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.098159   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:49.098166   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:49.098225   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:49.138203   71168 cri.go:89] found id: ""
	I0401 19:33:49.138232   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.138239   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:49.138244   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:49.138290   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:49.185441   71168 cri.go:89] found id: ""
	I0401 19:33:49.185465   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.185473   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:49.185478   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:49.185537   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:49.235649   71168 cri.go:89] found id: ""
	I0401 19:33:49.235670   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.235678   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:49.235683   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:49.235762   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:49.279638   71168 cri.go:89] found id: ""
	I0401 19:33:49.279662   71168 logs.go:276] 0 containers: []
	W0401 19:33:49.279673   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:49.279683   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:49.279699   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:49.340761   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:49.340798   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:49.356552   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:49.356581   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:49.441110   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:49.441129   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:49.441140   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:49.523159   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:49.523189   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:52.067710   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:52.082986   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:52.083046   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:52.128510   71168 cri.go:89] found id: ""
	I0401 19:33:52.128531   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.128538   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:52.128543   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:52.128590   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:52.167767   71168 cri.go:89] found id: ""
	I0401 19:33:52.167792   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.167803   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:52.167810   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:52.167871   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:52.206384   71168 cri.go:89] found id: ""
	I0401 19:33:52.206416   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.206426   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:52.206433   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:52.206493   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:52.245277   71168 cri.go:89] found id: ""
	I0401 19:33:52.245301   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.245309   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:52.245318   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:52.245388   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:52.283925   71168 cri.go:89] found id: ""
	I0401 19:33:52.283954   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.283964   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:52.283971   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:52.284032   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:52.323944   71168 cri.go:89] found id: ""
	I0401 19:33:52.323970   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.323981   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:52.323988   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:52.324045   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:52.364853   71168 cri.go:89] found id: ""
	I0401 19:33:52.364882   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.364893   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:52.364901   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:52.364958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:52.404136   71168 cri.go:89] found id: ""
	I0401 19:33:52.404158   71168 logs.go:276] 0 containers: []
	W0401 19:33:52.404165   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:52.404173   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:52.404184   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:52.459097   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:52.459129   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:52.474392   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:52.474417   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:52.551817   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:52.551843   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:52.551860   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:52.650710   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:52.650750   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
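	Every "Gathering logs for describe nodes" attempt in these cycles fails the same way: the bundled v1.20.0 kubectl targets the apiserver on localhost:8443 via /var/lib/minikube/kubeconfig, but no kube-apiserver container exists (all crictl probes return empty), so the connection is refused and the command exits with status 1. Two quick checks on the node separate the failure modes; the pgrep and crictl commands are the ones already used above, while the /healthz probe is an assumption based on the apiserver's standard health endpoint, not something captured in this log:

	    # Is any apiserver process or container present on the node?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # Is anything actually listening on the expected port? (assumed standard /healthz endpoint)
	    curl -ksS https://localhost:8443/healthz || echo "apiserver not reachable on localhost:8443"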
	I0401 19:33:53.326050   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:55.327326   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:52.607062   70284 pod_ready.go:92] pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.607082   70284 pod_ready.go:81] duration metric: took 43.516413537s for pod "coredns-7db6d8ff4d-gdml5" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.607091   70284 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.628695   70284 pod_ready.go:92] pod "etcd-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.628725   70284 pod_ready.go:81] duration metric: took 21.625468ms for pod "etcd-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.628739   70284 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.643017   70284 pod_ready.go:92] pod "kube-apiserver-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.643044   70284 pod_ready.go:81] duration metric: took 14.296056ms for pod "kube-apiserver-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.643058   70284 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.649063   70284 pod_ready.go:92] pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.649091   70284 pod_ready.go:81] duration metric: took 6.024238ms for pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.649105   70284 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7c22p" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.654806   70284 pod_ready.go:92] pod "kube-proxy-7c22p" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.654829   70284 pod_ready.go:81] duration metric: took 5.709865ms for pod "kube-proxy-7c22p" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.654840   70284 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.997116   70284 pod_ready.go:92] pod "kube-scheduler-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:33:52.997139   70284 pod_ready.go:81] duration metric: took 342.291727ms for pod "kube-scheduler-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:52.997148   70284 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace to be "Ready" ...
	I0401 19:33:55.004130   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
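	In the parallel run logged by PID 70284, coredns-7db6d8ff4d-gdml5 turns Ready after roughly 43.5s, the etcd, kube-apiserver, kube-controller-manager, kube-proxy, and kube-scheduler pods each clear their readiness checks within milliseconds, and the wait then blocks on metrics-server-569cc877fc-wlr7k, which stays NotReady for the rest of this window. A hand-run equivalent of that readiness poll is sketched below; the context name no-preload-472858 is inferred from the pod names in the log, so treat it as an assumption:

	    # Print the Ready condition of the pod the test is polling; the test
	    # loops until this prints "True" or its 4m0s timeout expires.
	    kubectl --context no-preload-472858 -n kube-system get pod metrics-server-569cc877fc-wlr7k \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'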
	I0401 19:33:53.608064   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:56.106148   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:55.205689   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:55.222840   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:55.222901   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:55.263783   71168 cri.go:89] found id: ""
	I0401 19:33:55.263813   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.263820   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:55.263828   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:55.263883   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:55.300788   71168 cri.go:89] found id: ""
	I0401 19:33:55.300818   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.300826   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:55.300834   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:55.300888   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:55.343189   71168 cri.go:89] found id: ""
	I0401 19:33:55.343215   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.343223   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:55.343229   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:55.343286   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:55.387560   71168 cri.go:89] found id: ""
	I0401 19:33:55.387587   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.387597   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:55.387604   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:55.387663   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:55.428078   71168 cri.go:89] found id: ""
	I0401 19:33:55.428103   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.428112   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:55.428119   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:55.428181   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:55.472696   71168 cri.go:89] found id: ""
	I0401 19:33:55.472722   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.472734   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:55.472741   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:55.472797   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:55.518071   71168 cri.go:89] found id: ""
	I0401 19:33:55.518115   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.518126   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:55.518136   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:55.518201   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:55.555697   71168 cri.go:89] found id: ""
	I0401 19:33:55.555717   71168 logs.go:276] 0 containers: []
	W0401 19:33:55.555724   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:55.555732   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:55.555747   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:55.637462   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:55.637492   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:55.682353   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:55.682380   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:55.735451   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:55.735484   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:33:55.750928   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:55.750954   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:55.824610   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:57.328228   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:59.826213   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:57.005395   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:59.505575   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:01.506107   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:58.106643   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:00.606864   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:33:58.325742   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:33:58.341022   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:33:58.341092   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:33:58.380910   71168 cri.go:89] found id: ""
	I0401 19:33:58.380932   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.380940   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:33:58.380946   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:33:58.380990   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:33:58.420387   71168 cri.go:89] found id: ""
	I0401 19:33:58.420413   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.420425   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:33:58.420431   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:33:58.420479   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:33:58.460470   71168 cri.go:89] found id: ""
	I0401 19:33:58.460501   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.460511   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:33:58.460520   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:33:58.460580   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:33:58.496844   71168 cri.go:89] found id: ""
	I0401 19:33:58.496867   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.496875   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:33:58.496881   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:33:58.496930   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:33:58.535883   71168 cri.go:89] found id: ""
	I0401 19:33:58.535905   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.535915   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:33:58.535922   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:33:58.535979   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:33:58.576833   71168 cri.go:89] found id: ""
	I0401 19:33:58.576855   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.576863   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:33:58.576869   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:33:58.576913   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:33:58.615057   71168 cri.go:89] found id: ""
	I0401 19:33:58.615081   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.615091   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:33:58.615098   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:33:58.615156   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:33:58.657982   71168 cri.go:89] found id: ""
	I0401 19:33:58.658008   71168 logs.go:276] 0 containers: []
	W0401 19:33:58.658018   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:33:58.658028   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:33:58.658045   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:33:58.734579   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:33:58.734601   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:33:58.734616   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:33:58.821779   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:33:58.821819   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:33:58.894470   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:33:58.894506   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:33:58.949854   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:33:58.949884   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:01.465820   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:01.481929   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:01.481984   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:01.525371   71168 cri.go:89] found id: ""
	I0401 19:34:01.525397   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.525407   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:01.525415   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:01.525473   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:01.571106   71168 cri.go:89] found id: ""
	I0401 19:34:01.571136   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.571146   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:01.571153   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:01.571214   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:01.617666   71168 cri.go:89] found id: ""
	I0401 19:34:01.617705   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.617717   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:01.617725   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:01.617787   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:01.655286   71168 cri.go:89] found id: ""
	I0401 19:34:01.655311   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.655321   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:01.655328   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:01.655396   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:01.694911   71168 cri.go:89] found id: ""
	I0401 19:34:01.694940   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.694950   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:01.694957   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:01.695040   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:01.734970   71168 cri.go:89] found id: ""
	I0401 19:34:01.734996   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.735007   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:01.735014   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:01.735071   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:01.778846   71168 cri.go:89] found id: ""
	I0401 19:34:01.778871   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.778879   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:01.778885   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:01.778958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:01.821934   71168 cri.go:89] found id: ""
	I0401 19:34:01.821964   71168 logs.go:276] 0 containers: []
	W0401 19:34:01.821975   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:01.821986   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:01.822002   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:01.880123   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:01.880155   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:01.895178   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:01.895200   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:01.972248   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:01.972275   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:01.972290   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:02.056663   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:02.056694   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:02.325323   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:04.326474   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:06.327583   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:04.004061   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:06.004176   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:02.608516   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:05.108477   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:04.603745   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:04.619269   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:04.619344   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:04.658089   71168 cri.go:89] found id: ""
	I0401 19:34:04.658111   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.658118   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:04.658123   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:04.658168   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:04.700596   71168 cri.go:89] found id: ""
	I0401 19:34:04.700622   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.700634   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:04.700641   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:04.700708   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:04.744960   71168 cri.go:89] found id: ""
	I0401 19:34:04.744990   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.744999   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:04.745004   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:04.745052   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:04.788239   71168 cri.go:89] found id: ""
	I0401 19:34:04.788264   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.788272   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:04.788278   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:04.788343   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:04.830788   71168 cri.go:89] found id: ""
	I0401 19:34:04.830812   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.830850   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:04.830859   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:04.830917   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:04.889784   71168 cri.go:89] found id: ""
	I0401 19:34:04.889815   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.889826   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:04.889834   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:04.889902   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:04.931969   71168 cri.go:89] found id: ""
	I0401 19:34:04.931996   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.932004   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:04.932010   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:04.932058   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:04.975668   71168 cri.go:89] found id: ""
	I0401 19:34:04.975689   71168 logs.go:276] 0 containers: []
	W0401 19:34:04.975696   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:04.975704   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:04.975715   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:05.032212   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:05.032246   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:05.047900   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:05.047924   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:05.132371   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:05.132394   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:05.132408   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:05.222591   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:05.222623   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:07.767686   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:07.784473   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:07.784542   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:07.828460   71168 cri.go:89] found id: ""
	I0401 19:34:07.828487   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.828498   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:07.828505   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:07.828564   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:07.872760   71168 cri.go:89] found id: ""
	I0401 19:34:07.872786   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.872797   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:07.872804   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:07.872862   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:07.914241   71168 cri.go:89] found id: ""
	I0401 19:34:07.914263   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.914271   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:07.914276   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:07.914340   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:07.953757   71168 cri.go:89] found id: ""
	I0401 19:34:07.953784   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.953795   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:07.953803   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:07.953869   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:08.825113   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:10.827081   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:08.504038   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:10.508973   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:07.608037   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:10.110321   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:07.994382   71168 cri.go:89] found id: ""
	I0401 19:34:07.994401   71168 logs.go:276] 0 containers: []
	W0401 19:34:07.994409   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:07.994414   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:07.994459   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:08.038178   71168 cri.go:89] found id: ""
	I0401 19:34:08.038202   71168 logs.go:276] 0 containers: []
	W0401 19:34:08.038213   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:08.038220   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:08.038282   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:08.077532   71168 cri.go:89] found id: ""
	I0401 19:34:08.077562   71168 logs.go:276] 0 containers: []
	W0401 19:34:08.077573   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:08.077580   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:08.077657   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:08.119825   71168 cri.go:89] found id: ""
	I0401 19:34:08.119845   71168 logs.go:276] 0 containers: []
	W0401 19:34:08.119855   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:08.119865   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:08.119878   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:08.207688   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:08.207724   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:08.253050   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:08.253085   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:08.309119   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:08.309152   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:08.325675   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:08.325704   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:08.410877   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:10.911211   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:10.925590   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:10.925657   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:10.964180   71168 cri.go:89] found id: ""
	I0401 19:34:10.964205   71168 logs.go:276] 0 containers: []
	W0401 19:34:10.964216   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:10.964224   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:10.964273   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:11.004492   71168 cri.go:89] found id: ""
	I0401 19:34:11.004515   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.004526   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:11.004533   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:11.004588   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:11.048771   71168 cri.go:89] found id: ""
	I0401 19:34:11.048792   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.048804   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:11.048810   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:11.048861   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:11.084956   71168 cri.go:89] found id: ""
	I0401 19:34:11.084982   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.084992   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:11.084999   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:11.085043   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:11.128194   71168 cri.go:89] found id: ""
	I0401 19:34:11.128218   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.128225   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:11.128230   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:11.128274   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:11.169884   71168 cri.go:89] found id: ""
	I0401 19:34:11.169908   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.169918   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:11.169925   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:11.169988   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:11.213032   71168 cri.go:89] found id: ""
	I0401 19:34:11.213066   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.213077   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:11.213084   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:11.213149   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:11.258391   71168 cri.go:89] found id: ""
	I0401 19:34:11.258414   71168 logs.go:276] 0 containers: []
	W0401 19:34:11.258422   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:11.258429   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:11.258445   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:11.341297   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:11.341328   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:11.388628   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:11.388659   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:11.442300   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:11.442326   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:11.457531   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:11.457561   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:11.561556   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:13.324598   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:15.325464   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:13.005005   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:15.505216   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:12.607201   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:14.607580   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:17.107659   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:14.062670   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:14.077384   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:14.077449   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:14.119421   71168 cri.go:89] found id: ""
	I0401 19:34:14.119444   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.119455   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:14.119462   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:14.119518   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:14.158762   71168 cri.go:89] found id: ""
	I0401 19:34:14.158783   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.158798   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:14.158805   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:14.158867   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:14.197024   71168 cri.go:89] found id: ""
	I0401 19:34:14.197052   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.197060   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:14.197065   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:14.197115   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:14.235976   71168 cri.go:89] found id: ""
	I0401 19:34:14.236004   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.236015   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:14.236021   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:14.236085   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:14.280596   71168 cri.go:89] found id: ""
	I0401 19:34:14.280623   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.280635   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:14.280642   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:14.280703   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:14.322196   71168 cri.go:89] found id: ""
	I0401 19:34:14.322219   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.322230   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:14.322239   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:14.322298   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:14.364572   71168 cri.go:89] found id: ""
	I0401 19:34:14.364596   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.364607   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:14.364615   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:14.364662   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:14.406043   71168 cri.go:89] found id: ""
	I0401 19:34:14.406066   71168 logs.go:276] 0 containers: []
	W0401 19:34:14.406072   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:14.406082   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:14.406097   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:14.461841   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:14.461870   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:14.479960   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:14.479990   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:14.557039   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:14.557058   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:14.557070   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:14.641945   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:14.641975   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:17.192681   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:17.207913   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:17.207964   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:17.245596   71168 cri.go:89] found id: ""
	I0401 19:34:17.245618   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.245625   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:17.245630   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:17.245701   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:17.310845   71168 cri.go:89] found id: ""
	I0401 19:34:17.310875   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.310887   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:17.310894   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:17.310958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:17.367726   71168 cri.go:89] found id: ""
	I0401 19:34:17.367753   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.367764   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:17.367770   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:17.367833   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:17.410807   71168 cri.go:89] found id: ""
	I0401 19:34:17.410834   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.410842   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:17.410847   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:17.410892   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:17.448242   71168 cri.go:89] found id: ""
	I0401 19:34:17.448268   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.448278   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:17.448285   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:17.448337   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:17.486552   71168 cri.go:89] found id: ""
	I0401 19:34:17.486580   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.486590   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:17.486595   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:17.486644   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:17.529947   71168 cri.go:89] found id: ""
	I0401 19:34:17.529975   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.529986   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:17.529993   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:17.530052   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:17.571617   71168 cri.go:89] found id: ""
	I0401 19:34:17.571640   71168 logs.go:276] 0 containers: []
	W0401 19:34:17.571648   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:17.571656   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:17.571673   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:17.627326   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:17.627354   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:17.643409   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:17.643431   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:17.723772   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:17.723798   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:17.723811   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:17.803383   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:17.803414   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
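	(Editorial sketch, not part of the captured log.) The cycle above repeats for the rest of this start-up attempt: minikube probes each control-plane component with crictl, finds no containers, then gathers kubelet, dmesg, CRI-O and container-status output before retrying. A minimal shell sketch of the same probe-and-gather loop, runnable directly on the node and built only from the commands shown verbatim in the log (it assumes crictl and sudo access, nothing else), looks roughly like this:

	    #!/usr/bin/env bash
	    # Hedged reconstruction of the probe loop visible in the log above.
	    # Component names and the crictl invocation are copied from the gathered commands.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      if [ -z "$(sudo crictl ps -a --quiet --name="${name}")" ]; then
	        echo "No container was found matching \"${name}\""
	      fi
	    done
	    # The same log-gathering commands minikube then issues over SSH:
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u crio -n 400
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a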
	I0401 19:34:17.325836   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:19.328447   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:17.509486   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:20.004341   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:19.606840   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:21.607646   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:20.348949   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:20.363311   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:20.363385   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:20.401558   71168 cri.go:89] found id: ""
	I0401 19:34:20.401585   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.401595   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:20.401603   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:20.401686   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:20.445979   71168 cri.go:89] found id: ""
	I0401 19:34:20.446004   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.446011   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:20.446016   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:20.446060   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:20.487819   71168 cri.go:89] found id: ""
	I0401 19:34:20.487844   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.487854   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:20.487862   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:20.487921   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:20.532107   71168 cri.go:89] found id: ""
	I0401 19:34:20.532131   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.532154   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:20.532186   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:20.532247   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:20.577727   71168 cri.go:89] found id: ""
	I0401 19:34:20.577749   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.577756   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:20.577762   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:20.577841   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:20.616774   71168 cri.go:89] found id: ""
	I0401 19:34:20.616805   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.616816   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:20.616824   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:20.616887   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:20.656122   71168 cri.go:89] found id: ""
	I0401 19:34:20.656150   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.656160   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:20.656167   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:20.656226   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:20.701249   71168 cri.go:89] found id: ""
	I0401 19:34:20.701274   71168 logs.go:276] 0 containers: []
	W0401 19:34:20.701285   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:20.701295   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:20.701310   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:20.746979   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:20.747003   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:20.799197   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:20.799226   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:20.815771   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:20.815808   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:20.895179   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:20.895202   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:20.895218   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:21.826671   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:24.325896   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:26.326569   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:22.503727   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:24.503877   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:26.506643   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:24.107702   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:26.607285   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:23.481911   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:23.496820   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:23.496889   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:23.538292   71168 cri.go:89] found id: ""
	I0401 19:34:23.538314   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.538322   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:23.538327   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:23.538372   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:23.579171   71168 cri.go:89] found id: ""
	I0401 19:34:23.579200   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.579209   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:23.579214   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:23.579269   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:23.620377   71168 cri.go:89] found id: ""
	I0401 19:34:23.620399   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.620410   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:23.620417   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:23.620477   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:23.663309   71168 cri.go:89] found id: ""
	I0401 19:34:23.663329   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.663337   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:23.663342   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:23.663392   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:23.702724   71168 cri.go:89] found id: ""
	I0401 19:34:23.702755   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.702772   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:23.702778   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:23.702836   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:23.742797   71168 cri.go:89] found id: ""
	I0401 19:34:23.742827   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.742837   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:23.742845   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:23.742913   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:23.781299   71168 cri.go:89] found id: ""
	I0401 19:34:23.781350   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.781367   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:23.781375   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:23.781440   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:23.828244   71168 cri.go:89] found id: ""
	I0401 19:34:23.828270   71168 logs.go:276] 0 containers: []
	W0401 19:34:23.828277   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:23.828284   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:23.828298   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:23.914758   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:23.914782   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:23.914797   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:23.993300   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:23.993332   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:24.037388   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:24.037424   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:24.090157   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:24.090198   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:26.609062   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:26.624241   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:26.624309   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:26.665813   71168 cri.go:89] found id: ""
	I0401 19:34:26.665840   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.665848   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:26.665857   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:26.665917   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:26.709571   71168 cri.go:89] found id: ""
	I0401 19:34:26.709593   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.709600   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:26.709606   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:26.709680   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:26.757286   71168 cri.go:89] found id: ""
	I0401 19:34:26.757309   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.757319   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:26.757325   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:26.757386   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:26.795715   71168 cri.go:89] found id: ""
	I0401 19:34:26.795768   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.795781   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:26.795788   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:26.795839   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:26.835985   71168 cri.go:89] found id: ""
	I0401 19:34:26.836011   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.836022   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:26.836029   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:26.836094   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:26.878890   71168 cri.go:89] found id: ""
	I0401 19:34:26.878918   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.878929   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:26.878936   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:26.878991   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:26.920161   71168 cri.go:89] found id: ""
	I0401 19:34:26.920189   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.920199   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:26.920206   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:26.920262   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:26.961597   71168 cri.go:89] found id: ""
	I0401 19:34:26.961626   71168 logs.go:276] 0 containers: []
	W0401 19:34:26.961637   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:26.961663   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:26.961679   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:27.019814   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:27.019847   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:27.035535   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:27.035564   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:27.111755   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:27.111776   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:27.111790   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:27.194932   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:27.194964   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:28.827702   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:31.325488   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:29.005830   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:31.007294   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:29.107097   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:31.109807   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:29.738592   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:29.752851   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:29.752913   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:29.791808   71168 cri.go:89] found id: ""
	I0401 19:34:29.791863   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.791875   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:29.791883   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:29.791944   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:29.836113   71168 cri.go:89] found id: ""
	I0401 19:34:29.836132   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.836139   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:29.836144   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:29.836200   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:29.879005   71168 cri.go:89] found id: ""
	I0401 19:34:29.879039   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.879050   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:29.879059   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:29.879122   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:29.919349   71168 cri.go:89] found id: ""
	I0401 19:34:29.919383   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.919394   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:29.919400   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:29.919454   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:29.957252   71168 cri.go:89] found id: ""
	I0401 19:34:29.957275   71168 logs.go:276] 0 containers: []
	W0401 19:34:29.957287   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:29.957294   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:29.957354   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:30.003220   71168 cri.go:89] found id: ""
	I0401 19:34:30.003245   71168 logs.go:276] 0 containers: []
	W0401 19:34:30.003256   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:30.003263   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:30.003311   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:30.043873   71168 cri.go:89] found id: ""
	I0401 19:34:30.043900   71168 logs.go:276] 0 containers: []
	W0401 19:34:30.043921   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:30.043928   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:30.043989   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:30.082215   71168 cri.go:89] found id: ""
	I0401 19:34:30.082242   71168 logs.go:276] 0 containers: []
	W0401 19:34:30.082253   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:30.082263   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:30.082277   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:30.098676   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:30.098701   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:30.180857   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:30.180879   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:30.180897   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:30.269982   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:30.270016   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:30.317933   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:30.317967   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:32.874312   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:32.888687   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:32.888742   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:32.926222   71168 cri.go:89] found id: ""
	I0401 19:34:32.926244   71168 logs.go:276] 0 containers: []
	W0401 19:34:32.926252   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:32.926257   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:32.926307   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:32.964838   71168 cri.go:89] found id: ""
	I0401 19:34:32.964858   71168 logs.go:276] 0 containers: []
	W0401 19:34:32.964865   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:32.964870   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:32.964914   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:33.327670   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:35.826387   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:33.504338   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:36.005240   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:33.606596   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:35.607014   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:33.006903   71168 cri.go:89] found id: ""
	I0401 19:34:33.006920   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.006927   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:33.006933   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:33.006983   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:33.045663   71168 cri.go:89] found id: ""
	I0401 19:34:33.045691   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.045701   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:33.045709   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:33.045770   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:33.086262   71168 cri.go:89] found id: ""
	I0401 19:34:33.086290   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.086298   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:33.086303   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:33.086368   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:33.128302   71168 cri.go:89] found id: ""
	I0401 19:34:33.128327   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.128335   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:33.128341   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:33.128402   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:33.171155   71168 cri.go:89] found id: ""
	I0401 19:34:33.171189   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.171200   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:33.171207   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:33.171270   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:33.210793   71168 cri.go:89] found id: ""
	I0401 19:34:33.210820   71168 logs.go:276] 0 containers: []
	W0401 19:34:33.210838   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:33.210848   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:33.210870   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:33.295035   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:33.295072   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:33.345381   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:33.345417   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:33.401082   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:33.401120   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:33.417029   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:33.417055   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:33.497027   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:35.997632   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:36.013106   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:36.013161   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:36.053013   71168 cri.go:89] found id: ""
	I0401 19:34:36.053040   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.053050   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:36.053059   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:36.053116   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:36.092268   71168 cri.go:89] found id: ""
	I0401 19:34:36.092297   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.092308   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:36.092315   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:36.092389   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:36.131347   71168 cri.go:89] found id: ""
	I0401 19:34:36.131391   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.131402   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:36.131409   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:36.131468   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:36.171402   71168 cri.go:89] found id: ""
	I0401 19:34:36.171432   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.171443   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:36.171449   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:36.171511   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:36.211239   71168 cri.go:89] found id: ""
	I0401 19:34:36.211272   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.211283   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:36.211290   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:36.211354   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:36.251246   71168 cri.go:89] found id: ""
	I0401 19:34:36.251275   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.251287   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:36.251294   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:36.251354   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:36.293140   71168 cri.go:89] found id: ""
	I0401 19:34:36.293162   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.293169   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:36.293174   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:36.293231   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:36.330281   71168 cri.go:89] found id: ""
	I0401 19:34:36.330308   71168 logs.go:276] 0 containers: []
	W0401 19:34:36.330318   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:36.330328   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:36.330342   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:36.421753   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:36.421790   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:36.467555   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:36.467581   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:36.524747   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:36.524778   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:36.540946   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:36.540976   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:36.622452   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
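	(Editorial sketch, not part of the captured log.) Every "describe nodes" attempt in this window fails identically because nothing is listening on localhost:8443, which matches the empty crictl results for kube-apiserver above. A quick manual check one could run on the node is sketched below; the /healthz endpoint is the standard apiserver health check, and using it here is an assumption since the server never comes up in this run:

	    # Assumed manual check; expected to fail with "connection refused"
	    # while the kube-apiserver container is missing, as the log shows.
	    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable on :8443"
	    sudo crictl ps -a --quiet --name=kube-apiserver   # empty output in this run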
	I0401 19:34:38.326341   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:40.327267   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:38.503641   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:40.504555   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:38.107732   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:40.608535   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:39.122969   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:39.139092   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:39.139157   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:39.177337   71168 cri.go:89] found id: ""
	I0401 19:34:39.177368   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.177379   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:39.177387   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:39.177449   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:39.216471   71168 cri.go:89] found id: ""
	I0401 19:34:39.216498   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.216507   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:39.216512   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:39.216558   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:39.255526   71168 cri.go:89] found id: ""
	I0401 19:34:39.255550   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.255557   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:39.255563   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:39.255623   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:39.294682   71168 cri.go:89] found id: ""
	I0401 19:34:39.294711   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.294723   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:39.294735   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:39.294798   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:39.337416   71168 cri.go:89] found id: ""
	I0401 19:34:39.337437   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.337444   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:39.337449   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:39.337510   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:39.384560   71168 cri.go:89] found id: ""
	I0401 19:34:39.384586   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.384598   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:39.384608   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:39.384671   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:39.421459   71168 cri.go:89] found id: ""
	I0401 19:34:39.421480   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.421488   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:39.421493   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:39.421540   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:39.460221   71168 cri.go:89] found id: ""
	I0401 19:34:39.460246   71168 logs.go:276] 0 containers: []
	W0401 19:34:39.460256   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:39.460264   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:39.460275   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:39.543800   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:39.543835   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:39.591012   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:39.591038   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:39.645994   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:39.646025   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:39.662223   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:39.662250   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:39.741574   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:42.242541   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:42.256933   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:42.257006   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:42.294268   71168 cri.go:89] found id: ""
	I0401 19:34:42.294297   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.294308   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:42.294315   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:42.294370   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:42.331978   71168 cri.go:89] found id: ""
	I0401 19:34:42.331999   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.332005   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:42.332013   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:42.332078   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:42.369858   71168 cri.go:89] found id: ""
	I0401 19:34:42.369885   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.369895   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:42.369903   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:42.369989   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:42.412688   71168 cri.go:89] found id: ""
	I0401 19:34:42.412708   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.412715   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:42.412720   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:42.412776   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:42.449180   71168 cri.go:89] found id: ""
	I0401 19:34:42.449209   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.449217   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:42.449225   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:42.449283   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:42.488582   71168 cri.go:89] found id: ""
	I0401 19:34:42.488606   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.488613   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:42.488618   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:42.488665   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:42.527883   71168 cri.go:89] found id: ""
	I0401 19:34:42.527915   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.527924   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:42.527931   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:42.527993   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:42.564372   71168 cri.go:89] found id: ""
	I0401 19:34:42.564394   71168 logs.go:276] 0 containers: []
	W0401 19:34:42.564401   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:42.564408   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:42.564419   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:42.646940   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:42.646974   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:42.689323   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:42.689354   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:42.744996   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:42.745024   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:42.761404   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:42.761429   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:42.836643   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:42.825895   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:45.325856   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:42.504642   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:45.004315   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:43.110114   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:45.607093   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:45.337809   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:45.352936   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:45.353029   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:45.395073   71168 cri.go:89] found id: ""
	I0401 19:34:45.395098   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.395106   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:45.395112   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:45.395160   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:45.433537   71168 cri.go:89] found id: ""
	I0401 19:34:45.433567   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.433578   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:45.433586   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:45.433658   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:45.477108   71168 cri.go:89] found id: ""
	I0401 19:34:45.477138   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.477150   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:45.477157   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:45.477217   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:45.520350   71168 cri.go:89] found id: ""
	I0401 19:34:45.520389   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.520401   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:45.520408   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:45.520466   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:45.562871   71168 cri.go:89] found id: ""
	I0401 19:34:45.562901   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.562911   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:45.562918   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:45.562988   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:45.619214   71168 cri.go:89] found id: ""
	I0401 19:34:45.619237   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.619248   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:45.619255   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:45.619317   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:45.664361   71168 cri.go:89] found id: ""
	I0401 19:34:45.664387   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.664398   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:45.664405   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:45.664463   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:45.701087   71168 cri.go:89] found id: ""
	I0401 19:34:45.701110   71168 logs.go:276] 0 containers: []
	W0401 19:34:45.701120   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:45.701128   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:45.701139   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:45.716839   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:45.716863   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:45.794609   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:45.794630   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:45.794642   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:45.883428   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:45.883464   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:45.934342   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:45.934374   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:47.825597   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:50.326528   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:47.505036   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:49.505287   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:51.505884   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:47.609038   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:50.106705   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:52.107802   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:48.492128   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:48.508674   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:48.508746   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:48.549522   71168 cri.go:89] found id: ""
	I0401 19:34:48.549545   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.549555   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:48.549561   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:48.549619   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:48.587014   71168 cri.go:89] found id: ""
	I0401 19:34:48.587037   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.587045   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:48.587051   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:48.587108   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:48.629591   71168 cri.go:89] found id: ""
	I0401 19:34:48.629620   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.629630   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:48.629636   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:48.629707   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:48.669335   71168 cri.go:89] found id: ""
	I0401 19:34:48.669363   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.669383   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:48.669400   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:48.669455   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:48.708322   71168 cri.go:89] found id: ""
	I0401 19:34:48.708350   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.708356   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:48.708362   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:48.708407   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:48.750680   71168 cri.go:89] found id: ""
	I0401 19:34:48.750708   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.750718   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:48.750726   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:48.750791   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:48.790946   71168 cri.go:89] found id: ""
	I0401 19:34:48.790974   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.790984   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:48.790998   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:48.791055   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:48.828849   71168 cri.go:89] found id: ""
	I0401 19:34:48.828871   71168 logs.go:276] 0 containers: []
	W0401 19:34:48.828880   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:48.828889   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:48.828904   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:48.909182   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:48.909212   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:48.954285   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:48.954315   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:49.010340   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:49.010372   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:49.026493   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:49.026516   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:49.099662   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:51.599905   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:51.618094   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:51.618168   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:51.657003   71168 cri.go:89] found id: ""
	I0401 19:34:51.657028   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.657038   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:51.657046   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:51.657104   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:51.696415   71168 cri.go:89] found id: ""
	I0401 19:34:51.696441   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.696451   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:51.696456   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:51.696515   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:51.734416   71168 cri.go:89] found id: ""
	I0401 19:34:51.734445   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.734457   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:51.734465   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:51.734523   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:51.774895   71168 cri.go:89] found id: ""
	I0401 19:34:51.774918   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.774925   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:51.774931   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:51.774980   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:51.814602   71168 cri.go:89] found id: ""
	I0401 19:34:51.814623   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.814631   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:51.814637   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:51.814687   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:51.856035   71168 cri.go:89] found id: ""
	I0401 19:34:51.856061   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.856071   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:51.856078   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:51.856132   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:51.897415   71168 cri.go:89] found id: ""
	I0401 19:34:51.897440   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.897451   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:51.897457   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:51.897516   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:51.937406   71168 cri.go:89] found id: ""
	I0401 19:34:51.937428   71168 logs.go:276] 0 containers: []
	W0401 19:34:51.937436   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:51.937443   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:51.937456   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:51.981508   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:51.981535   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:52.039956   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:52.039995   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:52.066403   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:52.066429   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:52.172509   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:52.172530   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:52.172541   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:52.827950   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:55.331369   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:54.004625   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:56.503197   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:54.607359   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:57.108257   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:54.761459   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:54.776972   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:54.777030   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:54.822945   71168 cri.go:89] found id: ""
	I0401 19:34:54.822983   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.822996   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:54.823004   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:54.823066   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:54.861602   71168 cri.go:89] found id: ""
	I0401 19:34:54.861629   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.861639   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:54.861662   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:54.861727   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:54.901283   71168 cri.go:89] found id: ""
	I0401 19:34:54.901309   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.901319   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:54.901327   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:54.901385   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:54.940071   71168 cri.go:89] found id: ""
	I0401 19:34:54.940103   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.940114   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:54.940121   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:54.940179   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:54.978447   71168 cri.go:89] found id: ""
	I0401 19:34:54.978474   71168 logs.go:276] 0 containers: []
	W0401 19:34:54.978485   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:54.978493   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:54.978563   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:55.021786   71168 cri.go:89] found id: ""
	I0401 19:34:55.021810   71168 logs.go:276] 0 containers: []
	W0401 19:34:55.021819   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:55.021827   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:55.021886   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:55.059861   71168 cri.go:89] found id: ""
	I0401 19:34:55.059889   71168 logs.go:276] 0 containers: []
	W0401 19:34:55.059899   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:55.059907   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:55.059963   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:55.104484   71168 cri.go:89] found id: ""
	I0401 19:34:55.104516   71168 logs.go:276] 0 containers: []
	W0401 19:34:55.104527   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:55.104537   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:55.104551   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:55.152197   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:55.152221   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:55.203900   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:55.203942   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:55.221553   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:55.221580   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:55.299651   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:34:55.299668   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:55.299680   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:57.877382   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:34:57.899186   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:34:57.899260   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:34:57.948146   71168 cri.go:89] found id: ""
	I0401 19:34:57.948182   71168 logs.go:276] 0 containers: []
	W0401 19:34:57.948192   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:34:57.948203   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:34:57.948270   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:34:57.826282   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:59.826598   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:58.504492   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:01.003480   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:59.607646   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:02.107162   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:34:58.017121   71168 cri.go:89] found id: ""
	I0401 19:34:58.017150   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.017161   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:34:58.017168   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:34:58.017230   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:34:58.073881   71168 cri.go:89] found id: ""
	I0401 19:34:58.073905   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.073916   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:34:58.073923   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:34:58.073979   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:34:58.115410   71168 cri.go:89] found id: ""
	I0401 19:34:58.115435   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.115445   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:34:58.115452   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:34:58.115512   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:34:58.155452   71168 cri.go:89] found id: ""
	I0401 19:34:58.155481   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.155492   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:34:58.155500   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:34:58.155562   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:34:58.197335   71168 cri.go:89] found id: ""
	I0401 19:34:58.197376   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.197397   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:34:58.197407   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:34:58.197469   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:34:58.239782   71168 cri.go:89] found id: ""
	I0401 19:34:58.239808   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.239815   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:34:58.239820   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:34:58.239870   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:34:58.280936   71168 cri.go:89] found id: ""
	I0401 19:34:58.280961   71168 logs.go:276] 0 containers: []
	W0401 19:34:58.280971   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:34:58.280982   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:34:58.280998   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:34:58.368357   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:34:58.368401   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:34:58.415104   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:34:58.415132   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:34:58.474719   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:34:58.474749   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:34:58.491004   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:34:58.491031   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:34:58.573999   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:01.074865   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:01.091751   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:01.091822   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:01.140053   71168 cri.go:89] found id: ""
	I0401 19:35:01.140079   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.140089   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:01.140096   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:01.140154   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:01.184046   71168 cri.go:89] found id: ""
	I0401 19:35:01.184078   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.184089   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:01.184096   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:01.184161   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:01.225962   71168 cri.go:89] found id: ""
	I0401 19:35:01.225989   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.225999   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:01.226006   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:01.226072   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:01.267212   71168 cri.go:89] found id: ""
	I0401 19:35:01.267234   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.267242   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:01.267247   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:01.267308   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:01.307039   71168 cri.go:89] found id: ""
	I0401 19:35:01.307066   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.307074   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:01.307080   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:01.307132   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:01.347856   71168 cri.go:89] found id: ""
	I0401 19:35:01.347886   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.347898   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:01.347905   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:01.347962   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:01.385893   71168 cri.go:89] found id: ""
	I0401 19:35:01.385923   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.385933   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:01.385940   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:01.385999   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:01.422983   71168 cri.go:89] found id: ""
	I0401 19:35:01.423012   71168 logs.go:276] 0 containers: []
	W0401 19:35:01.423022   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:01.423033   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:01.423048   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:01.469842   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:01.469875   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:01.527536   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:01.527566   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:01.542332   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:01.542357   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:01.617252   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:01.617270   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:01.617284   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:02.325502   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:04.326603   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:06.328115   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:03.005979   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:05.504470   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:04.107681   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:06.607619   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:04.195171   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:04.211963   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:04.212015   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:04.252298   71168 cri.go:89] found id: ""
	I0401 19:35:04.252324   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.252334   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:04.252342   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:04.252396   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:04.299619   71168 cri.go:89] found id: ""
	I0401 19:35:04.299649   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.299659   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:04.299667   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:04.299725   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:04.347386   71168 cri.go:89] found id: ""
	I0401 19:35:04.347409   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.347416   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:04.347426   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:04.347473   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:04.385902   71168 cri.go:89] found id: ""
	I0401 19:35:04.385929   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.385937   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:04.385943   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:04.385993   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:04.425235   71168 cri.go:89] found id: ""
	I0401 19:35:04.425258   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.425266   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:04.425271   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:04.425325   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:04.463849   71168 cri.go:89] found id: ""
	I0401 19:35:04.463881   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.463891   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:04.463899   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:04.463974   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:04.501983   71168 cri.go:89] found id: ""
	I0401 19:35:04.502003   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.502010   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:04.502016   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:04.502072   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:04.544082   71168 cri.go:89] found id: ""
	I0401 19:35:04.544103   71168 logs.go:276] 0 containers: []
	W0401 19:35:04.544113   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:04.544124   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:04.544141   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:04.600545   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:04.600578   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:04.617049   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:04.617075   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:04.696927   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:04.696945   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:04.696957   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:04.780024   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:04.780056   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:07.323161   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:07.339368   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:07.339432   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:07.379407   71168 cri.go:89] found id: ""
	I0401 19:35:07.379429   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.379440   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:07.379452   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:07.379497   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:07.418700   71168 cri.go:89] found id: ""
	I0401 19:35:07.418728   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.418737   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:07.418743   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:07.418788   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:07.457580   71168 cri.go:89] found id: ""
	I0401 19:35:07.457606   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.457617   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:07.457624   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:07.457696   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:07.498211   71168 cri.go:89] found id: ""
	I0401 19:35:07.498240   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.498249   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:07.498256   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:07.498318   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:07.539659   71168 cri.go:89] found id: ""
	I0401 19:35:07.539681   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.539692   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:07.539699   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:07.539759   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:07.577414   71168 cri.go:89] found id: ""
	I0401 19:35:07.577440   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.577450   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:07.577456   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:07.577520   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:07.623318   71168 cri.go:89] found id: ""
	I0401 19:35:07.623340   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.623352   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:07.623358   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:07.623416   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:07.664791   71168 cri.go:89] found id: ""
	I0401 19:35:07.664823   71168 logs.go:276] 0 containers: []
	W0401 19:35:07.664834   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:07.664842   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:07.664854   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:07.722158   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:07.722186   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:07.737838   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:07.737876   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:07.813694   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:07.813717   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:07.813728   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:07.899698   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:07.899740   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:08.825778   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:10.825935   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:07.505933   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:10.003529   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:09.107076   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:11.108917   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:10.446184   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:10.460860   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:10.460927   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:10.505656   71168 cri.go:89] found id: ""
	I0401 19:35:10.505685   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.505692   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:10.505698   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:10.505742   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:10.547771   71168 cri.go:89] found id: ""
	I0401 19:35:10.547796   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.547814   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:10.547820   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:10.547876   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:10.584625   71168 cri.go:89] found id: ""
	I0401 19:35:10.584652   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.584664   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:10.584671   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:10.584737   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:10.625512   71168 cri.go:89] found id: ""
	I0401 19:35:10.625541   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.625552   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:10.625559   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:10.625618   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:10.664905   71168 cri.go:89] found id: ""
	I0401 19:35:10.664936   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.664949   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:10.664955   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:10.665015   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:10.703043   71168 cri.go:89] found id: ""
	I0401 19:35:10.703071   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.703082   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:10.703090   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:10.703149   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:10.747750   71168 cri.go:89] found id: ""
	I0401 19:35:10.747777   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.747790   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:10.747796   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:10.747841   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:10.792944   71168 cri.go:89] found id: ""
	I0401 19:35:10.792970   71168 logs.go:276] 0 containers: []
	W0401 19:35:10.792980   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:10.792989   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:10.793004   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:10.854029   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:10.854058   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:10.868968   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:10.868991   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:10.940537   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:10.940564   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:10.940579   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:11.018201   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:11.018231   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:12.826117   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:14.826387   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:12.003995   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:14.503258   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:16.504686   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:13.608777   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:16.108992   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:13.562139   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:13.579370   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:13.579435   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:13.620811   71168 cri.go:89] found id: ""
	I0401 19:35:13.620838   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.620847   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:13.620859   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:13.620919   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:13.661377   71168 cri.go:89] found id: ""
	I0401 19:35:13.661408   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.661419   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:13.661427   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:13.661489   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:13.702413   71168 cri.go:89] found id: ""
	I0401 19:35:13.702436   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.702445   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:13.702453   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:13.702519   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:13.748760   71168 cri.go:89] found id: ""
	I0401 19:35:13.748788   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.748796   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:13.748803   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:13.748874   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:13.795438   71168 cri.go:89] found id: ""
	I0401 19:35:13.795460   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.795472   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:13.795479   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:13.795537   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:13.835572   71168 cri.go:89] found id: ""
	I0401 19:35:13.835601   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.835612   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:13.835619   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:13.835677   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:13.874301   71168 cri.go:89] found id: ""
	I0401 19:35:13.874327   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.874336   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:13.874342   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:13.874387   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:13.914847   71168 cri.go:89] found id: ""
	I0401 19:35:13.914876   71168 logs.go:276] 0 containers: []
	W0401 19:35:13.914883   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:13.914891   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:13.914904   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:13.929329   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:13.929355   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:14.004332   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:14.004358   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:14.004373   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:14.084901   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:14.084935   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:14.134471   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:14.134500   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:16.693432   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:16.710258   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:16.710332   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:16.757213   71168 cri.go:89] found id: ""
	I0401 19:35:16.757243   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.757254   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:16.757261   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:16.757320   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:16.797134   71168 cri.go:89] found id: ""
	I0401 19:35:16.797174   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.797182   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:16.797188   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:16.797233   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:16.839502   71168 cri.go:89] found id: ""
	I0401 19:35:16.839530   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.839541   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:16.839549   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:16.839609   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:16.881380   71168 cri.go:89] found id: ""
	I0401 19:35:16.881406   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.881413   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:16.881419   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:16.881472   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:16.922968   71168 cri.go:89] found id: ""
	I0401 19:35:16.922991   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.923002   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:16.923009   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:16.923069   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:16.961262   71168 cri.go:89] found id: ""
	I0401 19:35:16.961290   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.961301   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:16.961310   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:16.961369   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:16.996901   71168 cri.go:89] found id: ""
	I0401 19:35:16.996929   71168 logs.go:276] 0 containers: []
	W0401 19:35:16.996940   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:16.996947   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:16.997004   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:17.038447   71168 cri.go:89] found id: ""
	I0401 19:35:17.038473   71168 logs.go:276] 0 containers: []
	W0401 19:35:17.038481   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:17.038489   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:17.038500   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:17.079979   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:17.080013   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:17.136973   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:17.137010   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:17.153083   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:17.153108   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:17.232055   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:17.232078   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:17.232096   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:17.326246   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:19.326903   70687 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:20.818889   70687 pod_ready.go:81] duration metric: took 4m0.000381983s for pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace to be "Ready" ...
	E0401 19:35:20.818918   70687 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-g6z6c" in "kube-system" namespace to be "Ready" (will not retry!)
	I0401 19:35:20.818938   70687 pod_ready.go:38] duration metric: took 4m5.525170808s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:35:20.818967   70687 kubeadm.go:591] duration metric: took 4m13.404699267s to restartPrimaryControlPlane
	W0401 19:35:20.819026   70687 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:35:20.819059   70687 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
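At this line the restart path for this profile is abandoned: the metrics-server pod never reported Ready within the 4m0s budget, so minikube stops waiting and wipes the existing control plane before re-provisioning it. A minimal sketch of the equivalent manual step, using only the binary path and CRI socket already shown in the logged command above (both are specific to this run, not general defaults):

    # Reset the control plane the same way the log does, over SSH inside the node.
    # The PATH entry and the crio socket are copied verbatim from the log line;
    # adjust them for other Kubernetes versions or container runtimes.
    sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force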
	I0401 19:35:19.004932   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:21.504514   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:18.607067   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:20.609619   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:19.813327   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:19.830168   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:19.830229   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:19.875502   71168 cri.go:89] found id: ""
	I0401 19:35:19.875524   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.875532   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:19.875537   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:19.875591   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:19.916084   71168 cri.go:89] found id: ""
	I0401 19:35:19.916107   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.916117   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:19.916125   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:19.916188   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:19.960673   71168 cri.go:89] found id: ""
	I0401 19:35:19.960699   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.960710   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:19.960717   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:19.960796   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:19.998736   71168 cri.go:89] found id: ""
	I0401 19:35:19.998760   71168 logs.go:276] 0 containers: []
	W0401 19:35:19.998768   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:19.998776   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:19.998840   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:20.043382   71168 cri.go:89] found id: ""
	I0401 19:35:20.043408   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.043418   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:20.043425   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:20.043492   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:20.086132   71168 cri.go:89] found id: ""
	I0401 19:35:20.086158   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.086171   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:20.086178   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:20.086239   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:20.131052   71168 cri.go:89] found id: ""
	I0401 19:35:20.131074   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.131081   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:20.131091   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:20.131151   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:20.174668   71168 cri.go:89] found id: ""
	I0401 19:35:20.174693   71168 logs.go:276] 0 containers: []
	W0401 19:35:20.174699   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:20.174707   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:20.174718   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:20.266503   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:20.266521   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:20.266534   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:20.351555   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:20.351586   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:20.400261   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:20.400289   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:20.455149   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:20.455183   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
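The passage above is one full pass of the diagnostic loop that repeats throughout this log: minikube probes for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output before trying again. A sketch of the same checks run by hand, using only commands that already appear in the log lines above:

    # Probe for the apiserver container; an empty result corresponds to the
    # 'No container was found matching "kube-apiserver"' lines above.
    sudo crictl ps -a --quiet --name=kube-apiserver
    # The log sources collected on every pass of the loop.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a || sudo docker ps -a
    # Fails with "connection to the server localhost:8443 was refused" while no
    # apiserver is running, matching the "failed describe nodes" blocks above.
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig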
	I0401 19:35:23.510048   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:26.005267   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:23.109720   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:25.608633   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:22.972675   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:22.987481   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:22.987555   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:23.032429   71168 cri.go:89] found id: ""
	I0401 19:35:23.032453   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.032461   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:23.032467   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:23.032522   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:23.073286   71168 cri.go:89] found id: ""
	I0401 19:35:23.073313   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.073322   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:23.073330   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:23.073397   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:23.115424   71168 cri.go:89] found id: ""
	I0401 19:35:23.115447   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.115454   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:23.115459   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:23.115506   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:23.164883   71168 cri.go:89] found id: ""
	I0401 19:35:23.164908   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.164918   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:23.164925   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:23.164985   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:23.213617   71168 cri.go:89] found id: ""
	I0401 19:35:23.213656   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.213668   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:23.213675   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:23.213787   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:23.264846   71168 cri.go:89] found id: ""
	I0401 19:35:23.264874   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.264886   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:23.264893   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:23.264958   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:23.306467   71168 cri.go:89] found id: ""
	I0401 19:35:23.306495   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.306506   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:23.306514   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:23.306566   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:23.358574   71168 cri.go:89] found id: ""
	I0401 19:35:23.358597   71168 logs.go:276] 0 containers: []
	W0401 19:35:23.358608   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:23.358619   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:23.358634   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:23.437486   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:23.437510   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:23.437525   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:23.555307   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:23.555350   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:23.601776   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:23.601808   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:23.666654   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:23.666688   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:26.184503   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:26.199924   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:26.199997   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:26.252151   71168 cri.go:89] found id: ""
	I0401 19:35:26.252181   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.252192   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:26.252199   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:26.252266   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:26.299094   71168 cri.go:89] found id: ""
	I0401 19:35:26.299126   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.299134   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:26.299139   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:26.299194   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:26.340483   71168 cri.go:89] found id: ""
	I0401 19:35:26.340516   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.340533   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:26.340540   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:26.340599   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:26.387153   71168 cri.go:89] found id: ""
	I0401 19:35:26.387180   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.387188   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:26.387194   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:26.387261   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:26.430746   71168 cri.go:89] found id: ""
	I0401 19:35:26.430773   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.430781   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:26.430787   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:26.430854   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:26.478412   71168 cri.go:89] found id: ""
	I0401 19:35:26.478440   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.478451   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:26.478458   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:26.478523   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:26.521120   71168 cri.go:89] found id: ""
	I0401 19:35:26.521150   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.521161   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:26.521168   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:26.521229   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:26.564678   71168 cri.go:89] found id: ""
	I0401 19:35:26.564721   71168 logs.go:276] 0 containers: []
	W0401 19:35:26.564731   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:26.564742   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:26.564757   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:26.625271   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:26.625308   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:26.640505   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:26.640529   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:26.722753   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:26.722777   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:26.722795   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:26.830507   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:26.830551   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:28.505100   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:31.004387   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:28.107396   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:30.108080   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:29.386655   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:29.401232   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:29.401308   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:29.440479   71168 cri.go:89] found id: ""
	I0401 19:35:29.440511   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.440522   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:29.440530   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:29.440590   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:29.479022   71168 cri.go:89] found id: ""
	I0401 19:35:29.479049   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.479057   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:29.479062   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:29.479119   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:29.518179   71168 cri.go:89] found id: ""
	I0401 19:35:29.518208   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.518216   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:29.518222   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:29.518281   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:29.556654   71168 cri.go:89] found id: ""
	I0401 19:35:29.556682   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.556692   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:29.556712   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:29.556772   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:29.593258   71168 cri.go:89] found id: ""
	I0401 19:35:29.593287   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.593295   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:29.593301   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:29.593349   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:29.637215   71168 cri.go:89] found id: ""
	I0401 19:35:29.637243   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.637253   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:29.637261   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:29.637321   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:29.683052   71168 cri.go:89] found id: ""
	I0401 19:35:29.683090   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.683100   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:29.683108   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:29.683164   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:29.730948   71168 cri.go:89] found id: ""
	I0401 19:35:29.730979   71168 logs.go:276] 0 containers: []
	W0401 19:35:29.730991   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:29.731001   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:29.731014   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:29.781969   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:29.782001   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:29.800700   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:29.800729   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:29.877200   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:29.877225   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:29.877244   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:29.958110   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:29.958144   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:32.501060   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:32.519551   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:32.519619   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:32.579776   71168 cri.go:89] found id: ""
	I0401 19:35:32.579802   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.579813   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:32.579824   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:32.579886   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:32.643271   71168 cri.go:89] found id: ""
	I0401 19:35:32.643300   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.643312   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:32.643322   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:32.643387   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:32.688576   71168 cri.go:89] found id: ""
	I0401 19:35:32.688605   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.688614   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:32.688619   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:32.688678   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:32.729867   71168 cri.go:89] found id: ""
	I0401 19:35:32.729890   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.729898   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:32.729906   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:32.729962   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:32.771485   71168 cri.go:89] found id: ""
	I0401 19:35:32.771508   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.771515   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:32.771521   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:32.771574   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:32.809362   71168 cri.go:89] found id: ""
	I0401 19:35:32.809385   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.809393   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:32.809398   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:32.809458   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:32.844916   71168 cri.go:89] found id: ""
	I0401 19:35:32.844941   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.844950   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:32.844955   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:32.845000   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:32.884638   71168 cri.go:89] found id: ""
	I0401 19:35:32.884660   71168 logs.go:276] 0 containers: []
	W0401 19:35:32.884670   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:32.884680   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:32.884695   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:32.937462   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:32.937489   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:32.952842   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:32.952871   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 19:35:33.005516   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:35.504755   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:32.608051   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:35.106708   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:37.108135   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	W0401 19:35:33.035254   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:33.035278   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:33.035294   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:33.114963   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:33.114994   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:35.662190   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:35.675960   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:35.676016   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:35.717300   71168 cri.go:89] found id: ""
	I0401 19:35:35.717329   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.717340   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:35.717347   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:35.717409   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:35.756687   71168 cri.go:89] found id: ""
	I0401 19:35:35.756713   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.756723   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:35.756730   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:35.756788   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:35.796995   71168 cri.go:89] found id: ""
	I0401 19:35:35.797017   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.797025   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:35.797030   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:35.797083   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:35.840419   71168 cri.go:89] found id: ""
	I0401 19:35:35.840444   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.840455   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:35.840462   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:35.840523   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:35.880059   71168 cri.go:89] found id: ""
	I0401 19:35:35.880093   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.880107   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:35.880113   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:35.880171   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:35.929491   71168 cri.go:89] found id: ""
	I0401 19:35:35.929515   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.929523   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:35.929530   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:35.929584   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:35.968745   71168 cri.go:89] found id: ""
	I0401 19:35:35.968771   71168 logs.go:276] 0 containers: []
	W0401 19:35:35.968778   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:35.968784   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:35.968833   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:36.014294   71168 cri.go:89] found id: ""
	I0401 19:35:36.014318   71168 logs.go:276] 0 containers: []
	W0401 19:35:36.014328   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:36.014338   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:36.014359   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:36.068418   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:36.068450   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:36.086343   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:36.086367   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:36.172027   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:36.172053   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:36.172067   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:36.250046   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:36.250080   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:38.004007   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:40.004138   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:39.607714   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:42.107775   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:38.794261   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:38.809535   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:38.809597   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:38.849139   71168 cri.go:89] found id: ""
	I0401 19:35:38.849167   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.849176   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:38.849181   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:38.849238   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:38.886787   71168 cri.go:89] found id: ""
	I0401 19:35:38.886811   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.886821   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:38.886828   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:38.886891   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:38.923388   71168 cri.go:89] found id: ""
	I0401 19:35:38.923419   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.923431   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:38.923438   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:38.923497   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:38.959583   71168 cri.go:89] found id: ""
	I0401 19:35:38.959608   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.959619   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:38.959626   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:38.959682   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:38.998201   71168 cri.go:89] found id: ""
	I0401 19:35:38.998226   71168 logs.go:276] 0 containers: []
	W0401 19:35:38.998233   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:38.998238   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:38.998294   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:39.039669   71168 cri.go:89] found id: ""
	I0401 19:35:39.039692   71168 logs.go:276] 0 containers: []
	W0401 19:35:39.039703   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:39.039710   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:39.039767   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:39.077331   71168 cri.go:89] found id: ""
	I0401 19:35:39.077358   71168 logs.go:276] 0 containers: []
	W0401 19:35:39.077366   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:39.077371   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:39.077423   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:39.125999   71168 cri.go:89] found id: ""
	I0401 19:35:39.126021   71168 logs.go:276] 0 containers: []
	W0401 19:35:39.126031   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:39.126041   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:39.126054   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:39.183579   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:39.183612   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:39.201200   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:39.201227   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:39.282262   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:39.282280   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:39.282291   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:39.365340   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:39.365370   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:41.914909   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:41.929243   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:41.929317   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:41.975594   71168 cri.go:89] found id: ""
	I0401 19:35:41.975622   71168 logs.go:276] 0 containers: []
	W0401 19:35:41.975632   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:41.975639   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:41.975701   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:42.023558   71168 cri.go:89] found id: ""
	I0401 19:35:42.023585   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.023596   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:42.023602   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:42.023662   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:42.074242   71168 cri.go:89] found id: ""
	I0401 19:35:42.074266   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.074276   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:42.074283   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:42.074340   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:42.123327   71168 cri.go:89] found id: ""
	I0401 19:35:42.123358   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.123370   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:42.123378   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:42.123452   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:42.168931   71168 cri.go:89] found id: ""
	I0401 19:35:42.168961   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.168972   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:42.168980   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:42.169037   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:42.211747   71168 cri.go:89] found id: ""
	I0401 19:35:42.211774   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.211784   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:42.211793   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:42.211849   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:42.251809   71168 cri.go:89] found id: ""
	I0401 19:35:42.251830   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.251841   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:42.251849   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:42.251908   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:42.293266   71168 cri.go:89] found id: ""
	I0401 19:35:42.293361   71168 logs.go:276] 0 containers: []
	W0401 19:35:42.293377   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:42.293388   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:42.293405   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:42.364502   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:42.364553   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:42.381147   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:42.381180   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:42.464219   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:42.464238   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:42.464249   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:42.544564   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:42.544594   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:42.006061   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:44.504700   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:46.505615   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:44.606915   70962 pod_ready.go:102] pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:46.100004   70962 pod_ready.go:81] duration metric: took 4m0.000146584s for pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace to be "Ready" ...
	E0401 19:35:46.100029   70962 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-g7mg2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0401 19:35:46.100044   70962 pod_ready.go:38] duration metric: took 4m10.491414096s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:35:46.100088   70962 kubeadm.go:591] duration metric: took 4m18.223285856s to restartPrimaryControlPlane
	W0401 19:35:46.100141   70962 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:35:46.100164   70962 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:35:45.105777   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:45.119911   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:45.119976   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:45.161871   71168 cri.go:89] found id: ""
	I0401 19:35:45.161890   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.161897   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:45.161902   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:45.161949   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:45.198677   71168 cri.go:89] found id: ""
	I0401 19:35:45.198702   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.198710   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:45.198715   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:45.198776   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:45.236938   71168 cri.go:89] found id: ""
	I0401 19:35:45.236972   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.236983   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:45.236990   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:45.237052   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:45.280621   71168 cri.go:89] found id: ""
	I0401 19:35:45.280650   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.280661   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:45.280668   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:45.280727   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:45.326794   71168 cri.go:89] found id: ""
	I0401 19:35:45.326818   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.326827   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:45.326834   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:45.326892   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:45.369405   71168 cri.go:89] found id: ""
	I0401 19:35:45.369431   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.369441   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:45.369446   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:45.369501   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:45.407609   71168 cri.go:89] found id: ""
	I0401 19:35:45.407635   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.407643   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:45.407648   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:45.407720   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:45.444848   71168 cri.go:89] found id: ""
	I0401 19:35:45.444871   71168 logs.go:276] 0 containers: []
	W0401 19:35:45.444881   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:45.444891   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:45.444911   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:45.531938   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:45.531957   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:45.531972   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:45.617109   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:45.617141   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:45.663559   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:45.663591   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:45.717622   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:45.717670   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:49.004037   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:51.004650   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:48.234834   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:48.250543   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:48.250606   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:48.294396   71168 cri.go:89] found id: ""
	I0401 19:35:48.294423   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.294432   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:48.294439   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:48.294504   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:48.336866   71168 cri.go:89] found id: ""
	I0401 19:35:48.336892   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.336902   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:48.336908   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:48.336965   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:48.376031   71168 cri.go:89] found id: ""
	I0401 19:35:48.376065   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.376076   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:48.376084   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:48.376142   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:48.414975   71168 cri.go:89] found id: ""
	I0401 19:35:48.414995   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.415003   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:48.415008   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:48.415058   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:48.453484   71168 cri.go:89] found id: ""
	I0401 19:35:48.453513   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.453524   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:48.453532   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:48.453593   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:48.487712   71168 cri.go:89] found id: ""
	I0401 19:35:48.487739   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.487749   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:48.487757   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:48.487815   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:48.533331   71168 cri.go:89] found id: ""
	I0401 19:35:48.533364   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.533375   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:48.533383   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:48.533442   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:48.574103   71168 cri.go:89] found id: ""
	I0401 19:35:48.574131   71168 logs.go:276] 0 containers: []
	W0401 19:35:48.574139   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:48.574147   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:48.574160   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:48.632068   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:48.632098   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:48.649342   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:48.649369   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:48.721799   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:48.721822   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:48.721836   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:48.821549   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:48.821584   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:51.364852   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:51.380281   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:35:51.380362   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:35:51.423383   71168 cri.go:89] found id: ""
	I0401 19:35:51.423412   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.423422   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:35:51.423430   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:35:51.423490   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:35:51.470331   71168 cri.go:89] found id: ""
	I0401 19:35:51.470359   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.470370   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:35:51.470378   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:35:51.470441   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:35:51.520310   71168 cri.go:89] found id: ""
	I0401 19:35:51.520339   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.520350   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:35:51.520358   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:35:51.520414   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:35:51.568681   71168 cri.go:89] found id: ""
	I0401 19:35:51.568706   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.568716   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:35:51.568724   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:35:51.568843   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:35:51.615146   71168 cri.go:89] found id: ""
	I0401 19:35:51.615174   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.615185   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:35:51.615193   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:35:51.615256   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:35:51.658678   71168 cri.go:89] found id: ""
	I0401 19:35:51.658703   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.658712   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:35:51.658720   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:35:51.658791   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:35:51.700071   71168 cri.go:89] found id: ""
	I0401 19:35:51.700097   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.700108   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:35:51.700114   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:35:51.700177   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:35:51.746772   71168 cri.go:89] found id: ""
	I0401 19:35:51.746798   71168 logs.go:276] 0 containers: []
	W0401 19:35:51.746809   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:35:51.746826   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:35:51.746849   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:35:51.762321   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:35:51.762350   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:35:51.843300   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:35:51.843322   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:35:51.843337   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:35:51.919059   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:35:51.919090   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:35:51.965899   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:35:51.965925   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:35:53.564613   70687 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.745530657s)
	I0401 19:35:53.564696   70687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:35:53.582161   70687 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:35:53.593313   70687 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:35:53.604441   70687 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:35:53.604460   70687 kubeadm.go:156] found existing configuration files:
	
	I0401 19:35:53.604502   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:35:53.615367   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:35:53.615426   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:35:53.626375   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:35:53.636924   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:35:53.636975   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:35:53.647493   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:35:53.657319   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:35:53.657373   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:35:53.667422   70687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:35:53.677235   70687 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:35:53.677308   70687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:35:53.688043   70687 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:35:53.894204   70687 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:35:53.504486   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:55.505966   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:35:54.523484   71168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:35:54.542004   71168 kubeadm.go:591] duration metric: took 4m4.024054342s to restartPrimaryControlPlane
	W0401 19:35:54.542067   71168 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:35:54.542088   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:35:55.179619   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:35:55.196424   71168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:35:55.209517   71168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:35:55.222643   71168 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:35:55.222664   71168 kubeadm.go:156] found existing configuration files:
	
	I0401 19:35:55.222714   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:35:55.234756   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:35:55.234813   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:35:55.246725   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:35:55.258440   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:35:55.258499   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:35:55.270106   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:35:55.280724   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:35:55.280776   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:35:55.293630   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:35:55.305588   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:35:55.305660   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:35:55.318308   71168 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:35:55.574896   71168 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:35:58.004494   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:00.505168   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:02.622337   70687 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 19:36:02.622433   70687 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:36:02.622548   70687 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:36:02.622659   70687 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:36:02.622794   70687 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:36:02.622883   70687 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:36:02.624550   70687 out.go:204]   - Generating certificates and keys ...
	I0401 19:36:02.624640   70687 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:36:02.624734   70687 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:36:02.624861   70687 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:36:02.624952   70687 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:36:02.625042   70687 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:36:02.625114   70687 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:36:02.625206   70687 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:36:02.625271   70687 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:36:02.625337   70687 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:36:02.625398   70687 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:36:02.625430   70687 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:36:02.625475   70687 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:36:02.625519   70687 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:36:02.625567   70687 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 19:36:02.625630   70687 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:36:02.625744   70687 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:36:02.625825   70687 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:36:02.625938   70687 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:36:02.626041   70687 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:36:02.627616   70687 out.go:204]   - Booting up control plane ...
	I0401 19:36:02.627744   70687 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:36:02.627812   70687 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:36:02.627878   70687 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:36:02.627976   70687 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:36:02.628046   70687 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:36:02.628098   70687 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:36:02.628273   70687 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:36:02.628354   70687 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.502318 seconds
	I0401 19:36:02.628467   70687 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 19:36:02.628587   70687 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 19:36:02.628642   70687 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 19:36:02.628800   70687 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-882095 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 19:36:02.628849   70687 kubeadm.go:309] [bootstrap-token] Using token: 821cxx.fac41nwqi8u5mwgu
	I0401 19:36:02.630202   70687 out.go:204]   - Configuring RBAC rules ...
	I0401 19:36:02.630328   70687 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 19:36:02.630413   70687 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 19:36:02.630593   70687 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 19:36:02.630794   70687 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 19:36:02.630941   70687 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 19:36:02.631049   70687 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 19:36:02.631205   70687 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 19:36:02.631255   70687 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 19:36:02.631318   70687 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 19:36:02.631326   70687 kubeadm.go:309] 
	I0401 19:36:02.631412   70687 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 19:36:02.631421   70687 kubeadm.go:309] 
	I0401 19:36:02.631527   70687 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 19:36:02.631534   70687 kubeadm.go:309] 
	I0401 19:36:02.631560   70687 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 19:36:02.631649   70687 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 19:36:02.631721   70687 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 19:36:02.631731   70687 kubeadm.go:309] 
	I0401 19:36:02.631810   70687 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 19:36:02.631822   70687 kubeadm.go:309] 
	I0401 19:36:02.631896   70687 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 19:36:02.631910   70687 kubeadm.go:309] 
	I0401 19:36:02.631986   70687 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 19:36:02.632088   70687 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 19:36:02.632181   70687 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 19:36:02.632190   70687 kubeadm.go:309] 
	I0401 19:36:02.632319   70687 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 19:36:02.632427   70687 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 19:36:02.632437   70687 kubeadm.go:309] 
	I0401 19:36:02.632532   70687 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 821cxx.fac41nwqi8u5mwgu \
	I0401 19:36:02.632695   70687 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 \
	I0401 19:36:02.632726   70687 kubeadm.go:309] 	--control-plane 
	I0401 19:36:02.632736   70687 kubeadm.go:309] 
	I0401 19:36:02.632860   70687 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 19:36:02.632875   70687 kubeadm.go:309] 
	I0401 19:36:02.632983   70687 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 821cxx.fac41nwqi8u5mwgu \
	I0401 19:36:02.633118   70687 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 
	I0401 19:36:02.633132   70687 cni.go:84] Creating CNI manager for ""
	I0401 19:36:02.633138   70687 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:36:02.634595   70687 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:36:02.635812   70687 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:36:02.671750   70687 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0401 19:36:02.705562   70687 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 19:36:02.705657   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:02.705671   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-882095 minikube.k8s.io/updated_at=2024_04_01T19_36_02_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=embed-certs-882095 minikube.k8s.io/primary=true
	I0401 19:36:02.762626   70687 ops.go:34] apiserver oom_adj: -16
	I0401 19:36:03.065957   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:03.566513   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:04.066178   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:04.566321   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:05.066798   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:05.566877   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:06.066520   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:03.004878   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:05.505057   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:06.566982   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:07.066931   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:07.566107   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:08.066843   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:08.566186   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:09.066550   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:09.566205   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:10.066287   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:10.566902   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:11.066656   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:08.005380   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:10.504026   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:11.566894   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:12.066235   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:12.566599   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:13.066132   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:13.566865   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:14.066759   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:14.566435   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:15.066907   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:15.566851   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:16.066880   70687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:16.158125   70687 kubeadm.go:1107] duration metric: took 13.452541301s to wait for elevateKubeSystemPrivileges
	W0401 19:36:16.158168   70687 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 19:36:16.158176   70687 kubeadm.go:393] duration metric: took 5m8.800288084s to StartCluster
	I0401 19:36:16.158195   70687 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:36:16.158268   70687 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:36:16.159976   70687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:36:16.160254   70687 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.190 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:36:16.162239   70687 out.go:177] * Verifying Kubernetes components...
	I0401 19:36:16.160346   70687 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 19:36:16.162276   70687 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-882095"
	I0401 19:36:16.162311   70687 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-882095"
	W0401 19:36:16.162320   70687 addons.go:243] addon storage-provisioner should already be in state true
	I0401 19:36:16.162339   70687 addons.go:69] Setting default-storageclass=true in profile "embed-certs-882095"
	I0401 19:36:16.162348   70687 addons.go:69] Setting metrics-server=true in profile "embed-certs-882095"
	I0401 19:36:16.162363   70687 addons.go:234] Setting addon metrics-server=true in "embed-certs-882095"
	W0401 19:36:16.162371   70687 addons.go:243] addon metrics-server should already be in state true
	I0401 19:36:16.162377   70687 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-882095"
	I0401 19:36:16.162384   70687 host.go:66] Checking if "embed-certs-882095" exists ...
	I0401 19:36:16.162345   70687 host.go:66] Checking if "embed-certs-882095" exists ...
	I0401 19:36:16.163767   70687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:36:16.160484   70687 config.go:182] Loaded profile config "embed-certs-882095": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:36:16.162673   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.162687   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.163886   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.163900   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.162704   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.163963   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.180743   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41647
	I0401 19:36:16.180759   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46707
	I0401 19:36:16.180746   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44419
	I0401 19:36:16.181334   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.181342   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.181369   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.181830   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.181848   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.181973   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.181991   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.182001   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.182007   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.182187   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.182360   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.182393   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.182592   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:36:16.182726   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.182753   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.182829   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.182871   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.186198   70687 addons.go:234] Setting addon default-storageclass=true in "embed-certs-882095"
	W0401 19:36:16.186226   70687 addons.go:243] addon default-storageclass should already be in state true
	I0401 19:36:16.186258   70687 host.go:66] Checking if "embed-certs-882095" exists ...
	I0401 19:36:16.186603   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.186636   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.198494   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I0401 19:36:16.198862   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.199298   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.199315   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.199777   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.200056   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:36:16.201955   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39769
	I0401 19:36:16.202167   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:36:16.202416   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.204728   70687 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:36:16.202891   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.205309   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35751
	I0401 19:36:16.207964   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.208022   70687 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:36:16.208038   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 19:36:16.208057   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:36:16.208345   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.208482   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.208550   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:36:16.209106   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.209121   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.209764   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.210220   70687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:16.210258   70687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:16.211015   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:36:16.213549   70687 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 19:36:16.212105   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.215606   70687 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 19:36:16.213577   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:36:16.215625   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 19:36:16.215632   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.212867   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:36:16.215647   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:36:16.215791   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:36:16.215913   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:36:16.216028   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:36:16.218302   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.218924   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:36:16.218948   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.219174   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:36:16.219340   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:36:16.219496   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:36:16.219818   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:36:16.227813   70687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0401 19:36:16.228198   70687 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:16.228612   70687 main.go:141] libmachine: Using API Version  1
	I0401 19:36:16.228635   70687 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:16.228989   70687 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:16.229159   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetState
	I0401 19:36:16.230712   70687 main.go:141] libmachine: (embed-certs-882095) Calling .DriverName
	I0401 19:36:16.230969   70687 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 19:36:16.230987   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 19:36:16.231003   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHHostname
	I0401 19:36:16.233712   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.234102   70687 main.go:141] libmachine: (embed-certs-882095) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:f1:a7", ip: ""} in network mk-embed-certs-882095: {Iface:virbr1 ExpiryTime:2024-04-01 20:30:51 +0000 UTC Type:0 Mac:52:54:00:8c:f1:a7 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:embed-certs-882095 Clientid:01:52:54:00:8c:f1:a7}
	I0401 19:36:16.234126   70687 main.go:141] libmachine: (embed-certs-882095) DBG | domain embed-certs-882095 has defined IP address 192.168.39.190 and MAC address 52:54:00:8c:f1:a7 in network mk-embed-certs-882095
	I0401 19:36:16.234273   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHPort
	I0401 19:36:16.234435   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHKeyPath
	I0401 19:36:16.234593   70687 main.go:141] libmachine: (embed-certs-882095) Calling .GetSSHUsername
	I0401 19:36:16.234753   70687 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/embed-certs-882095/id_rsa Username:docker}
	I0401 19:36:16.332504   70687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:36:16.354423   70687 node_ready.go:35] waiting up to 6m0s for node "embed-certs-882095" to be "Ready" ...
	I0401 19:36:16.363527   70687 node_ready.go:49] node "embed-certs-882095" has status "Ready":"True"
	I0401 19:36:16.363555   70687 node_ready.go:38] duration metric: took 9.10669ms for node "embed-certs-882095" to be "Ready" ...
	I0401 19:36:16.363567   70687 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:36:16.369606   70687 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-fx6hf" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:16.435769   70687 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 19:36:16.435793   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 19:36:16.450934   70687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:36:16.468137   70687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 19:36:16.474209   70687 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 19:36:16.474233   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 19:36:13.003028   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:15.004924   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:16.530201   70687 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:36:16.530222   70687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 19:36:16.607557   70687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:36:17.044156   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.044183   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.044165   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.044244   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.044569   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.044606   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.044617   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.044624   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.044630   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.044639   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.044656   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.044657   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Closing plugin on server side
	I0401 19:36:17.044670   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.044616   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Closing plugin on server side
	I0401 19:36:17.044947   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.044963   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.044964   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.044973   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.045019   70687 main.go:141] libmachine: (embed-certs-882095) DBG | Closing plugin on server side
	I0401 19:36:17.058441   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.058469   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.058718   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.058735   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.276263   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.276283   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.276548   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.276562   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.276571   70687 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:17.276584   70687 main.go:141] libmachine: (embed-certs-882095) Calling .Close
	I0401 19:36:17.276823   70687 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:17.276837   70687 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:17.276852   70687 addons.go:470] Verifying addon metrics-server=true in "embed-certs-882095"
	I0401 19:36:17.278536   70687 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0401 19:36:17.279740   70687 addons.go:505] duration metric: took 1.119396s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0401 19:36:18.412746   70687 pod_ready.go:102] pod "coredns-76f75df574-fx6hf" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:19.378799   70687 pod_ready.go:92] pod "coredns-76f75df574-fx6hf" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.378819   70687 pod_ready.go:81] duration metric: took 3.009189982s for pod "coredns-76f75df574-fx6hf" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.378828   70687 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-hwbw6" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.384482   70687 pod_ready.go:92] pod "coredns-76f75df574-hwbw6" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.384498   70687 pod_ready.go:81] duration metric: took 5.664781ms for pod "coredns-76f75df574-hwbw6" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.384507   70687 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.390258   70687 pod_ready.go:92] pod "etcd-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.390274   70687 pod_ready.go:81] duration metric: took 5.761319ms for pod "etcd-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.390281   70687 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.395592   70687 pod_ready.go:92] pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.395611   70687 pod_ready.go:81] duration metric: took 5.323181ms for pod "kube-apiserver-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.395622   70687 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.400979   70687 pod_ready.go:92] pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.400994   70687 pod_ready.go:81] duration metric: took 5.365282ms for pod "kube-controller-manager-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.401002   70687 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mbs4m" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.775009   70687 pod_ready.go:92] pod "kube-proxy-mbs4m" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:19.775036   70687 pod_ready.go:81] duration metric: took 374.027521ms for pod "kube-proxy-mbs4m" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:19.775047   70687 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:20.174962   70687 pod_ready.go:92] pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:20.174986   70687 pod_ready.go:81] duration metric: took 399.930828ms for pod "kube-scheduler-embed-certs-882095" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:20.174994   70687 pod_ready.go:38] duration metric: took 3.811414774s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:36:20.175006   70687 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:36:20.175064   70687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:36:20.191452   70687 api_server.go:72] duration metric: took 4.031156406s to wait for apiserver process to appear ...
	I0401 19:36:20.191477   70687 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:36:20.191498   70687 api_server.go:253] Checking apiserver healthz at https://192.168.39.190:8443/healthz ...
	I0401 19:36:20.196706   70687 api_server.go:279] https://192.168.39.190:8443/healthz returned 200:
	ok
	I0401 19:36:20.197772   70687 api_server.go:141] control plane version: v1.29.3
	I0401 19:36:20.197791   70687 api_server.go:131] duration metric: took 6.308074ms to wait for apiserver health ...
	I0401 19:36:20.197799   70687 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:36:20.380616   70687 system_pods.go:59] 9 kube-system pods found
	I0401 19:36:20.380645   70687 system_pods.go:61] "coredns-76f75df574-fx6hf" [1c07b740-3374-4a54-a786-784b23ec6b83] Running
	I0401 19:36:20.380651   70687 system_pods.go:61] "coredns-76f75df574-hwbw6" [7b12145a-2689-47e9-9724-d80790ed079c] Running
	I0401 19:36:20.380657   70687 system_pods.go:61] "etcd-embed-certs-882095" [3848d128-2fde-42f5-9543-b8d0343ba15b] Running
	I0401 19:36:20.380663   70687 system_pods.go:61] "kube-apiserver-embed-certs-882095" [116c5cd1-2d04-4a85-96e9-bd1e6af4cba4] Running
	I0401 19:36:20.380668   70687 system_pods.go:61] "kube-controller-manager-embed-certs-882095" [8a2282cf-2a87-4cee-a482-355e92048642] Running
	I0401 19:36:20.380672   70687 system_pods.go:61] "kube-proxy-mbs4m" [ffccbae0-7538-4a75-a6ce-afce49865f07] Running
	I0401 19:36:20.380676   70687 system_pods.go:61] "kube-scheduler-embed-certs-882095" [d2554007-1c9c-4238-809a-72aae1fb7de3] Running
	I0401 19:36:20.380684   70687 system_pods.go:61] "metrics-server-57f55c9bc5-dktr6" [c6adfcab-c746-4ad8-abe2-8b300389a4f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:20.380689   70687 system_pods.go:61] "storage-provisioner" [bcff0d1d-a555-4b25-9aa5-7ab1188c21fd] Running
	I0401 19:36:20.380700   70687 system_pods.go:74] duration metric: took 182.895079ms to wait for pod list to return data ...
	I0401 19:36:20.380711   70687 default_sa.go:34] waiting for default service account to be created ...
	I0401 19:36:20.574739   70687 default_sa.go:45] found service account: "default"
	I0401 19:36:20.574771   70687 default_sa.go:55] duration metric: took 194.049249ms for default service account to be created ...
	I0401 19:36:20.574785   70687 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 19:36:20.781600   70687 system_pods.go:86] 9 kube-system pods found
	I0401 19:36:20.781630   70687 system_pods.go:89] "coredns-76f75df574-fx6hf" [1c07b740-3374-4a54-a786-784b23ec6b83] Running
	I0401 19:36:20.781638   70687 system_pods.go:89] "coredns-76f75df574-hwbw6" [7b12145a-2689-47e9-9724-d80790ed079c] Running
	I0401 19:36:20.781658   70687 system_pods.go:89] "etcd-embed-certs-882095" [3848d128-2fde-42f5-9543-b8d0343ba15b] Running
	I0401 19:36:20.781664   70687 system_pods.go:89] "kube-apiserver-embed-certs-882095" [116c5cd1-2d04-4a85-96e9-bd1e6af4cba4] Running
	I0401 19:36:20.781672   70687 system_pods.go:89] "kube-controller-manager-embed-certs-882095" [8a2282cf-2a87-4cee-a482-355e92048642] Running
	I0401 19:36:20.781678   70687 system_pods.go:89] "kube-proxy-mbs4m" [ffccbae0-7538-4a75-a6ce-afce49865f07] Running
	I0401 19:36:20.781686   70687 system_pods.go:89] "kube-scheduler-embed-certs-882095" [d2554007-1c9c-4238-809a-72aae1fb7de3] Running
	I0401 19:36:20.781695   70687 system_pods.go:89] "metrics-server-57f55c9bc5-dktr6" [c6adfcab-c746-4ad8-abe2-8b300389a4f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:20.781705   70687 system_pods.go:89] "storage-provisioner" [bcff0d1d-a555-4b25-9aa5-7ab1188c21fd] Running
	I0401 19:36:20.781722   70687 system_pods.go:126] duration metric: took 206.928658ms to wait for k8s-apps to be running ...
	I0401 19:36:20.781738   70687 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 19:36:20.781789   70687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:36:20.798910   70687 system_svc.go:56] duration metric: took 17.163227ms WaitForService to wait for kubelet
	I0401 19:36:20.798940   70687 kubeadm.go:576] duration metric: took 4.638649198s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:36:20.798962   70687 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:36:20.975011   70687 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:36:20.975034   70687 node_conditions.go:123] node cpu capacity is 2
	I0401 19:36:20.975045   70687 node_conditions.go:105] duration metric: took 176.077669ms to run NodePressure ...
	I0401 19:36:20.975055   70687 start.go:240] waiting for startup goroutines ...
	I0401 19:36:20.975061   70687 start.go:245] waiting for cluster config update ...
	I0401 19:36:20.975070   70687 start.go:254] writing updated cluster config ...
	I0401 19:36:20.975313   70687 ssh_runner.go:195] Run: rm -f paused
	I0401 19:36:21.024261   70687 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0401 19:36:21.026583   70687 out.go:177] * Done! kubectl is now configured to use "embed-certs-882095" cluster and "default" namespace by default
	I0401 19:36:17.504621   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:20.003964   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:18.623277   70962 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.523094705s)
	I0401 19:36:18.623344   70962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:36:18.640939   70962 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:36:18.653983   70962 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:36:18.666162   70962 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:36:18.666182   70962 kubeadm.go:156] found existing configuration files:
	
	I0401 19:36:18.666233   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0401 19:36:18.679043   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:36:18.679092   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:36:18.690185   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0401 19:36:18.703017   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:36:18.703078   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:36:18.714986   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0401 19:36:18.727138   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:36:18.727188   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:36:18.737886   70962 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0401 19:36:18.748013   70962 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:36:18.748064   70962 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:36:18.758552   70962 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:36:18.988309   70962 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:36:22.004400   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:24.004510   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:26.504264   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:28.053408   70962 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0401 19:36:28.053478   70962 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:36:28.053544   70962 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:36:28.053677   70962 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:36:28.053837   70962 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:36:28.053953   70962 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:36:28.055426   70962 out.go:204]   - Generating certificates and keys ...
	I0401 19:36:28.055513   70962 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:36:28.055614   70962 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:36:28.055742   70962 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:36:28.055834   70962 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:36:28.055942   70962 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:36:28.056022   70962 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:36:28.056104   70962 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:36:28.056167   70962 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:36:28.056250   70962 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:36:28.056331   70962 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:36:28.056371   70962 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:36:28.056449   70962 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:36:28.056531   70962 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:36:28.056600   70962 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 19:36:28.056677   70962 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:36:28.056772   70962 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:36:28.056870   70962 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:36:28.057006   70962 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:36:28.057100   70962 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:36:28.058575   70962 out.go:204]   - Booting up control plane ...
	I0401 19:36:28.058693   70962 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:36:28.058773   70962 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:36:28.058830   70962 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:36:28.058923   70962 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:36:28.058998   70962 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:36:28.059032   70962 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:36:28.059201   70962 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:36:28.059307   70962 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003148 seconds
	I0401 19:36:28.059432   70962 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 19:36:28.059592   70962 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 19:36:28.059665   70962 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 19:36:28.059892   70962 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-734648 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 19:36:28.059966   70962 kubeadm.go:309] [bootstrap-token] Using token: x76swh.zbuhmc8jrh5hodf9
	I0401 19:36:28.061321   70962 out.go:204]   - Configuring RBAC rules ...
	I0401 19:36:28.061450   70962 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 19:36:28.061577   70962 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 19:36:28.061803   70962 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 19:36:28.061993   70962 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 19:36:28.062153   70962 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 19:36:28.062252   70962 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 19:36:28.062363   70962 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 19:36:28.062422   70962 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 19:36:28.062481   70962 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 19:36:28.062493   70962 kubeadm.go:309] 
	I0401 19:36:28.062556   70962 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 19:36:28.062569   70962 kubeadm.go:309] 
	I0401 19:36:28.062686   70962 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 19:36:28.062697   70962 kubeadm.go:309] 
	I0401 19:36:28.062727   70962 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 19:36:28.062805   70962 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 19:36:28.062872   70962 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 19:36:28.062886   70962 kubeadm.go:309] 
	I0401 19:36:28.062959   70962 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 19:36:28.062969   70962 kubeadm.go:309] 
	I0401 19:36:28.063050   70962 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 19:36:28.063061   70962 kubeadm.go:309] 
	I0401 19:36:28.063103   70962 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 19:36:28.063172   70962 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 19:36:28.063234   70962 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 19:36:28.063240   70962 kubeadm.go:309] 
	I0401 19:36:28.063337   70962 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 19:36:28.063440   70962 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 19:36:28.063453   70962 kubeadm.go:309] 
	I0401 19:36:28.063559   70962 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token x76swh.zbuhmc8jrh5hodf9 \
	I0401 19:36:28.063676   70962 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 \
	I0401 19:36:28.063725   70962 kubeadm.go:309] 	--control-plane 
	I0401 19:36:28.063734   70962 kubeadm.go:309] 
	I0401 19:36:28.063835   70962 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 19:36:28.063844   70962 kubeadm.go:309] 
	I0401 19:36:28.063955   70962 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token x76swh.zbuhmc8jrh5hodf9 \
	I0401 19:36:28.064092   70962 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 
	I0401 19:36:28.064105   70962 cni.go:84] Creating CNI manager for ""
	I0401 19:36:28.064114   70962 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:36:28.065560   70962 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:36:28.505029   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:31.005436   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:28.066823   70962 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:36:28.089595   70962 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0401 19:36:28.150074   70962 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 19:36:28.150195   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:28.150206   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-734648 minikube.k8s.io/updated_at=2024_04_01T19_36_28_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=default-k8s-diff-port-734648 minikube.k8s.io/primary=true
	I0401 19:36:28.494391   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:28.529148   70962 ops.go:34] apiserver oom_adj: -16
	I0401 19:36:28.994780   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:29.494976   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:29.994627   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:30.495192   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:30.995334   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:31.494861   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:31.994576   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:33.505264   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:35.506298   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:32.495185   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:32.995090   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:33.494755   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:33.994758   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:34.494609   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:34.995423   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:35.495219   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:35.994557   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:36.495175   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:36.994857   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:37.494725   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:37.994846   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:38.494687   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:38.994615   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:39.494929   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:39.994514   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:40.494838   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:40.994846   70962 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:36:41.105036   70962 kubeadm.go:1107] duration metric: took 12.954907711s to wait for elevateKubeSystemPrivileges
	W0401 19:36:41.105072   70962 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 19:36:41.105080   70962 kubeadm.go:393] duration metric: took 5m13.291890816s to StartCluster
	I0401 19:36:41.105098   70962 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:36:41.105193   70962 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:36:41.107226   70962 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:36:41.107451   70962 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.145 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:36:41.109245   70962 out.go:177] * Verifying Kubernetes components...
	I0401 19:36:41.107543   70962 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 19:36:41.107682   70962 config.go:182] Loaded profile config "default-k8s-diff-port-734648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:36:41.110583   70962 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:36:41.110596   70962 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-734648"
	I0401 19:36:41.110621   70962 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-734648"
	I0401 19:36:41.110620   70962 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-734648"
	I0401 19:36:41.110652   70962 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-734648"
	I0401 19:36:41.110588   70962 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-734648"
	W0401 19:36:41.110665   70962 addons.go:243] addon metrics-server should already be in state true
	I0401 19:36:41.110685   70962 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-734648"
	W0401 19:36:41.110699   70962 addons.go:243] addon storage-provisioner should already be in state true
	I0401 19:36:41.110700   70962 host.go:66] Checking if "default-k8s-diff-port-734648" exists ...
	I0401 19:36:41.110727   70962 host.go:66] Checking if "default-k8s-diff-port-734648" exists ...
	I0401 19:36:41.111032   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.111039   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.111062   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.111098   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.111126   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.111158   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.129376   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46657
	I0401 19:36:41.130833   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38623
	I0401 19:36:41.131158   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.131258   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.131761   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.131786   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.132119   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.132313   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.132437   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.132477   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:36:41.133129   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36213
	I0401 19:36:41.133449   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.133456   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.133871   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.133894   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.133990   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.134021   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.134159   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.134572   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.134609   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.143808   70962 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-734648"
	W0401 19:36:41.143829   70962 addons.go:243] addon default-storageclass should already be in state true
	I0401 19:36:41.143858   70962 host.go:66] Checking if "default-k8s-diff-port-734648" exists ...
	I0401 19:36:41.144202   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.144241   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.154009   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38703
	I0401 19:36:41.156112   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45449
	I0401 19:36:41.156579   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.157085   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.157112   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.157458   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.157631   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:36:41.157891   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.158593   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.158615   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.158924   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.159123   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:36:41.160683   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:36:41.162801   70962 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 19:36:41.164275   70962 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 19:36:41.164292   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 19:36:41.164310   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:36:41.162762   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:36:41.163321   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39643
	I0401 19:36:41.166161   70962 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:36:38.004666   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:40.005118   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:41.164866   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.167473   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.167806   70962 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:36:41.167833   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 19:36:41.167850   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:36:41.168056   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.168074   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.168145   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:36:41.168163   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.168194   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:36:41.168353   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:36:41.168429   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.168583   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:36:41.168723   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:36:41.169323   70962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:36:41.169374   70962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:36:41.170857   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.171269   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:36:41.171323   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.171412   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:36:41.171576   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:36:41.171723   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:36:41.171860   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:36:41.191280   70962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42133
	I0401 19:36:41.191576   70962 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:36:41.192122   70962 main.go:141] libmachine: Using API Version  1
	I0401 19:36:41.192152   70962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:36:41.192511   70962 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:36:41.192673   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetState
	I0401 19:36:41.194286   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .DriverName
	I0401 19:36:41.194528   70962 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 19:36:41.194546   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 19:36:41.194564   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHHostname
	I0401 19:36:41.197639   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.198235   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:dc:50", ip: ""} in network mk-default-k8s-diff-port-734648: {Iface:virbr4 ExpiryTime:2024-04-01 20:23:29 +0000 UTC Type:0 Mac:52:54:00:49:dc:50 Iaid: IPaddr:192.168.61.145 Prefix:24 Hostname:default-k8s-diff-port-734648 Clientid:01:52:54:00:49:dc:50}
	I0401 19:36:41.198259   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | domain default-k8s-diff-port-734648 has defined IP address 192.168.61.145 and MAC address 52:54:00:49:dc:50 in network mk-default-k8s-diff-port-734648
	I0401 19:36:41.198296   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHPort
	I0401 19:36:41.198491   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHKeyPath
	I0401 19:36:41.198670   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .GetSSHUsername
	I0401 19:36:41.198857   70962 sshutil.go:53] new ssh client: &{IP:192.168.61.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/default-k8s-diff-port-734648/id_rsa Username:docker}
	I0401 19:36:41.308472   70962 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:36:41.334121   70962 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-734648" to be "Ready" ...
	I0401 19:36:41.343898   70962 node_ready.go:49] node "default-k8s-diff-port-734648" has status "Ready":"True"
	I0401 19:36:41.343943   70962 node_ready.go:38] duration metric: took 9.780821ms for node "default-k8s-diff-port-734648" to be "Ready" ...
	I0401 19:36:41.343952   70962 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:36:41.352294   70962 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.362318   70962 pod_ready.go:92] pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:41.362345   70962 pod_ready.go:81] duration metric: took 10.020335ms for pod "etcd-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.362358   70962 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.367338   70962 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:41.367356   70962 pod_ready.go:81] duration metric: took 4.990987ms for pod "kube-apiserver-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.367364   70962 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.372379   70962 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:41.372401   70962 pod_ready.go:81] duration metric: took 5.030239ms for pod "kube-controller-manager-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.372412   70962 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.377862   70962 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace has status "Ready":"True"
	I0401 19:36:41.377881   70962 pod_ready.go:81] duration metric: took 5.460968ms for pod "kube-scheduler-default-k8s-diff-port-734648" in "kube-system" namespace to be "Ready" ...
	I0401 19:36:41.377891   70962 pod_ready.go:38] duration metric: took 33.929349ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:36:41.377915   70962 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:36:41.377965   70962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:36:41.396518   70962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:36:41.407024   70962 api_server.go:72] duration metric: took 299.545156ms to wait for apiserver process to appear ...
	I0401 19:36:41.407049   70962 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:36:41.407068   70962 api_server.go:253] Checking apiserver healthz at https://192.168.61.145:8444/healthz ...
	I0401 19:36:41.411429   70962 api_server.go:279] https://192.168.61.145:8444/healthz returned 200:
	ok
	I0401 19:36:41.412620   70962 api_server.go:141] control plane version: v1.29.3
	I0401 19:36:41.412640   70962 api_server.go:131] duration metric: took 5.58478ms to wait for apiserver health ...
	I0401 19:36:41.412646   70962 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:36:41.426474   70962 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 19:36:41.426500   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 19:36:41.447003   70962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 19:36:41.470135   70962 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 19:36:41.470153   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 19:36:41.526684   70962 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:36:41.526710   70962 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 19:36:41.540871   70962 system_pods.go:59] 4 kube-system pods found
	I0401 19:36:41.540894   70962 system_pods.go:61] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:41.540900   70962 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:41.540905   70962 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:41.540908   70962 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:41.540914   70962 system_pods.go:74] duration metric: took 128.262683ms to wait for pod list to return data ...
	I0401 19:36:41.540920   70962 default_sa.go:34] waiting for default service account to be created ...
	I0401 19:36:41.625507   70962 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:36:41.750232   70962 default_sa.go:45] found service account: "default"
	I0401 19:36:41.750261   70962 default_sa.go:55] duration metric: took 209.334562ms for default service account to be created ...
	I0401 19:36:41.750273   70962 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 19:36:41.968623   70962 system_pods.go:86] 7 kube-system pods found
	I0401 19:36:41.968651   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Pending
	I0401 19:36:41.968657   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Pending
	I0401 19:36:41.968663   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:41.968669   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:41.968675   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:41.968683   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:36:41.968690   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:41.968712   70962 retry.go:31] will retry after 288.42332ms: missing components: kube-dns, kube-proxy
	I0401 19:36:42.231814   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.231848   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.231904   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.231925   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.232160   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Closing plugin on server side
	I0401 19:36:42.232161   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.232179   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.232187   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.232191   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Closing plugin on server side
	I0401 19:36:42.232199   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.232223   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.232235   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.232244   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.232255   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.232431   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.232478   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.232578   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Closing plugin on server side
	I0401 19:36:42.232612   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.232629   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.251515   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.251538   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.251795   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.251809   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.267102   70962 system_pods.go:86] 8 kube-system pods found
	I0401 19:36:42.267135   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:42.267148   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:42.267163   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:42.267181   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:42.267187   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:42.267196   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:36:42.267204   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:42.267222   70962 system_pods.go:89] "storage-provisioner" [8509e661-1b53-4018-b6b0-b6a5e242768d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:36:42.267244   70962 retry.go:31] will retry after 336.906399ms: missing components: kube-dns, kube-proxy
	I0401 19:36:42.632180   70962 system_pods.go:86] 9 kube-system pods found
	I0401 19:36:42.632212   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:42.632223   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:42.632232   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:42.632240   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:42.632247   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:42.632257   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:36:42.632264   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:42.632275   70962 system_pods.go:89] "metrics-server-57f55c9bc5-fj5x5" [e25fa51c-d80e-4ddc-898f-3b9903746537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:42.632289   70962 system_pods.go:89] "storage-provisioner" [8509e661-1b53-4018-b6b0-b6a5e242768d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:36:42.632313   70962 retry.go:31] will retry after 406.571029ms: missing components: kube-dns, kube-proxy
	I0401 19:36:42.739308   70962 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.113759645s)
	I0401 19:36:42.739364   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.739383   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.739822   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.739842   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.739859   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) DBG | Closing plugin on server side
	I0401 19:36:42.739867   70962 main.go:141] libmachine: Making call to close driver server
	I0401 19:36:42.739890   70962 main.go:141] libmachine: (default-k8s-diff-port-734648) Calling .Close
	I0401 19:36:42.740171   70962 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:36:42.740186   70962 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:36:42.740198   70962 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-734648"
	I0401 19:36:42.742233   70962 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0401 19:36:42.743265   70962 addons.go:505] duration metric: took 1.635721448s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0401 19:36:43.053149   70962 system_pods.go:86] 9 kube-system pods found
	I0401 19:36:43.053183   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:43.053195   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:43.053205   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:43.053215   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:43.053223   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:43.053235   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:36:43.053240   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:43.053249   70962 system_pods.go:89] "metrics-server-57f55c9bc5-fj5x5" [e25fa51c-d80e-4ddc-898f-3b9903746537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:43.053258   70962 system_pods.go:89] "storage-provisioner" [8509e661-1b53-4018-b6b0-b6a5e242768d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:36:43.053275   70962 retry.go:31] will retry after 524.250739ms: missing components: kube-dns, kube-proxy
	I0401 19:36:43.591419   70962 system_pods.go:86] 9 kube-system pods found
	I0401 19:36:43.591451   70962 system_pods.go:89] "coredns-76f75df574-lwsms" [9f432161-c5e3-42fa-8857-8e61959511b0] Running
	I0401 19:36:43.591463   70962 system_pods.go:89] "coredns-76f75df574-ws9cc" [65660abf-9856-4df4-a07b-854cfd8e3fc6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:36:43.591471   70962 system_pods.go:89] "etcd-default-k8s-diff-port-734648" [7b60f629-8a15-420e-936c-872a0d55ce74] Running
	I0401 19:36:43.591480   70962 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-734648" [811a3391-02c8-43dd-9129-3fc50a4fab41] Running
	I0401 19:36:43.591487   70962 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-734648" [4b57b14a-5f46-482f-8661-8fa500db5390] Running
	I0401 19:36:43.591493   70962 system_pods.go:89] "kube-proxy-p8wrc" [2f6b37e6-b3f9-44b6-8ff9-e8fd781ef1a3] Running
	I0401 19:36:43.591498   70962 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-734648" [e0fb5e6b-aaa8-45ba-9df9-be947cbbdb80] Running
	I0401 19:36:43.591508   70962 system_pods.go:89] "metrics-server-57f55c9bc5-fj5x5" [e25fa51c-d80e-4ddc-898f-3b9903746537] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:36:43.591517   70962 system_pods.go:89] "storage-provisioner" [8509e661-1b53-4018-b6b0-b6a5e242768d] Running
	I0401 19:36:43.591529   70962 system_pods.go:126] duration metric: took 1.841248999s to wait for k8s-apps to be running ...
	I0401 19:36:43.591561   70962 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 19:36:43.591613   70962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:36:43.611873   70962 system_svc.go:56] duration metric: took 20.296001ms WaitForService to wait for kubelet
	I0401 19:36:43.611907   70962 kubeadm.go:576] duration metric: took 2.504430824s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:36:43.611930   70962 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:36:43.617697   70962 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:36:43.617720   70962 node_conditions.go:123] node cpu capacity is 2
	I0401 19:36:43.617732   70962 node_conditions.go:105] duration metric: took 5.796357ms to run NodePressure ...
	I0401 19:36:43.617745   70962 start.go:240] waiting for startup goroutines ...
	I0401 19:36:43.617754   70962 start.go:245] waiting for cluster config update ...
	I0401 19:36:43.617765   70962 start.go:254] writing updated cluster config ...
	I0401 19:36:43.618023   70962 ssh_runner.go:195] Run: rm -f paused
	I0401 19:36:43.666581   70962 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0401 19:36:43.668685   70962 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-734648" cluster and "default" namespace by default
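The startup sequence above for the "default-k8s-diff-port-734648" profile waits for the kube-system pods (coredns, kube-proxy, storage-provisioner, metrics-server) to reach Running, checks the kubelet service and node capacity, then writes the cluster config. A minimal read-only way to repeat that check by hand, assuming the kubectl context written by the "Done!" line above is still current, would be:

    kubectl --context default-k8s-diff-port-734648 -n kube-system get pods
    minikube -p default-k8s-diff-port-734648 addons list    # should show storage-provisioner, default-storageclass and metrics-server enabled

The profile and context names are taken from the log above; both commands only inspect state.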
	I0401 19:36:42.505149   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:45.003855   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:47.004247   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:49.504898   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:51.505403   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:54.005163   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:56.503395   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:36:58.503791   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:00.504001   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:02.504193   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:05.003540   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:07.003582   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:09.503975   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:12.005037   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:14.503460   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:16.504630   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:19.004307   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:21.004909   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:23.503286   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:25.503469   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:27.503520   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:30.004792   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:32.503693   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:35.005137   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:37.504848   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:39.504961   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:41.510644   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:44.004680   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:46.005118   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:51.561231   71168 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 19:37:51.561356   71168 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0401 19:37:51.563350   71168 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0401 19:37:51.563417   71168 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:37:51.563497   71168 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:37:51.563596   71168 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:37:51.563711   71168 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:37:51.563797   71168 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:37:51.565710   71168 out.go:204]   - Generating certificates and keys ...
	I0401 19:37:51.565809   71168 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:37:51.565908   71168 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:37:51.566051   71168 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:37:51.566136   71168 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:37:51.566230   71168 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:37:51.566325   71168 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:37:51.566402   71168 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:37:51.566464   71168 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:37:51.566580   71168 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:37:51.566688   71168 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:37:51.566727   71168 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:37:51.566774   71168 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:37:51.566822   71168 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:37:51.566917   71168 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:37:51.567001   71168 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:37:51.567068   71168 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:37:51.567210   71168 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:37:51.567314   71168 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:37:51.567371   71168 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:37:51.567473   71168 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:37:48.504708   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:51.005355   70284 pod_ready.go:102] pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace has status "Ready":"False"
	I0401 19:37:51.569285   71168 out.go:204]   - Booting up control plane ...
	I0401 19:37:51.569394   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:37:51.569498   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:37:51.569568   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:37:51.569661   71168 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:37:51.569802   71168 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:37:51.569866   71168 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0401 19:37:51.569957   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.570195   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.570287   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.570514   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.570589   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.570769   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.570859   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.571033   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.571134   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:37:51.571342   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:37:51.571351   71168 kubeadm.go:309] 
	I0401 19:37:51.571394   71168 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0401 19:37:51.571453   71168 kubeadm.go:309] 		timed out waiting for the condition
	I0401 19:37:51.571475   71168 kubeadm.go:309] 
	I0401 19:37:51.571521   71168 kubeadm.go:309] 	This error is likely caused by:
	I0401 19:37:51.571558   71168 kubeadm.go:309] 		- The kubelet is not running
	I0401 19:37:51.571676   71168 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 19:37:51.571687   71168 kubeadm.go:309] 
	I0401 19:37:51.571824   71168 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 19:37:51.571880   71168 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0401 19:37:51.571921   71168 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0401 19:37:51.571931   71168 kubeadm.go:309] 
	I0401 19:37:51.572077   71168 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 19:37:51.572198   71168 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 19:37:51.572209   71168 kubeadm.go:309] 
	I0401 19:37:51.572359   71168 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 19:37:51.572477   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 19:37:51.572576   71168 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0401 19:37:51.572676   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 19:37:51.572731   71168 kubeadm.go:309] 
	W0401 19:37:51.572793   71168 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0401 19:37:51.572851   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
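The kubeadm failure above ends with kubeadm's own troubleshooting hints. Collected into one sequence (the commands are quoted verbatim from the error text, with crictl pointed at the cri-o socket used in this job), a manual check on the node would look like:

    systemctl status kubelet
    journalctl -xeu kubelet
    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # substitute the ID of the failing container found above

minikube's own recovery path is the kubeadm reset on the preceding line, followed by a second kubeadm init attempt further down.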
	I0401 19:37:52.428554   71168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:37:52.445151   71168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:37:52.456989   71168 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:37:52.457010   71168 kubeadm.go:156] found existing configuration files:
	
	I0401 19:37:52.457053   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:37:52.468305   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:37:52.468375   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:37:52.479305   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:37:52.489703   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:37:52.489753   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:37:52.501023   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:37:52.512418   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:37:52.512480   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:37:52.523850   71168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:37:52.534358   71168 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:37:52.534425   71168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
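The four grep/rm pairs above (admin.conf, kubelet.conf, controller-manager.conf, scheduler.conf) are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init is re-run. As a sketch, the same check condenses to a small shell loop (file names and endpoint taken from the log; the loop is illustrative, not minikube's code):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done

Here every grep fails because kubeadm reset already deleted the files, so all four paths are removed (harmlessly) and a fresh init follows.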
	I0401 19:37:52.546135   71168 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:37:52.779427   71168 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:37:52.997253   70284 pod_ready.go:81] duration metric: took 4m0.000092266s for pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace to be "Ready" ...
	E0401 19:37:52.997287   70284 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-wlr7k" in "kube-system" namespace to be "Ready" (will not retry!)
	I0401 19:37:52.997309   70284 pod_ready.go:38] duration metric: took 4m43.911595731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:37:52.997333   70284 kubeadm.go:591] duration metric: took 5m31.840082505s to restartPrimaryControlPlane
	W0401 19:37:52.997393   70284 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0401 19:37:52.997421   70284 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0401 19:38:25.458760   70284 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.46129187s)
	I0401 19:38:25.458845   70284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:38:25.476633   70284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:38:25.487615   70284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:38:25.498590   70284 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:38:25.498616   70284 kubeadm.go:156] found existing configuration files:
	
	I0401 19:38:25.498701   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:38:25.509063   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:38:25.509128   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:38:25.519806   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:38:25.530433   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:38:25.530488   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:38:25.540979   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:38:25.550786   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:38:25.550847   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:38:25.561979   70284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:38:25.571832   70284 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:38:25.571898   70284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:38:25.582501   70284 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0401 19:38:25.646956   70284 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.0
	I0401 19:38:25.647046   70284 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:38:25.825328   70284 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:38:25.825459   70284 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:38:25.825574   70284 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:38:26.066201   70284 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:38:26.069071   70284 out.go:204]   - Generating certificates and keys ...
	I0401 19:38:26.069170   70284 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:38:26.069260   70284 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:38:26.069402   70284 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:38:26.069493   70284 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:38:26.069588   70284 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:38:26.069703   70284 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:38:26.069765   70284 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:38:26.069822   70284 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:38:26.069986   70284 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:38:26.070644   70284 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:38:26.071149   70284 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:38:26.071308   70284 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:38:26.204651   70284 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:38:26.368926   70284 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 19:38:26.586004   70284 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:38:26.710851   70284 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:38:26.858015   70284 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:38:26.858741   70284 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:38:26.863879   70284 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:38:26.865794   70284 out.go:204]   - Booting up control plane ...
	I0401 19:38:26.865898   70284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:38:26.865984   70284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:38:26.866081   70284 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:38:26.886171   70284 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:38:26.887118   70284 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:38:26.887177   70284 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:38:27.021053   70284 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 19:38:27.021142   70284 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0401 19:38:28.023462   70284 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002303634s
	I0401 19:38:28.023549   70284 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 19:38:34.026967   70284 kubeadm.go:309] [api-check] The API server is healthy after 6.003391014s
	I0401 19:38:34.044095   70284 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 19:38:34.061716   70284 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 19:38:34.092708   70284 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 19:38:34.093037   70284 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-472858 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 19:38:34.111758   70284 kubeadm.go:309] [bootstrap-token] Using token: 45cmca.rj16278sw3ueq3us
	I0401 19:38:34.113211   70284 out.go:204]   - Configuring RBAC rules ...
	I0401 19:38:34.113333   70284 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 19:38:34.122292   70284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 19:38:34.133114   70284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 19:38:34.138441   70284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 19:38:34.143964   70284 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 19:38:34.148675   70284 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 19:38:34.438167   70284 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 19:38:34.885250   70284 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0401 19:38:35.439990   70284 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0401 19:38:35.441439   70284 kubeadm.go:309] 
	I0401 19:38:35.441532   70284 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0401 19:38:35.441545   70284 kubeadm.go:309] 
	I0401 19:38:35.441659   70284 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0401 19:38:35.441690   70284 kubeadm.go:309] 
	I0401 19:38:35.441752   70284 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0401 19:38:35.441845   70284 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 19:38:35.441930   70284 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 19:38:35.441938   70284 kubeadm.go:309] 
	I0401 19:38:35.442014   70284 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0401 19:38:35.442028   70284 kubeadm.go:309] 
	I0401 19:38:35.442067   70284 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 19:38:35.442073   70284 kubeadm.go:309] 
	I0401 19:38:35.442120   70284 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0401 19:38:35.442186   70284 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 19:38:35.442295   70284 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 19:38:35.442307   70284 kubeadm.go:309] 
	I0401 19:38:35.442426   70284 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 19:38:35.442552   70284 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0401 19:38:35.442565   70284 kubeadm.go:309] 
	I0401 19:38:35.442643   70284 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 45cmca.rj16278sw3ueq3us \
	I0401 19:38:35.442766   70284 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 \
	I0401 19:38:35.442803   70284 kubeadm.go:309] 	--control-plane 
	I0401 19:38:35.442813   70284 kubeadm.go:309] 
	I0401 19:38:35.442922   70284 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0401 19:38:35.442936   70284 kubeadm.go:309] 
	I0401 19:38:35.443008   70284 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 45cmca.rj16278sw3ueq3us \
	I0401 19:38:35.443097   70284 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b8a0197ad47aa27a5800307c57228d22e61e4d31af785fa8a896f2b7fab267b8 
	I0401 19:38:35.443436   70284 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
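kubeadm's success message above spells out the remaining manual steps: copy admin.conf into a user kubeconfig, then optionally join further control-plane or worker nodes with the printed token and CA hash. On a hand-managed node the sequence would be (commands copied from the output above, plus a read-only sanity check with kubectl):

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    kubectl get nodes    # the new control-plane node appears, NotReady until a CNI is configured

minikube handles the kubeconfig itself (see the "Updating kubeconfig" line further down) and moves straight on to CNI configuration.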
	I0401 19:38:35.443530   70284 cni.go:84] Creating CNI manager for ""
	I0401 19:38:35.443546   70284 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 19:38:35.445089   70284 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0401 19:38:35.446328   70284 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0401 19:38:35.459788   70284 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
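The 457-byte file pushed to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced on the previous lines. One way to confirm what actually landed on the node, assuming the profile name used later in this run, is:

    minikube -p no-preload-472858 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist

This is purely a read-back; the file contents themselves are not shown in the log.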
	I0401 19:38:35.486202   70284 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 19:38:35.486300   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:35.486308   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-472858 minikube.k8s.io/updated_at=2024_04_01T19_38_35_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=f5358d0432cb831273a488eed4dfd72793340bc2 minikube.k8s.io/name=no-preload-472858 minikube.k8s.io/primary=true
	I0401 19:38:35.700677   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:35.731567   70284 ops.go:34] apiserver oom_adj: -16
	I0401 19:38:36.200955   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:36.701003   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:37.201632   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:37.700719   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:38.201316   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:38.701334   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:39.201609   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:39.701034   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:40.201771   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:40.700786   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:41.201750   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:41.701709   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:42.201682   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:42.700838   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:43.201123   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:43.701587   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:44.200860   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:44.700795   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:45.200850   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:45.701273   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:46.201701   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:46.701450   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:47.201496   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:47.701351   70284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:38:47.800239   70284 kubeadm.go:1107] duration metric: took 12.313994383s to wait for elevateKubeSystemPrivileges
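The repeated "kubectl get sa default" calls above appear to be minikube polling for the default service account, which is what elevateKubeSystemPrivileges waits on after the earlier "create clusterrolebinding minikube-rbac" call. The equivalent manual checks, assuming the kubeconfig context written for this profile a few lines below, would be:

    kubectl --context no-preload-472858 -n default get serviceaccount default
    kubectl --context no-preload-472858 get clusterrolebinding minikube-rbac -o wide

Both are read-only and mirror what the 12-second loop above was waiting for.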
	W0401 19:38:47.800287   70284 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0401 19:38:47.800298   70284 kubeadm.go:393] duration metric: took 6m26.705086714s to StartCluster
	I0401 19:38:47.800320   70284 settings.go:142] acquiring lock: {Name:mk5cd3d9600680d3808ad7ff6310a5e71b09e71d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:38:47.800410   70284 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:38:47.802818   70284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/kubeconfig: {Name:mkbd988e40ba29769e9f8a43c4d876f38e957f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:38:47.803132   70284 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.119 Port:8443 KubernetesVersion:v1.30.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:38:47.805445   70284 out.go:177] * Verifying Kubernetes components...
	I0401 19:38:47.803273   70284 config.go:182] Loaded profile config "no-preload-472858": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.0
	I0401 19:38:47.803252   70284 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0401 19:38:47.806734   70284 addons.go:69] Setting storage-provisioner=true in profile "no-preload-472858"
	I0401 19:38:47.806761   70284 addons.go:69] Setting default-storageclass=true in profile "no-preload-472858"
	I0401 19:38:47.806774   70284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:38:47.806777   70284 addons.go:69] Setting metrics-server=true in profile "no-preload-472858"
	I0401 19:38:47.806802   70284 addons.go:234] Setting addon metrics-server=true in "no-preload-472858"
	W0401 19:38:47.806815   70284 addons.go:243] addon metrics-server should already be in state true
	I0401 19:38:47.806850   70284 host.go:66] Checking if "no-preload-472858" exists ...
	I0401 19:38:47.806802   70284 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-472858"
	I0401 19:38:47.806768   70284 addons.go:234] Setting addon storage-provisioner=true in "no-preload-472858"
	W0401 19:38:47.807229   70284 addons.go:243] addon storage-provisioner should already be in state true
	I0401 19:38:47.807257   70284 host.go:66] Checking if "no-preload-472858" exists ...
	I0401 19:38:47.807289   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.807332   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.807340   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.807366   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.807620   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.807690   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.823665   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38305
	I0401 19:38:47.823684   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35487
	I0401 19:38:47.824174   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.824205   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.824709   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.824732   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.824838   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.824867   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.825094   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.825276   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.825700   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.825746   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.825844   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.825866   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.826415   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38845
	I0401 19:38:47.826845   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.827305   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.827330   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.827800   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.828004   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:38:47.831735   70284 addons.go:234] Setting addon default-storageclass=true in "no-preload-472858"
	W0401 19:38:47.831760   70284 addons.go:243] addon default-storageclass should already be in state true
	I0401 19:38:47.831791   70284 host.go:66] Checking if "no-preload-472858" exists ...
	I0401 19:38:47.832170   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.832218   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.842050   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42037
	I0401 19:38:47.842479   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.842963   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.842983   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.843354   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.843513   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:38:47.845360   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:38:47.845430   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33357
	I0401 19:38:47.847622   70284 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:38:47.845959   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.847568   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38785
	I0401 19:38:47.849255   70284 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:38:47.849283   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 19:38:47.849303   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:38:47.849356   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.849524   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.849536   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.850173   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.850228   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.850238   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.850362   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:38:47.851206   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.851773   70284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 19:38:47.851803   70284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 19:38:47.852404   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:38:47.854167   70284 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 19:38:47.853141   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.853926   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:38:47.855729   70284 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 19:38:47.855746   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 19:38:47.855763   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:38:47.855728   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:38:47.855809   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.855854   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:38:47.856000   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:38:47.856160   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:38:47.858726   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.859782   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:38:47.859826   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.859948   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:38:47.860138   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:38:47.860310   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:38:47.860593   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:38:47.870182   70284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34517
	I0401 19:38:47.870616   70284 main.go:141] libmachine: () Calling .GetVersion
	I0401 19:38:47.871182   70284 main.go:141] libmachine: Using API Version  1
	I0401 19:38:47.871203   70284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 19:38:47.871561   70284 main.go:141] libmachine: () Calling .GetMachineName
	I0401 19:38:47.871947   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetState
	I0401 19:38:47.873606   70284 main.go:141] libmachine: (no-preload-472858) Calling .DriverName
	I0401 19:38:47.873931   70284 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 19:38:47.873949   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 19:38:47.873967   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHHostname
	I0401 19:38:47.876826   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.877259   70284 main.go:141] libmachine: (no-preload-472858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:2e:03", ip: ""} in network mk-no-preload-472858: {Iface:virbr3 ExpiryTime:2024-04-01 20:31:54 +0000 UTC Type:0 Mac:52:54:00:0a:2e:03 Iaid: IPaddr:192.168.72.119 Prefix:24 Hostname:no-preload-472858 Clientid:01:52:54:00:0a:2e:03}
	I0401 19:38:47.877286   70284 main.go:141] libmachine: (no-preload-472858) DBG | domain no-preload-472858 has defined IP address 192.168.72.119 and MAC address 52:54:00:0a:2e:03 in network mk-no-preload-472858
	I0401 19:38:47.877389   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHPort
	I0401 19:38:47.877672   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHKeyPath
	I0401 19:38:47.877816   70284 main.go:141] libmachine: (no-preload-472858) Calling .GetSSHUsername
	I0401 19:38:47.877974   70284 sshutil.go:53] new ssh client: &{IP:192.168.72.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/no-preload-472858/id_rsa Username:docker}
	I0401 19:38:48.053731   70284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:38:48.081160   70284 node_ready.go:35] waiting up to 6m0s for node "no-preload-472858" to be "Ready" ...
	I0401 19:38:48.107976   70284 node_ready.go:49] node "no-preload-472858" has status "Ready":"True"
	I0401 19:38:48.107998   70284 node_ready.go:38] duration metric: took 26.793115ms for node "no-preload-472858" to be "Ready" ...
	I0401 19:38:48.108009   70284 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:38:48.115968   70284 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.158349   70284 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 19:38:48.158383   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 19:38:48.166047   70284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 19:38:48.181902   70284 pod_ready.go:92] pod "etcd-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:38:48.181922   70284 pod_ready.go:81] duration metric: took 65.920299ms for pod "etcd-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.181935   70284 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.199372   70284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:38:48.232110   70284 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 19:38:48.232140   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 19:38:48.251891   70284 pod_ready.go:92] pod "kube-apiserver-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:38:48.251914   70284 pod_ready.go:81] duration metric: took 69.970077ms for pod "kube-apiserver-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.251929   70284 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.309605   70284 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:38:48.309627   70284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 19:38:48.325907   70284 pod_ready.go:92] pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:38:48.325928   70284 pod_ready.go:81] duration metric: took 73.991711ms for pod "kube-controller-manager-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.325938   70284 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.373418   70284 pod_ready.go:92] pod "kube-scheduler-no-preload-472858" in "kube-system" namespace has status "Ready":"True"
	I0401 19:38:48.373448   70284 pod_ready.go:81] duration metric: took 47.503272ms for pod "kube-scheduler-no-preload-472858" in "kube-system" namespace to be "Ready" ...
	I0401 19:38:48.373456   70284 pod_ready.go:38] duration metric: took 265.436317ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:38:48.373479   70284 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:38:48.373543   70284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:38:48.396444   70284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:38:48.564838   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.564860   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.565180   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:48.565197   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.565227   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.565247   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.565258   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.565489   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.565506   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.579332   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.579355   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.579599   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:48.579637   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.579645   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.884887   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.884920   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.884938   70284 api_server.go:72] duration metric: took 1.08176251s to wait for apiserver process to appear ...
	I0401 19:38:48.884958   70284 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:38:48.885018   70284 api_server.go:253] Checking apiserver healthz at https://192.168.72.119:8443/healthz ...
	I0401 19:38:48.885232   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.885252   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.885260   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:48.885269   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:48.885236   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:48.885519   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:48.887182   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:48.885555   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:48.895737   70284 api_server.go:279] https://192.168.72.119:8443/healthz returned 200:
	ok
	I0401 19:38:48.899521   70284 api_server.go:141] control plane version: v1.30.0-rc.0
	I0401 19:38:48.899539   70284 api_server.go:131] duration metric: took 14.574989ms to wait for apiserver health ...
	I0401 19:38:48.899547   70284 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:38:48.914064   70284 system_pods.go:59] 8 kube-system pods found
	I0401 19:38:48.914090   70284 system_pods.go:61] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:48.914106   70284 system_pods.go:61] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:48.914112   70284 system_pods.go:61] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:48.914117   70284 system_pods.go:61] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:48.914122   70284 system_pods.go:61] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:48.914126   70284 system_pods.go:61] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:48.914134   70284 system_pods.go:61] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:48.914138   70284 system_pods.go:61] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending
	I0401 19:38:48.914146   70284 system_pods.go:74] duration metric: took 14.594359ms to wait for pod list to return data ...
	I0401 19:38:48.914156   70284 default_sa.go:34] waiting for default service account to be created ...
	I0401 19:38:48.924790   70284 default_sa.go:45] found service account: "default"
	I0401 19:38:48.924814   70284 default_sa.go:55] duration metric: took 10.649887ms for default service account to be created ...
	I0401 19:38:48.924825   70284 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 19:38:48.930993   70284 system_pods.go:86] 8 kube-system pods found
	I0401 19:38:48.931020   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:48.931037   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:48.931047   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:48.931056   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:48.931066   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:48.931074   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:48.931089   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:48.931098   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:48.931117   70284 retry.go:31] will retry after 297.45527ms: missing components: kube-dns, kube-proxy
	I0401 19:38:49.123999   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:49.124019   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:49.124344   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:49.124394   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:49.124406   70284 main.go:141] libmachine: Making call to close driver server
	I0401 19:38:49.124414   70284 main.go:141] libmachine: (no-preload-472858) Calling .Close
	I0401 19:38:49.124356   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:49.124627   70284 main.go:141] libmachine: (no-preload-472858) DBG | Closing plugin on server side
	I0401 19:38:49.124661   70284 main.go:141] libmachine: Successfully made call to close driver server
	I0401 19:38:49.124677   70284 main.go:141] libmachine: Making call to close connection to plugin binary
	I0401 19:38:49.124690   70284 addons.go:470] Verifying addon metrics-server=true in "no-preload-472858"
	I0401 19:38:49.127415   70284 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0401 19:38:49.129047   70284 addons.go:505] duration metric: took 1.325796036s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0401 19:38:49.236094   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:49.236127   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.236136   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.236145   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:49.236152   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:49.236159   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:49.236168   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:49.236175   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:49.236185   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:49.236198   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:49.236218   70284 retry.go:31] will retry after 287.299528ms: missing components: kube-dns, kube-proxy
	I0401 19:38:49.530606   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:49.530643   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.530654   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.530663   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:49.530670   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:49.530678   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:49.530687   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:49.530697   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:49.530711   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:49.530721   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:49.530744   70284 retry.go:31] will retry after 435.286919ms: missing components: kube-dns, kube-proxy
	I0401 19:38:49.974049   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:49.974090   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.974103   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:49.974113   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:49.974121   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:49.974128   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:49.974142   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:49.974153   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:49.974168   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:49.974181   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:49.974203   70284 retry.go:31] will retry after 577.959209ms: missing components: kube-dns, kube-proxy
	I0401 19:38:50.558750   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:50.558780   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:50.558787   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:50.558795   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:50.558805   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:50.558812   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:50.558820   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 19:38:50.558833   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:50.558840   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:50.558846   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0401 19:38:50.558863   70284 retry.go:31] will retry after 723.380101ms: missing components: kube-dns, kube-proxy
	I0401 19:38:51.291450   70284 system_pods.go:86] 9 kube-system pods found
	I0401 19:38:51.291487   70284 system_pods.go:89] "coredns-7db6d8ff4d-8285w" [c450ac4a-974e-4322-9857-fb65792a142b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0401 19:38:51.291498   70284 system_pods.go:89] "coredns-7db6d8ff4d-wmbsp" [7a73f081-42f4-4854-8785-25e54eb0a391] Running
	I0401 19:38:51.291508   70284 system_pods.go:89] "etcd-no-preload-472858" [d96862c6-4b97-4239-a79a-e877f2825eb6] Running
	I0401 19:38:51.291514   70284 system_pods.go:89] "kube-apiserver-no-preload-472858" [78418540-b912-4457-98ef-94cf57cf9379] Running
	I0401 19:38:51.291521   70284 system_pods.go:89] "kube-controller-manager-no-preload-472858" [4a48aaa7-c47f-4d1f-aace-f02d2f24c791] Running
	I0401 19:38:51.291527   70284 system_pods.go:89] "kube-proxy-5dmtl" [c243321b-b01a-4fd5-895a-888d18ee8527] Running
	I0401 19:38:51.291532   70284 system_pods.go:89] "kube-scheduler-no-preload-472858" [3564e7d0-f6cc-4584-a2cc-39fc6f884836] Running
	I0401 19:38:51.291543   70284 system_pods.go:89] "metrics-server-569cc877fc-wj2tt" [5259722c-3d0b-468f-b941-419806e91177] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0401 19:38:51.291551   70284 system_pods.go:89] "storage-provisioner" [844e010a-3bee-4fd1-942f-10fa50306617] Running
	I0401 19:38:51.291559   70284 system_pods.go:126] duration metric: took 2.366728733s to wait for k8s-apps to be running ...
	I0401 19:38:51.291576   70284 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 19:38:51.291622   70284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:38:51.310224   70284 system_svc.go:56] duration metric: took 18.63923ms WaitForService to wait for kubelet
	I0401 19:38:51.310250   70284 kubeadm.go:576] duration metric: took 3.50708191s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:38:51.310269   70284 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:38:51.312899   70284 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0401 19:38:51.312919   70284 node_conditions.go:123] node cpu capacity is 2
	I0401 19:38:51.312930   70284 node_conditions.go:105] duration metric: took 2.654739ms to run NodePressure ...
	I0401 19:38:51.312945   70284 start.go:240] waiting for startup goroutines ...
	I0401 19:38:51.312958   70284 start.go:245] waiting for cluster config update ...
	I0401 19:38:51.312985   70284 start.go:254] writing updated cluster config ...
	I0401 19:38:51.313269   70284 ssh_runner.go:195] Run: rm -f paused
	I0401 19:38:51.365041   70284 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.0 (minor skew: 1)
	I0401 19:38:51.367173   70284 out.go:177] * Done! kubectl is now configured to use "no-preload-472858" cluster and "default" namespace by default
	I0401 19:39:48.856665   71168 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0401 19:39:48.856779   71168 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0401 19:39:48.858840   71168 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0401 19:39:48.858896   71168 kubeadm.go:309] [preflight] Running pre-flight checks
	I0401 19:39:48.858987   71168 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:39:48.859122   71168 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:39:48.859222   71168 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 19:39:48.859314   71168 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:39:48.861104   71168 out.go:204]   - Generating certificates and keys ...
	I0401 19:39:48.861202   71168 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0401 19:39:48.861277   71168 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0401 19:39:48.861381   71168 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0401 19:39:48.861492   71168 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0401 19:39:48.861596   71168 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0401 19:39:48.861699   71168 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0401 19:39:48.861791   71168 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0401 19:39:48.861897   71168 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0401 19:39:48.862009   71168 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0401 19:39:48.862118   71168 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0401 19:39:48.862176   71168 kubeadm.go:309] [certs] Using the existing "sa" key
	I0401 19:39:48.862260   71168 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:39:48.862338   71168 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:39:48.862420   71168 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:39:48.862480   71168 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:39:48.862527   71168 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:39:48.862618   71168 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:39:48.862693   71168 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:39:48.862734   71168 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0401 19:39:48.862804   71168 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:39:48.864199   71168 out.go:204]   - Booting up control plane ...
	I0401 19:39:48.864291   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:39:48.864359   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:39:48.864420   71168 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:39:48.864504   71168 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:39:48.864712   71168 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 19:39:48.864788   71168 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0401 19:39:48.864871   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865069   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.865153   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865344   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.865453   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865674   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.865755   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.865989   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.866095   71168 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0401 19:39:48.866269   71168 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0401 19:39:48.866285   71168 kubeadm.go:309] 
	I0401 19:39:48.866343   71168 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0401 19:39:48.866402   71168 kubeadm.go:309] 		timed out waiting for the condition
	I0401 19:39:48.866414   71168 kubeadm.go:309] 
	I0401 19:39:48.866458   71168 kubeadm.go:309] 	This error is likely caused by:
	I0401 19:39:48.866506   71168 kubeadm.go:309] 		- The kubelet is not running
	I0401 19:39:48.866651   71168 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0401 19:39:48.866665   71168 kubeadm.go:309] 
	I0401 19:39:48.866816   71168 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0401 19:39:48.866865   71168 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0401 19:39:48.866895   71168 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0401 19:39:48.866901   71168 kubeadm.go:309] 
	I0401 19:39:48.866989   71168 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0401 19:39:48.867061   71168 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0401 19:39:48.867070   71168 kubeadm.go:309] 
	I0401 19:39:48.867194   71168 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0401 19:39:48.867327   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0401 19:39:48.867417   71168 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0401 19:39:48.867526   71168 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0401 19:39:48.867555   71168 kubeadm.go:309] 
	I0401 19:39:48.867633   71168 kubeadm.go:393] duration metric: took 7m58.404831893s to StartCluster
	I0401 19:39:48.867702   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:39:48.867764   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:39:48.922329   71168 cri.go:89] found id: ""
	I0401 19:39:48.922359   71168 logs.go:276] 0 containers: []
	W0401 19:39:48.922369   71168 logs.go:278] No container was found matching "kube-apiserver"
	I0401 19:39:48.922377   71168 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:39:48.922435   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:39:48.966212   71168 cri.go:89] found id: ""
	I0401 19:39:48.966235   71168 logs.go:276] 0 containers: []
	W0401 19:39:48.966243   71168 logs.go:278] No container was found matching "etcd"
	I0401 19:39:48.966248   71168 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:39:48.966309   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:39:49.015141   71168 cri.go:89] found id: ""
	I0401 19:39:49.015171   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.015182   71168 logs.go:278] No container was found matching "coredns"
	I0401 19:39:49.015189   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:39:49.015249   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:39:49.053042   71168 cri.go:89] found id: ""
	I0401 19:39:49.053067   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.053077   71168 logs.go:278] No container was found matching "kube-scheduler"
	I0401 19:39:49.053085   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:39:49.053144   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:39:49.093880   71168 cri.go:89] found id: ""
	I0401 19:39:49.093906   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.093914   71168 logs.go:278] No container was found matching "kube-proxy"
	I0401 19:39:49.093923   71168 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:39:49.093976   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:39:49.129730   71168 cri.go:89] found id: ""
	I0401 19:39:49.129752   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.129760   71168 logs.go:278] No container was found matching "kube-controller-manager"
	I0401 19:39:49.129766   71168 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:39:49.129818   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:39:49.171075   71168 cri.go:89] found id: ""
	I0401 19:39:49.171107   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.171118   71168 logs.go:278] No container was found matching "kindnet"
	I0401 19:39:49.171125   71168 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0401 19:39:49.171204   71168 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0401 19:39:49.208279   71168 cri.go:89] found id: ""
	I0401 19:39:49.208308   71168 logs.go:276] 0 containers: []
	W0401 19:39:49.208319   71168 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0401 19:39:49.208330   71168 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:39:49.208345   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0401 19:39:49.294128   71168 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0401 19:39:49.294148   71168 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:39:49.294162   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:39:49.400930   71168 logs.go:123] Gathering logs for container status ...
	I0401 19:39:49.400963   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:39:49.443111   71168 logs.go:123] Gathering logs for kubelet ...
	I0401 19:39:49.443140   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:39:49.501382   71168 logs.go:123] Gathering logs for dmesg ...
	I0401 19:39:49.501417   71168 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0401 19:39:49.516418   71168 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0401 19:39:49.516461   71168 out.go:239] * 
	W0401 19:39:49.516521   71168 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 19:39:49.516591   71168 out.go:239] * 
	W0401 19:39:49.517377   71168 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 19:39:49.520389   71168 out.go:177] 
	W0401 19:39:49.521593   71168 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0401 19:39:49.521639   71168 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0401 19:39:49.521686   71168 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0401 19:39:49.523181   71168 out.go:177] 
	
	
	==> CRI-O <==
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.783057799Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712001079783026140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7da13b3-185f-4440-ae4c-8119a004ceef name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.783967465Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=745c975a-717f-4702-ab95-bd9d4adc054a name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.784017896Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=745c975a-717f-4702-ab95-bd9d4adc054a name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.784057482Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=745c975a-717f-4702-ab95-bd9d4adc054a name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.822882338Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2230c090-911b-4d6c-8380-71816f7e4e01 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.822966570Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2230c090-911b-4d6c-8380-71816f7e4e01 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.824660757Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3461c21b-5d64-4d77-aaac-0316ef5ab176 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.825408698Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712001079825349522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3461c21b-5d64-4d77-aaac-0316ef5ab176 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.826504196Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a3336df3-2f30-49c3-905a-f3d2b1e79356 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.826585345Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a3336df3-2f30-49c3-905a-f3d2b1e79356 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.826646486Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a3336df3-2f30-49c3-905a-f3d2b1e79356 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.863936508Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3949bd61-6a7b-4672-8c2d-4eeee2e4d2b6 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.864036047Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3949bd61-6a7b-4672-8c2d-4eeee2e4d2b6 name=/runtime.v1.RuntimeService/Version
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.866116554Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e12c77e-f38a-4774-82ff-16437bd8667f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.866540780Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712001079866516409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e12c77e-f38a-4774-82ff-16437bd8667f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.867365237Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a738d8e-0cf5-4ff8-bdd4-ee6fb81f8bb8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.867464737Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a738d8e-0cf5-4ff8-bdd4-ee6fb81f8bb8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.867508988Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0a738d8e-0cf5-4ff8-bdd4-ee6fb81f8bb8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.905375046Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=44be34e4-f89a-4c1d-918e-4a0fe6db424d name=/runtime.v1.RuntimeService/Version
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.905447954Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=44be34e4-f89a-4c1d-918e-4a0fe6db424d name=/runtime.v1.RuntimeService/Version
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.906846275Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3264a9d5-b2ac-49f5-9ab7-1eb9dccf4c50 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.907263776Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1712001079907242726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3264a9d5-b2ac-49f5-9ab7-1eb9dccf4c50 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.907941422Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d90f2318-770d-405a-a9fd-b501b3f3abfb name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.908029438Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d90f2318-770d-405a-a9fd-b501b3f3abfb name=/runtime.v1.RuntimeService/ListContainers
	Apr 01 19:51:19 old-k8s-version-163608 crio[649]: time="2024-04-01 19:51:19.908069269Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d90f2318-770d-405a-a9fd-b501b3f3abfb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 1 19:31] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054895] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048499] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.863744] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.552305] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.682250] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.094710] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.062423] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068466] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.188548] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.189826] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.311320] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +7.231097] systemd-fstab-generator[843]: Ignoring "noauto" option for root device
	[  +0.070737] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.974260] systemd-fstab-generator[968]: Ignoring "noauto" option for root device
	[Apr 1 19:32] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 1 19:35] systemd-fstab-generator[4981]: Ignoring "noauto" option for root device
	[Apr 1 19:37] systemd-fstab-generator[5267]: Ignoring "noauto" option for root device
	[  +0.080693] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:51:20 up 19 min,  0 users,  load average: 0.18, 0.08, 0.06
	Linux old-k8s-version-163608 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 01 19:51:14 old-k8s-version-163608 kubelet[6756]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Apr 01 19:51:14 old-k8s-version-163608 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 01 19:51:14 old-k8s-version-163608 kubelet[6756]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000d289c0, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000bbe2d0, 0x24, 0x0, ...)
	Apr 01 19:51:14 old-k8s-version-163608 kubelet[6756]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Apr 01 19:51:14 old-k8s-version-163608 kubelet[6756]: net.(*Dialer).DialContext(0xc000c62ae0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000bbe2d0, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 01 19:51:14 old-k8s-version-163608 kubelet[6756]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Apr 01 19:51:14 old-k8s-version-163608 kubelet[6756]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000c6a800, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000bbe2d0, 0x24, 0x60, 0x7f203bd66d90, 0x118, ...)
	Apr 01 19:51:14 old-k8s-version-163608 kubelet[6756]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 01 19:51:14 old-k8s-version-163608 kubelet[6756]: net/http.(*Transport).dial(0xc000650000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000bbe2d0, 0x24, 0x0, 0x0, 0x4f0b860, ...)
	Apr 01 19:51:14 old-k8s-version-163608 kubelet[6756]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 01 19:51:14 old-k8s-version-163608 kubelet[6756]: net/http.(*Transport).dialConn(0xc000650000, 0x4f7fe00, 0xc000052030, 0x0, 0xc00009e780, 0x5, 0xc000bbe2d0, 0x24, 0x0, 0xc000b9e120, ...)
	Apr 01 19:51:14 old-k8s-version-163608 kubelet[6756]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 01 19:51:14 old-k8s-version-163608 kubelet[6756]: net/http.(*Transport).dialConnFor(0xc000650000, 0xc000d1c160)
	Apr 01 19:51:14 old-k8s-version-163608 kubelet[6756]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 01 19:51:14 old-k8s-version-163608 kubelet[6756]: created by net/http.(*Transport).queueForDial
	Apr 01 19:51:14 old-k8s-version-163608 kubelet[6756]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 01 19:51:14 old-k8s-version-163608 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 01 19:51:15 old-k8s-version-163608 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 138.
	Apr 01 19:51:15 old-k8s-version-163608 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 01 19:51:15 old-k8s-version-163608 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 01 19:51:15 old-k8s-version-163608 kubelet[6765]: I0401 19:51:15.591546    6765 server.go:416] Version: v1.20.0
	Apr 01 19:51:15 old-k8s-version-163608 kubelet[6765]: I0401 19:51:15.591929    6765 server.go:837] Client rotation is on, will bootstrap in background
	Apr 01 19:51:15 old-k8s-version-163608 kubelet[6765]: I0401 19:51:15.594105    6765 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 01 19:51:15 old-k8s-version-163608 kubelet[6765]: W0401 19:51:15.595114    6765 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 01 19:51:15 old-k8s-version-163608 kubelet[6765]: I0401 19:51:15.595446    6765 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-163608 -n old-k8s-version-163608
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-163608 -n old-k8s-version-163608: exit status 2 (287.598162ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-163608" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (144.88s)
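The failure above follows the K8S_KUBELET_NOT_RUNNING path: kubeadm never sees a healthy kubelet on :10248, and the captured log suggests checking the kubelet journal and pinning its cgroup driver. A minimal sketch of that remediation, assuming the same profile name (old-k8s-version-163608) and the kvm2/crio flags used in this run (the test's exact start arguments may differ), would be:

	# Inspect why the kubelet keeps restarting (the journal above shows the restart counter at 138)
	out/minikube-linux-amd64 -p old-k8s-version-163608 ssh "sudo journalctl -xeu kubelet"
	# Retry the start with the kubelet cgroup driver forced to systemd, as the log's suggestion indicates
	out/minikube-linux-amd64 start -p old-k8s-version-163608 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd

This only restates the suggestion printed in the failure output; it is not part of the recorded test run.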

                                                
                                    

Test pass (249/319)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.07
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.29.3/json-events 4.34
13 TestDownloadOnly/v1.29.3/preload-exists 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.07
18 TestDownloadOnly/v1.29.3/DeleteAll 0.13
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.30.0-rc.0/json-events 4.32
22 TestDownloadOnly/v1.30.0-rc.0/preload-exists 0
26 TestDownloadOnly/v1.30.0-rc.0/LogsDuration 0.07
27 TestDownloadOnly/v1.30.0-rc.0/DeleteAll 0.13
28 TestDownloadOnly/v1.30.0-rc.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.55
31 TestOffline 155.99
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 141.56
38 TestAddons/parallel/Registry 14.19
40 TestAddons/parallel/InspektorGadget 12.25
41 TestAddons/parallel/MetricsServer 7.11
42 TestAddons/parallel/HelmTiller 10.82
44 TestAddons/parallel/CSI 56.09
45 TestAddons/parallel/Headlamp 14.02
47 TestAddons/parallel/LocalPath 53.46
48 TestAddons/parallel/NvidiaDevicePlugin 6.6
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.11
54 TestCertOptions 60.52
55 TestCertExpiration 318.84
57 TestForceSystemdFlag 79.86
58 TestForceSystemdEnv 75.34
60 TestKVMDriverInstallOrUpdate 1.1
64 TestErrorSpam/setup 46.65
65 TestErrorSpam/start 0.36
66 TestErrorSpam/status 0.76
67 TestErrorSpam/pause 1.64
68 TestErrorSpam/unpause 1.74
69 TestErrorSpam/stop 5.77
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 99.13
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 42.35
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.15
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.26
81 TestFunctional/serial/CacheCmd/cache/add_local 1.12
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
83 TestFunctional/serial/CacheCmd/cache/list 0.05
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
86 TestFunctional/serial/CacheCmd/cache/delete 0.11
87 TestFunctional/serial/MinikubeKubectlCmd 0.11
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
89 TestFunctional/serial/ExtraConfig 50.81
90 TestFunctional/serial/ComponentHealth 0.06
91 TestFunctional/serial/LogsCmd 1.58
92 TestFunctional/serial/LogsFileCmd 1.55
93 TestFunctional/serial/InvalidService 4.27
95 TestFunctional/parallel/ConfigCmd 0.42
96 TestFunctional/parallel/DashboardCmd 13.58
97 TestFunctional/parallel/DryRun 0.29
98 TestFunctional/parallel/InternationalLanguage 0.15
99 TestFunctional/parallel/StatusCmd 1.06
103 TestFunctional/parallel/ServiceCmdConnect 11.99
104 TestFunctional/parallel/AddonsCmd 0.16
105 TestFunctional/parallel/PersistentVolumeClaim 34.88
107 TestFunctional/parallel/SSHCmd 0.48
108 TestFunctional/parallel/CpCmd 1.52
109 TestFunctional/parallel/MySQL 23.59
110 TestFunctional/parallel/FileSync 0.2
111 TestFunctional/parallel/CertSync 1.7
115 TestFunctional/parallel/NodeLabels 0.06
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.5
119 TestFunctional/parallel/License 0.36
120 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
121 TestFunctional/parallel/ImageCommands/ImageListTable 0.47
122 TestFunctional/parallel/ImageCommands/ImageListJson 0.4
123 TestFunctional/parallel/ImageCommands/ImageListYaml 0.56
125 TestFunctional/parallel/ImageCommands/Setup 1.01
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.38
136 TestFunctional/parallel/ServiceCmd/DeployApp 11.41
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.67
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.18
139 TestFunctional/parallel/ServiceCmd/List 0.38
140 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
141 TestFunctional/parallel/ServiceCmd/JSONOutput 0.31
142 TestFunctional/parallel/ProfileCmd/profile_list 0.34
143 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
145 TestFunctional/parallel/ServiceCmd/Format 0.36
146 TestFunctional/parallel/MountCmd/any-port 8.06
147 TestFunctional/parallel/ServiceCmd/URL 0.37
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.71
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.65
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.42
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 4.96
152 TestFunctional/parallel/MountCmd/specific-port 2.17
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.46
154 TestFunctional/parallel/Version/short 0.05
155 TestFunctional/parallel/Version/components 0.72
156 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
157 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
158 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.01
165 TestMultiControlPlane/serial/StartCluster 204.73
166 TestMultiControlPlane/serial/DeployApp 5.21
167 TestMultiControlPlane/serial/PingHostFromPods 1.39
168 TestMultiControlPlane/serial/AddWorkerNode 47.19
169 TestMultiControlPlane/serial/NodeLabels 0.07
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.57
171 TestMultiControlPlane/serial/CopyFile 13.62
173 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.49
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
177 TestMultiControlPlane/serial/DeleteSecondaryNode 17.63
178 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.4
180 TestMultiControlPlane/serial/RestartCluster 374.67
181 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.41
182 TestMultiControlPlane/serial/AddSecondaryNode 74.53
183 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.54
187 TestJSONOutput/start/Command 97.41
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.81
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.7
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.5
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.2
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 96.11
219 TestMountStart/serial/StartWithMountFirst 26.81
220 TestMountStart/serial/VerifyMountFirst 0.39
221 TestMountStart/serial/StartWithMountSecond 32.15
222 TestMountStart/serial/VerifyMountSecond 0.5
223 TestMountStart/serial/DeleteFirst 0.73
224 TestMountStart/serial/VerifyMountPostDelete 0.39
225 TestMountStart/serial/Stop 1.36
226 TestMountStart/serial/RestartStopped 25.66
227 TestMountStart/serial/VerifyMountPostStop 0.38
230 TestMultiNode/serial/FreshStart2Nodes 100.74
231 TestMultiNode/serial/DeployApp2Nodes 4
232 TestMultiNode/serial/PingHostFrom2Pods 0.88
233 TestMultiNode/serial/AddNode 39.2
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.23
236 TestMultiNode/serial/CopyFile 7.48
237 TestMultiNode/serial/StopNode 2.5
238 TestMultiNode/serial/StartAfterStop 29.09
240 TestMultiNode/serial/DeleteNode 2.38
242 TestMultiNode/serial/RestartMultiNode 165.69
243 TestMultiNode/serial/ValidateNameConflict 43.14
250 TestScheduledStopUnix 117.79
254 TestRunningBinaryUpgrade 198.88
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
260 TestNoKubernetes/serial/StartWithK8s 125.11
261 TestNoKubernetes/serial/StartWithStopK8s 13.28
262 TestNoKubernetes/serial/Start 30.93
270 TestNetworkPlugins/group/false 3.14
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
275 TestNoKubernetes/serial/ProfileList 0.83
276 TestNoKubernetes/serial/Stop 1.62
277 TestNoKubernetes/serial/StartNoArgs 44.92
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
279 TestStoppedBinaryUpgrade/Setup 0.48
280 TestStoppedBinaryUpgrade/Upgrade 171.75
289 TestPause/serial/Start 98.24
290 TestNetworkPlugins/group/auto/Start 104.64
291 TestStoppedBinaryUpgrade/MinikubeLogs 0.9
292 TestNetworkPlugins/group/kindnet/Start 93.36
294 TestNetworkPlugins/group/auto/KubeletFlags 0.21
295 TestNetworkPlugins/group/auto/NetCatPod 10.25
296 TestNetworkPlugins/group/auto/DNS 0.18
297 TestNetworkPlugins/group/auto/Localhost 0.16
298 TestNetworkPlugins/group/auto/HairPin 0.17
299 TestNetworkPlugins/group/calico/Start 92.67
300 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
301 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
302 TestNetworkPlugins/group/kindnet/NetCatPod 11.28
303 TestNetworkPlugins/group/custom-flannel/Start 99.66
304 TestNetworkPlugins/group/kindnet/DNS 0.15
305 TestNetworkPlugins/group/kindnet/Localhost 0.13
306 TestNetworkPlugins/group/kindnet/HairPin 0.14
307 TestNetworkPlugins/group/enable-default-cni/Start 129.05
308 TestNetworkPlugins/group/calico/ControllerPod 6.01
309 TestNetworkPlugins/group/calico/KubeletFlags 0.26
310 TestNetworkPlugins/group/calico/NetCatPod 13.27
311 TestNetworkPlugins/group/calico/DNS 0.19
312 TestNetworkPlugins/group/calico/Localhost 0.19
313 TestNetworkPlugins/group/calico/HairPin 0.15
314 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
315 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.28
316 TestNetworkPlugins/group/custom-flannel/DNS 0.23
317 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
318 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
319 TestNetworkPlugins/group/flannel/Start 80.52
320 TestNetworkPlugins/group/bridge/Start 127.87
321 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
322 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.33
323 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
324 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
325 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
329 TestStartStop/group/no-preload/serial/FirstStart 113.35
330 TestNetworkPlugins/group/flannel/ControllerPod 6.01
331 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
332 TestNetworkPlugins/group/flannel/NetCatPod 10.22
333 TestNetworkPlugins/group/flannel/DNS 0.23
334 TestNetworkPlugins/group/flannel/Localhost 0.14
335 TestNetworkPlugins/group/flannel/HairPin 0.17
337 TestStartStop/group/embed-certs/serial/FirstStart 107.57
338 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
339 TestNetworkPlugins/group/bridge/NetCatPod 10.21
340 TestNetworkPlugins/group/bridge/DNS 0.18
341 TestNetworkPlugins/group/bridge/Localhost 0.19
342 TestNetworkPlugins/group/bridge/HairPin 0.19
344 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 103.46
345 TestStartStop/group/no-preload/serial/DeployApp 9.33
346 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.03
348 TestStartStop/group/embed-certs/serial/DeployApp 8.31
349 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.3
351 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.29
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.05
355 TestStartStop/group/no-preload/serial/SecondStart 769.78
359 TestStartStop/group/embed-certs/serial/SecondStart 574.88
361 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 546.59
362 TestStartStop/group/old-k8s-version/serial/Stop 4.32
363 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
374 TestStartStop/group/newest-cni/serial/FirstStart 58.41
375 TestStartStop/group/newest-cni/serial/DeployApp 0
376 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.16
377 TestStartStop/group/newest-cni/serial/Stop 7.39
378 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
379 TestStartStop/group/newest-cni/serial/SecondStart 37.41
380 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
381 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
382 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
383 TestStartStop/group/newest-cni/serial/Pause 2.7
x
+
TestDownloadOnly/v1.20.0/json-events (10.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-794994 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-794994 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.071133193s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-794994
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-794994: exit status 85 (69.073433ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-794994 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC |          |
	|         | -p download-only-794994        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 18:06:10
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 18:06:10.638771   17763 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:06:10.638864   17763 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:06:10.638878   17763 out.go:304] Setting ErrFile to fd 2...
	I0401 18:06:10.638884   17763 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:06:10.639068   17763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	W0401 18:06:10.639184   17763 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18233-10493/.minikube/config/config.json: open /home/jenkins/minikube-integration/18233-10493/.minikube/config/config.json: no such file or directory
	I0401 18:06:10.640581   17763 out.go:298] Setting JSON to true
	I0401 18:06:10.641387   17763 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2923,"bootTime":1711991848,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 18:06:10.641438   17763 start.go:139] virtualization: kvm guest
	I0401 18:06:10.644185   17763 out.go:97] [download-only-794994] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 18:06:10.645822   17763 out.go:169] MINIKUBE_LOCATION=18233
	W0401 18:06:10.644312   17763 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball: no such file or directory
	I0401 18:06:10.644359   17763 notify.go:220] Checking for updates...
	I0401 18:06:10.647490   17763 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 18:06:10.648899   17763 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 18:06:10.650387   17763 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:06:10.651783   17763 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0401 18:06:10.654185   17763 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0401 18:06:10.654407   17763 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 18:06:10.751564   17763 out.go:97] Using the kvm2 driver based on user configuration
	I0401 18:06:10.751600   17763 start.go:297] selected driver: kvm2
	I0401 18:06:10.751607   17763 start.go:901] validating driver "kvm2" against <nil>
	I0401 18:06:10.751920   17763 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 18:06:10.752028   17763 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18233-10493/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0401 18:06:10.766215   17763 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0401 18:06:10.766278   17763 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 18:06:10.766780   17763 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0401 18:06:10.766935   17763 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0401 18:06:10.767003   17763 cni.go:84] Creating CNI manager for ""
	I0401 18:06:10.767019   17763 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0401 18:06:10.767031   17763 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0401 18:06:10.767086   17763 start.go:340] cluster config:
	{Name:download-only-794994 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-794994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:06:10.767302   17763 iso.go:125] acquiring lock: {Name:mka511ffe42ecd86bd7f46e7a17ddcdd3e5e4327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 18:06:10.769164   17763 out.go:97] Downloading VM boot image ...
	I0401 18:06:10.769200   17763 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18233-10493/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso
	I0401 18:06:13.611421   17763 out.go:97] Starting "download-only-794994" primary control-plane node in "download-only-794994" cluster
	I0401 18:06:13.611443   17763 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 18:06:13.632546   17763 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0401 18:06:13.632579   17763 cache.go:56] Caching tarball of preloaded images
	I0401 18:06:13.632733   17763 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 18:06:13.634446   17763 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0401 18:06:13.634464   17763 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0401 18:06:13.660005   17763 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0401 18:06:19.218265   17763 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0401 18:06:19.218363   17763 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18233-10493/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0401 18:06:20.118194   17763 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0401 18:06:20.118530   17763 profile.go:143] Saving config to /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/download-only-794994/config.json ...
	I0401 18:06:20.118558   17763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/download-only-794994/config.json: {Name:mk8814194f95d20ebdf58dfbdd450328b2b269cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 18:06:20.118769   17763 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 18:06:20.118977   17763 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18233-10493/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-794994 host does not exist
	  To start a cluster, run: "minikube start -p download-only-794994"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-794994
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/json-events (4.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-591417 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-591417 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.344376472s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (4.34s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-591417
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-591417: exit status 85 (70.520721ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-794994 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC |                     |
	|         | -p download-only-794994        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC | 01 Apr 24 18:06 UTC |
	| delete  | -p download-only-794994        | download-only-794994 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC | 01 Apr 24 18:06 UTC |
	| start   | -o=json --download-only        | download-only-591417 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC |                     |
	|         | -p download-only-591417        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 18:06:21
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 18:06:21.039786   17933 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:06:21.039920   17933 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:06:21.039933   17933 out.go:304] Setting ErrFile to fd 2...
	I0401 18:06:21.039939   17933 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:06:21.040136   17933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:06:21.040688   17933 out.go:298] Setting JSON to true
	I0401 18:06:21.041510   17933 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2933,"bootTime":1711991848,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 18:06:21.041567   17933 start.go:139] virtualization: kvm guest
	I0401 18:06:21.044046   17933 out.go:97] [download-only-591417] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 18:06:21.045670   17933 out.go:169] MINIKUBE_LOCATION=18233
	I0401 18:06:21.044244   17933 notify.go:220] Checking for updates...
	I0401 18:06:21.048551   17933 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 18:06:21.050196   17933 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 18:06:21.051751   17933 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:06:21.054939   17933 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-591417 host does not exist
	  To start a cluster, run: "minikube start -p download-only-591417"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-591417
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.0/json-events (4.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-040534 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-040534 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.314941351s)
--- PASS: TestDownloadOnly/v1.30.0-rc.0/json-events (4.32s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-040534
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-040534: exit status 85 (70.377698ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-794994 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC |                     |
	|         | -p download-only-794994           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC | 01 Apr 24 18:06 UTC |
	| delete  | -p download-only-794994           | download-only-794994 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC | 01 Apr 24 18:06 UTC |
	| start   | -o=json --download-only           | download-only-591417 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC |                     |
	|         | -p download-only-591417           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3      |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC | 01 Apr 24 18:06 UTC |
	| delete  | -p download-only-591417           | download-only-591417 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC | 01 Apr 24 18:06 UTC |
	| start   | -o=json --download-only           | download-only-040534 | jenkins | v1.33.0-beta.0 | 01 Apr 24 18:06 UTC |                     |
	|         | -p download-only-040534           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.0 |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/01 18:06:25
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 18:06:25.706367   18097 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:06:25.706612   18097 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:06:25.706621   18097 out.go:304] Setting ErrFile to fd 2...
	I0401 18:06:25.706625   18097 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:06:25.706812   18097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:06:25.707321   18097 out.go:298] Setting JSON to true
	I0401 18:06:25.708106   18097 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2938,"bootTime":1711991848,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 18:06:25.708165   18097 start.go:139] virtualization: kvm guest
	I0401 18:06:25.710140   18097 out.go:97] [download-only-040534] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 18:06:25.711770   18097 out.go:169] MINIKUBE_LOCATION=18233
	I0401 18:06:25.710320   18097 notify.go:220] Checking for updates...
	I0401 18:06:25.714323   18097 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 18:06:25.715775   18097 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 18:06:25.717428   18097 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:06:25.718840   18097 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-040534 host does not exist
	  To start a cluster, run: "minikube start -p download-only-040534"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-rc.0/LogsDuration (0.07s)
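Note: the non-zero exit is the expected outcome here. A --download-only profile only caches release artifacts and never creates the KVM host, so "minikube logs" has nothing to read and returns status 85, which the test tolerates. A minimal by-hand reproduction, using the profile name from the run above:

    out/minikube-linux-amd64 start -o=json --download-only -p download-only-040534 --force \
      --kubernetes-version=v1.30.0-rc.0 --container-runtime=crio --driver=kvm2
    out/minikube-linux-amd64 logs -p download-only-040534    # exits 85: the control-plane host was never created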

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-rc.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-040534
--- PASS: TestDownloadOnly/v1.30.0-rc.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.55s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-431770 --alsologtostderr --binary-mirror http://127.0.0.1:37617 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-431770" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-431770
--- PASS: TestBinaryMirror (0.55s)
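For reference, the binary-mirror flow can be approximated by hand; the local HTTP server and the mirror directory layout below are assumptions (the test wires up its own mirror on port 37617):

    python3 -m http.server 37617 --directory /path/to/mirror &    # must serve the kubeadm/kubelet/kubectl binaries minikube would otherwise download
    out/minikube-linux-amd64 start --download-only -p binary-mirror-431770 --alsologtostderr \
      --binary-mirror http://127.0.0.1:37617 --driver=kvm2 --container-runtime=crio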

                                                
                                    
x
+
TestOffline (155.99s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-232805 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-232805 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m34.598918263s)
helpers_test.go:175: Cleaning up "offline-crio-232805" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-232805
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-232805: (1.387032533s)
--- PASS: TestOffline (155.99s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-881427
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-881427: exit status 85 (63.24455ms)

                                                
                                                
-- stdout --
	* Profile "addons-881427" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-881427"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-881427
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-881427: exit status 85 (61.06072ms)

                                                
                                                
-- stdout --
	* Profile "addons-881427" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-881427"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (141.56s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-881427 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-881427 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m21.562667636s)
--- PASS: TestAddons/Setup (141.56s)
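The same addons can also be toggled after the cluster is up rather than all at start time; a short sketch against the same profile:

    out/minikube-linux-amd64 addons enable metrics-server -p addons-881427
    out/minikube-linux-amd64 addons enable ingress -p addons-881427
    out/minikube-linux-amd64 addons list -p addons-881427    # shows enabled/disabled state per addon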

                                                
                                    
x
+
TestAddons/parallel/Registry (14.19s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 35.443888ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-9jpg9" [257b26ce-194a-4b12-b7f6-a5da0f9cf9e6] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008030465s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-hhmlr" [dae5e9cd-9b99-49cd-aa43-a0dd80d05e0f] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008818331s
addons_test.go:340: (dbg) Run:  kubectl --context addons-881427 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-881427 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-881427 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.995054992s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-881427 ip
2024/04/01 18:09:06 [DEBUG] GET http://192.168.39.214:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-881427 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.19s)
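The registry check boils down to two probes, one from inside the cluster and one from the host against the node IP. The in-cluster command is the one the test runs verbatim; the curl probe is a sketch based on the GET logged above:

    kubectl --context addons-881427 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    curl -sI "http://$(out/minikube-linux-amd64 -p addons-881427 ip):5000"    # registry-proxy answers on node port 5000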

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.25s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-x552z" [19685406-f298-40bb-8bc8-1d4a0f011b1e] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004351903s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-881427
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-881427: (6.24586517s)
--- PASS: TestAddons/parallel/InspektorGadget (12.25s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (7.11s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 35.5777ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-75d6c48ddd-s96px" [ae3f8b9b-1cda-4f49-bb5d-a99466fe6135] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00531707s
addons_test.go:415: (dbg) Run:  kubectl --context addons-881427 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-881427 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.11s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (10.82s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 35.322424ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-swl9s" [a6dccfe9-2e74-4db2-b2b9-a8e8e6abcf92] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.007896467s
addons_test.go:473: (dbg) Run:  kubectl --context addons-881427 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-881427 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.021934735s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-881427 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.82s)
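The tiller check is a one-shot helm client run inside kube-system (verbatim from the test); with tiller-deploy healthy it should report both client and server versions:

    kubectl --context addons-881427 run --rm helm-test --restart=Never \
      --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version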

                                                
                                    
x
+
TestAddons/parallel/CSI (56.09s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 36.198701ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-881427 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-881427 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [cc4381a4-b5ce-4478-a676-6d43d9ae14a3] Pending
helpers_test.go:344: "task-pv-pod" [cc4381a4-b5ce-4478-a676-6d43d9ae14a3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [cc4381a4-b5ce-4478-a676-6d43d9ae14a3] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004750949s
addons_test.go:584: (dbg) Run:  kubectl --context addons-881427 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-881427 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-881427 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-881427 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-881427 delete pod task-pv-pod: (2.440166454s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-881427 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-881427 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-881427 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e3cd199e-9361-46bf-9f50-256b33b4f9a6] Pending
helpers_test.go:344: "task-pv-pod-restore" [e3cd199e-9361-46bf-9f50-256b33b4f9a6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e3cd199e-9361-46bf-9f50-256b33b4f9a6] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004257252s
addons_test.go:626: (dbg) Run:  kubectl --context addons-881427 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-881427 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-881427 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-881427 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-881427 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.810694701s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-881427 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (56.09s)
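The sequence above exercises a full provision, snapshot and restore cycle; the manifests it references (paths relative to the minikube test tree) can be replayed in order by hand:

    kubectl --context addons-881427 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-881427 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-881427 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-881427 delete pod task-pv-pod
    kubectl --context addons-881427 delete pvc hpvc
    kubectl --context addons-881427 create -f testdata/csi-hostpath-driver/pvc-restore.yaml     # new claim sourced from the snapshot
    kubectl --context addons-881427 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml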

                                                
                                    
x
+
TestAddons/parallel/Headlamp (14.02s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-881427 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-881427 --alsologtostderr -v=1: (1.016853783s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5b77dbd7c4-ssqx5" [71fe80a0-1f83-4f16-908c-b2bf00b585ee] Pending
helpers_test.go:344: "headlamp-5b77dbd7c4-ssqx5" [71fe80a0-1f83-4f16-908c-b2bf00b585ee] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5b77dbd7c4-ssqx5" [71fe80a0-1f83-4f16-908c-b2bf00b585ee] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.006075953s
--- PASS: TestAddons/parallel/Headlamp (14.02s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (53.46s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-881427 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-881427 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-881427 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [80642733-4707-42c6-8be7-d7f2bb1dc265] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [80642733-4707-42c6-8be7-d7f2bb1dc265] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [80642733-4707-42c6-8be7-d7f2bb1dc265] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004982033s
addons_test.go:891: (dbg) Run:  kubectl --context addons-881427 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-881427 ssh "cat /opt/local-path-provisioner/pvc-de16cdd6-519d-46fd-98d1-b0afa2a23e43_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-881427 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-881427 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-881427 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-881427 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.652531198s)
--- PASS: TestAddons/parallel/LocalPath (53.46s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.6s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-m86dq" [dd4046ef-ce6a-48e2-9d0e-bf3aa98f9156] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004882444s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-881427
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.60s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (5.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-n4pp4" [85d661ab-6d0c-4c5d-80d7-5e87e8e096b0] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005050984s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-881427 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-881427 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
x
+
TestCertOptions (60.52s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-444257 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-444257 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (58.99407262s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-444257 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-444257 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-444257 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-444257" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-444257
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-444257: (1.005088005s)
--- PASS: TestCertOptions (60.52s)
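What the test checks is that the extra SANs and the non-default API port show up in the generated serving certificate and kubeconfig; a hand-run equivalent of those checks (the grep filters are illustrative, not part of the test):

    out/minikube-linux-amd64 -p cert-options-444257 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    kubectl --context cert-options-444257 config view | grep 8555    # server URL should carry --apiserver-port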

                                                
                                    
x
+
TestCertExpiration (318.84s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-385547 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-385547 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (45.573527417s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-385547 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E0401 19:13:59.905771   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
E0401 19:14:16.856962   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-385547 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m32.080185167s)
helpers_test.go:175: Cleaning up "cert-expiration-385547" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-385547
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-385547: (1.180993477s)
--- PASS: TestCertExpiration (318.84s)

                                                
                                    
x
+
TestForceSystemdFlag (79.86s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-128567 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-128567 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m18.83633065s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-128567 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-128567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-128567
--- PASS: TestForceSystemdFlag (79.86s)
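The follow-up ssh above is the assertion point: with --force-systemd, the generated CRI-O drop-in should select the systemd cgroup manager. A sketch of the same check (the grep key is an assumption about the drop-in's contents):

    out/minikube-linux-amd64 -p force-systemd-flag-128567 ssh \
      "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager    # expect: cgroup_manager = "systemd"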

                                                
                                    
x
+
TestForceSystemdEnv (75.34s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-264634 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-264634 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m14.548440752s)
helpers_test.go:175: Cleaning up "force-systemd-env-264634" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-264634
--- PASS: TestForceSystemdEnv (75.34s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.1s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.10s)

                                                
                                    
x
+
TestErrorSpam/setup (46.65s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-354147 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-354147 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-354147 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-354147 --driver=kvm2  --container-runtime=crio: (46.648976258s)
--- PASS: TestErrorSpam/setup (46.65s)

                                                
                                    
x
+
TestErrorSpam/start (0.36s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354147 --log_dir /tmp/nospam-354147 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354147 --log_dir /tmp/nospam-354147 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354147 --log_dir /tmp/nospam-354147 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
x
+
TestErrorSpam/status (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354147 --log_dir /tmp/nospam-354147 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354147 --log_dir /tmp/nospam-354147 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354147 --log_dir /tmp/nospam-354147 status
--- PASS: TestErrorSpam/status (0.76s)

                                                
                                    
x
+
TestErrorSpam/pause (1.64s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354147 --log_dir /tmp/nospam-354147 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354147 --log_dir /tmp/nospam-354147 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354147 --log_dir /tmp/nospam-354147 pause
--- PASS: TestErrorSpam/pause (1.64s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.74s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354147 --log_dir /tmp/nospam-354147 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354147 --log_dir /tmp/nospam-354147 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354147 --log_dir /tmp/nospam-354147 unpause
--- PASS: TestErrorSpam/unpause (1.74s)

                                                
                                    
x
+
TestErrorSpam/stop (5.77s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354147 --log_dir /tmp/nospam-354147 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-354147 --log_dir /tmp/nospam-354147 stop: (2.305950883s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354147 --log_dir /tmp/nospam-354147 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-354147 --log_dir /tmp/nospam-354147 stop: (1.933139438s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-354147 --log_dir /tmp/nospam-354147 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-354147 --log_dir /tmp/nospam-354147 stop: (1.531829874s)
--- PASS: TestErrorSpam/stop (5.77s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18233-10493/.minikube/files/etc/test/nested/copy/17751/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (99.13s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-784295 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-784295 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m39.12581457s)
--- PASS: TestFunctional/serial/StartWithProxy (99.13s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (42.35s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-784295 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-784295 --alsologtostderr -v=8: (42.350798894s)
functional_test.go:659: soft start took 42.351507662s for "functional-784295" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.35s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-784295 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.15s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-784295 cache add registry.k8s.io/pause:3.1: (1.090282841s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-784295 cache add registry.k8s.io/pause:3.3: (1.141616138s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-784295 cache add registry.k8s.io/pause:latest: (1.031538551s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.26s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-784295 /tmp/TestFunctionalserialCacheCmdcacheadd_local2416027497/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 cache add minikube-local-cache-test:functional-784295
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 cache delete minikube-local-cache-test:functional-784295
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-784295
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-784295 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (218.937476ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)
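The reload scenario reads most clearly as the command sequence the test drives: the cached image is removed from the node, "cache reload" pushes it back from the host-side cache, and the second inspecti succeeds:

    out/minikube-linux-amd64 -p functional-784295 cache add registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-784295 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-784295 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # fails: image no longer present
    out/minikube-linux-amd64 -p functional-784295 cache reload
    out/minikube-linux-amd64 -p functional-784295 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # succeeds again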

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 kubectl -- --context functional-784295 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-784295 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (50.81s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-784295 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0401 18:18:52.855109   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 18:18:52.860826   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 18:18:52.871067   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 18:18:52.891371   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 18:18:52.931630   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 18:18:53.011971   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 18:18:53.172410   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 18:18:53.492939   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 18:18:54.133860   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 18:18:55.414371   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 18:18:57.975307   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 18:19:03.096302   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-784295 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (50.810592003s)
functional_test.go:757: restart took 50.810680455s for "functional-784295" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (50.81s)
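The restart above passes a kube-apiserver flag through --extra-config; a quick way to confirm it landed on the running control plane (the label selector is an assumption based on the usual kubeadm static-pod labels):

    out/minikube-linux-amd64 start -p functional-784295 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context functional-784295 -n kube-system get pod -l component=kube-apiserver -o yaml \
      | grep enable-admission-plugins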

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-784295 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
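The health check above amounts to reading the control-plane pods as JSON and asserting each one is in phase Running with a Ready condition of True. A minimal Go sketch of that decoding, not the test's own code, assuming kubectl is on PATH and the functional-784295 context from this run; only the fields the check needs are modeled.

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// podList models only the fields the health check looks at.
	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}
	
	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-784295",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			panic(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase: %s, Ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}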

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.58s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-784295 logs: (1.582390739s)
--- PASS: TestFunctional/serial/LogsCmd (1.58s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 logs --file /tmp/TestFunctionalserialLogsFileCmd1027134599/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-784295 logs --file /tmp/TestFunctionalserialLogsFileCmd1027134599/001/logs.txt: (1.549774583s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.27s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-784295 apply -f testdata/invalidsvc.yaml
E0401 18:19:13.337344   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-784295
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-784295: exit status 115 (279.747375ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.229:31791 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-784295 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.27s)
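What this case relies on is that `minikube service` exits non-zero (status 115 in this run) with an SVC_UNREACHABLE message when the Service has no running pod behind it. A small hedged Go sketch of that exit-code check; the binary path and profile name are taken from this log, and 115 is simply the value observed here rather than a documented contract.

	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	func main() {
		// A Service with no running backing pod should make `minikube service` fail.
		cmd := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-784295")
		out, err := cmd.CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// This run exited with status 115 and an SVC_UNREACHABLE message.
			fmt.Printf("exit code %d\n%s", ee.ExitCode(), out)
			return
		}
		if err != nil {
			panic(err)
		}
		fmt.Println("unexpected success:", string(out))
	}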

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-784295 config get cpus: exit status 14 (64.345675ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-784295 config get cpus: exit status 14 (70.459846ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
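The config round-trip above leans on one detail: `config get` exits with status 14 when the key is unset, and with 0 once `config set cpus 2` has stored a value. A minimal Go sketch of the same unset/get/set/get cycle, using the exit code to tell "missing key" from other failures; the exit value is the one observed in this run.

	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	// run executes the minikube binary from this report and returns its output plus exit code.
	func run(args ...string) (string, int) {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return string(out), ee.ExitCode()
		}
		if err != nil {
			panic(err)
		}
		return string(out), 0
	}
	
	func main() {
		run("-p", "functional-784295", "config", "unset", "cpus")
		if _, code := run("-p", "functional-784295", "config", "get", "cpus"); code == 14 {
			fmt.Println("cpus is unset, as expected") // exit 14 = key not found in this run
		}
		run("-p", "functional-784295", "config", "set", "cpus", "2")
		val, code := run("-p", "functional-784295", "config", "get", "cpus")
		fmt.Printf("cpus=%s (exit %d)\n", val, code)
	}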

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (13.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-784295 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-784295 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 25922: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.58s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-784295 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-784295 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (151.49616ms)

                                                
                                                
-- stdout --
	* [functional-784295] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 18:19:32.234683   25772 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:19:32.235265   25772 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:19:32.235279   25772 out.go:304] Setting ErrFile to fd 2...
	I0401 18:19:32.235287   25772 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:19:32.235725   25772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:19:32.236726   25772 out.go:298] Setting JSON to false
	I0401 18:19:32.237602   25772 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3724,"bootTime":1711991848,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 18:19:32.237670   25772 start.go:139] virtualization: kvm guest
	I0401 18:19:32.239324   25772 out.go:177] * [functional-784295] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 18:19:32.241019   25772 notify.go:220] Checking for updates...
	I0401 18:19:32.241047   25772 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 18:19:32.242900   25772 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 18:19:32.244709   25772 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 18:19:32.246214   25772 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:19:32.248072   25772 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 18:19:32.249133   25772 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 18:19:32.251111   25772 config.go:182] Loaded profile config "functional-784295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:19:32.251502   25772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:19:32.251577   25772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:19:32.266283   25772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44399
	I0401 18:19:32.266661   25772 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:19:32.267197   25772 main.go:141] libmachine: Using API Version  1
	I0401 18:19:32.267222   25772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:19:32.267542   25772 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:19:32.267728   25772 main.go:141] libmachine: (functional-784295) Calling .DriverName
	I0401 18:19:32.267941   25772 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 18:19:32.268210   25772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:19:32.268240   25772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:19:32.283534   25772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36689
	I0401 18:19:32.283983   25772 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:19:32.284495   25772 main.go:141] libmachine: Using API Version  1
	I0401 18:19:32.284524   25772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:19:32.284854   25772 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:19:32.285033   25772 main.go:141] libmachine: (functional-784295) Calling .DriverName
	I0401 18:19:32.317342   25772 out.go:177] * Using the kvm2 driver based on existing profile
	I0401 18:19:32.318576   25772 start.go:297] selected driver: kvm2
	I0401 18:19:32.318587   25772 start.go:901] validating driver "kvm2" against &{Name:functional-784295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:functional-784295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:19:32.318689   25772 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 18:19:32.320698   25772 out.go:177] 
	W0401 18:19:32.322094   25772 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0401 18:19:32.323456   25772 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-784295 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
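The failing half of this test is a pure validation path: with `--dry-run` nothing is created, and asking for 250MB trips the RSRC_INSUFFICIENT_REQ_MEMORY guard (usable minimum 1800MB here) with exit status 23. A hedged Go sketch driving the same dry-run and branching on that code; the exit value is taken from this log, not from any documented interface.

	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-784295",
			"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
		out, err := cmd.CombinedOutput()
		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("dry run accepted the configuration")
		case errors.As(err, &ee) && ee.ExitCode() == 23:
			// 23 was the RSRC_INSUFFICIENT_REQ_MEMORY exit observed in this run.
			fmt.Printf("rejected as expected:\n%s", out)
		default:
			panic(err)
		}
	}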

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-784295 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-784295 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (151.954836ms)

                                                
                                                
-- stdout --
	* [functional-784295] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 18:19:32.077501   25743 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:19:32.077782   25743 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:19:32.077792   25743 out.go:304] Setting ErrFile to fd 2...
	I0401 18:19:32.077797   25743 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:19:32.078071   25743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:19:32.078560   25743 out.go:298] Setting JSON to false
	I0401 18:19:32.079399   25743 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3724,"bootTime":1711991848,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 18:19:32.079461   25743 start.go:139] virtualization: kvm guest
	I0401 18:19:32.082126   25743 out.go:177] * [functional-784295] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	I0401 18:19:32.083785   25743 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 18:19:32.085157   25743 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 18:19:32.083799   25743 notify.go:220] Checking for updates...
	I0401 18:19:32.086805   25743 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 18:19:32.088396   25743 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 18:19:32.089903   25743 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 18:19:32.091586   25743 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 18:19:32.093374   25743 config.go:182] Loaded profile config "functional-784295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:19:32.093764   25743 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:19:32.093836   25743 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:19:32.111588   25743 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33239
	I0401 18:19:32.112127   25743 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:19:32.112663   25743 main.go:141] libmachine: Using API Version  1
	I0401 18:19:32.112684   25743 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:19:32.113028   25743 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:19:32.113231   25743 main.go:141] libmachine: (functional-784295) Calling .DriverName
	I0401 18:19:32.113482   25743 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 18:19:32.113809   25743 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:19:32.113853   25743 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:19:32.128202   25743 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36547
	I0401 18:19:32.128643   25743 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:19:32.129157   25743 main.go:141] libmachine: Using API Version  1
	I0401 18:19:32.129209   25743 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:19:32.129512   25743 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:19:32.129696   25743 main.go:141] libmachine: (functional-784295) Calling .DriverName
	I0401 18:19:32.163438   25743 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0401 18:19:32.164870   25743 start.go:297] selected driver: kvm2
	I0401 18:19:32.164887   25743 start.go:901] validating driver "kvm2" against &{Name:functional-784295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:functional-784295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 18:19:32.165021   25743 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 18:19:32.167315   25743 out.go:177] 
	W0401 18:19:32.168663   25743 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0401 18:19:32.170039   25743 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
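The only difference from the DryRun case is the language of the output: the same RSRC_INSUFFICIENT_REQ_MEMORY rejection is reported in French. A sketch of forcing that locally, assuming minikube picks its translation from the standard locale environment variables (LC_ALL/LANG); the locale value below is an illustrative guess, not something stated in this log.

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-784295",
			"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
		// Assumption: the translation is selected from the process locale.
		cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
		out, _ := cmd.CombinedOutput() // the command is expected to fail; only the text matters here
		fmt.Printf("%s", out)
	}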

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)
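Besides the default table, `status` is exercised here with a Go template (`-f host:{{.Host}},...`) and with `-o json`. A hedged Go sketch that consumes the JSON form; the field names mirror the template keys used above, and the single-object shape is an assumption for a one-node profile like this one.

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// nodeStatus follows the template keys exercised above ({{.Host}}, {{.Kubelet}}, ...).
	type nodeStatus struct {
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}
	
	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-784295",
			"status", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		// Assumption: a single-node profile reports one JSON object.
		var st nodeStatus
		if err := json.Unmarshal(out, &st); err != nil {
			panic(err)
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
			st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
	}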

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (11.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-784295 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-784295 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-rt88g" [752d9ef3-2f28-4b56-88e5-3c095b133a61] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-rt88g" [752d9ef3-2f28-4b56-88e5-3c095b133a61] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.020204799s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.229:30925
functional_test.go:1671: http://192.168.39.229:30925: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-rt88g

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.229:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.229:30925
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.99s)
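The flow above is: create a deployment from the echoserver image, expose it as a NodePort on 8080, ask minikube for the node URL, then fetch it and check the body. A compact Go sketch of the same sequence against the profile from this run; error handling is reduced to panics, and the wait-for-Ready step in the real test is only noted in a comment.

	package main
	
	import (
		"fmt"
		"io"
		"net/http"
		"os/exec"
		"strings"
	)
	
	func must(name string, args ...string) string {
		out, err := exec.Command(name, args...).Output()
		if err != nil {
			panic(fmt.Sprintf("%s %v: %v", name, args, err))
		}
		return strings.TrimSpace(string(out))
	}
	
	func main() {
		must("kubectl", "--context", "functional-784295", "create", "deployment",
			"hello-node-connect", "--image=registry.k8s.io/echoserver:1.8")
		must("kubectl", "--context", "functional-784295", "expose", "deployment",
			"hello-node-connect", "--type=NodePort", "--port=8080")
		// The real test waits for the pod to become Ready before fetching the URL.
		url := must("out/minikube-linux-amd64", "-p", "functional-784295",
			"service", "hello-node-connect", "--url")
		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("GET %s -> %s\n%s", url, resp.Status, body)
	}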

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (34.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f2a14744-7f04-46cf-bc20-64ad79d2a58f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.011637513s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-784295 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-784295 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-784295 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-784295 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-784295 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [071ec4e2-7f51-4740-8797-91f6bfe3995b] Pending
helpers_test.go:344: "sp-pod" [071ec4e2-7f51-4740-8797-91f6bfe3995b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [071ec4e2-7f51-4740-8797-91f6bfe3995b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004663673s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-784295 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-784295 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-784295 delete -f testdata/storage-provisioner/pod.yaml: (2.049036378s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-784295 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f96a6972-989d-4cfc-af86-0b7d99b7fdee] Pending
helpers_test.go:344: "sp-pod" [f96a6972-989d-4cfc-af86-0b7d99b7fdee] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f96a6972-989d-4cfc-af86-0b7d99b7fdee] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.008405933s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-784295 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (34.88s)
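The interesting part is the last third: a file is written into the PVC-backed mount, the pod is deleted and recreated from the same manifest, and the file is expected to survive because the claim does. A hedged Go sketch of that persistence check, reusing the pod name (sp-pod), mount path (/tmp/mount) and manifests named in the log; the readiness wait between delete and exec is only hinted at in a comment.

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func kubectl(args ...string) string {
		out, err := exec.Command("kubectl",
			append([]string{"--context", "functional-784295"}, args...)...).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("kubectl %v: %v\n%s", args, err, out))
		}
		return string(out)
	}
	
	func main() {
		// Write a marker file into the PVC-backed volume.
		kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	
		// Recreate the pod from the same manifest; the claim (and its data) should survive.
		kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// (The real test waits here until the new sp-pod is Running before exec'ing into it.)
	
		fmt.Print(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")) // expect "foo"
	}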

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh -n functional-784295 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 cp functional-784295:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1697901622/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh -n functional-784295 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh -n functional-784295 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.52s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (23.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-784295 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-t2qjr" [fe0b9bc8-3ccf-4fd5-946b-42dc9ca92108] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-t2qjr" [fe0b9bc8-3ccf-4fd5-946b-42dc9ca92108] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.004264219s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-784295 exec mysql-859648c796-t2qjr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-784295 exec mysql-859648c796-t2qjr -- mysql -ppassword -e "show databases;": exit status 1 (137.801998ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-784295 exec mysql-859648c796-t2qjr -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.59s)
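Note that the first `mysql -e "show databases;"` fails with ERROR 2002 even though the pod is already Running: mysqld inside the container needs longer than the pod readiness signal, so the test simply retries. A short Go sketch of that retry loop; the pod name is the one from this run and the retry budget is an arbitrary choice.

	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	func main() {
		args := []string{"--context", "functional-784295", "exec", "mysql-859648c796-t2qjr", "--",
			"mysql", "-ppassword", "-e", "show databases;"}
		// mysqld can still be starting when the pod first reports Running, so retry briefly.
		for attempt := 1; attempt <= 10; attempt++ {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return
			}
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(3 * time.Second)
		}
		panic("mysql never became reachable")
	}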

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/17751/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh "sudo cat /etc/test/nested/copy/17751/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/17751.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh "sudo cat /etc/ssl/certs/17751.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/17751.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh "sudo cat /usr/share/ca-certificates/17751.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/177512.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh "sudo cat /etc/ssl/certs/177512.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/177512.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh "sudo cat /usr/share/ca-certificates/177512.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.70s)
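The sync is verified purely by cat'ing the expected paths inside the VM: the uploaded certs show up both under their own names (17751.pem, 177512.pem) and under hash-style names in /etc/ssl/certs. A small Go sketch that loops over the same paths via `minikube ssh`; the path list is copied from the log rather than derived.

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		paths := []string{
			"/etc/ssl/certs/17751.pem",
			"/usr/share/ca-certificates/17751.pem",
			"/etc/ssl/certs/51391683.0",
			"/etc/ssl/certs/177512.pem",
			"/usr/share/ca-certificates/177512.pem",
			"/etc/ssl/certs/3ec20f2e.0",
		}
		for _, p := range paths {
			cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-784295",
				"ssh", "sudo cat "+p)
			if err := cmd.Run(); err != nil {
				fmt.Printf("%s: MISSING (%v)\n", p, err)
				continue
			}
			fmt.Printf("%s: present\n", p)
		}
	}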

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-784295 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-784295 ssh "sudo systemctl is-active docker": exit status 1 (244.421899ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-784295 ssh "sudo systemctl is-active containerd": exit status 1 (253.398189ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.50s)
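Both probes rely on systemd behaviour: `systemctl is-active` prints "inactive" and exits non-zero (status 3 in this run) when a unit is not active, which the ssh wrapper then surfaces as the exit status 1 seen above. A hedged Go sketch that treats that combination as the "runtime disabled" signal for docker and containerd.

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		for _, unit := range []string{"docker", "containerd"} {
			out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-784295",
				"ssh", "sudo systemctl is-active "+unit).Output()
			state := strings.TrimSpace(string(out))
			// For a non-active runtime we expect "inactive" plus a non-zero exit.
			if err != nil && state == "inactive" {
				fmt.Printf("%s: correctly disabled\n", unit)
			} else {
				fmt.Printf("%s: unexpected state %q (err=%v)\n", unit, state, err)
			}
		}
	}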

                                                
                                    
x
+
TestFunctional/parallel/License (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 image ls --format short --alsologtostderr
2024/04/01 18:19:45 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-784295 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-784295
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-784295
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-784295 image ls --format short --alsologtostderr:
I0401 18:19:45.545336   26922 out.go:291] Setting OutFile to fd 1 ...
I0401 18:19:45.545452   26922 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0401 18:19:45.545462   26922 out.go:304] Setting ErrFile to fd 2...
I0401 18:19:45.545466   26922 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0401 18:19:45.545700   26922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
I0401 18:19:45.546241   26922 config.go:182] Loaded profile config "functional-784295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0401 18:19:45.546335   26922 config.go:182] Loaded profile config "functional-784295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0401 18:19:45.546704   26922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0401 18:19:45.546740   26922 main.go:141] libmachine: Launching plugin server for driver kvm2
I0401 18:19:45.561586   26922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46095
I0401 18:19:45.561987   26922 main.go:141] libmachine: () Calling .GetVersion
I0401 18:19:45.562632   26922 main.go:141] libmachine: Using API Version  1
I0401 18:19:45.562654   26922 main.go:141] libmachine: () Calling .SetConfigRaw
I0401 18:19:45.563008   26922 main.go:141] libmachine: () Calling .GetMachineName
I0401 18:19:45.563233   26922 main.go:141] libmachine: (functional-784295) Calling .GetState
I0401 18:19:45.564970   26922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0401 18:19:45.565012   26922 main.go:141] libmachine: Launching plugin server for driver kvm2
I0401 18:19:45.579054   26922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44827
I0401 18:19:45.579521   26922 main.go:141] libmachine: () Calling .GetVersion
I0401 18:19:45.580016   26922 main.go:141] libmachine: Using API Version  1
I0401 18:19:45.580045   26922 main.go:141] libmachine: () Calling .SetConfigRaw
I0401 18:19:45.580331   26922 main.go:141] libmachine: () Calling .GetMachineName
I0401 18:19:45.580494   26922 main.go:141] libmachine: (functional-784295) Calling .DriverName
I0401 18:19:45.580674   26922 ssh_runner.go:195] Run: systemctl --version
I0401 18:19:45.580696   26922 main.go:141] libmachine: (functional-784295) Calling .GetSSHHostname
I0401 18:19:45.583272   26922 main.go:141] libmachine: (functional-784295) DBG | domain functional-784295 has defined MAC address 52:54:00:94:b3:40 in network mk-functional-784295
I0401 18:19:45.583645   26922 main.go:141] libmachine: (functional-784295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:b3:40", ip: ""} in network mk-functional-784295: {Iface:virbr1 ExpiryTime:2024-04-01 19:16:05 +0000 UTC Type:0 Mac:52:54:00:94:b3:40 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:functional-784295 Clientid:01:52:54:00:94:b3:40}
I0401 18:19:45.583673   26922 main.go:141] libmachine: (functional-784295) DBG | domain functional-784295 has defined IP address 192.168.39.229 and MAC address 52:54:00:94:b3:40 in network mk-functional-784295
I0401 18:19:45.583784   26922 main.go:141] libmachine: (functional-784295) Calling .GetSSHPort
I0401 18:19:45.583953   26922 main.go:141] libmachine: (functional-784295) Calling .GetSSHKeyPath
I0401 18:19:45.584111   26922 main.go:141] libmachine: (functional-784295) Calling .GetSSHUsername
I0401 18:19:45.584216   26922 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/functional-784295/id_rsa Username:docker}
I0401 18:19:45.673134   26922 ssh_runner.go:195] Run: sudo crictl images --output json
I0401 18:19:45.721717   26922 main.go:141] libmachine: Making call to close driver server
I0401 18:19:45.721729   26922 main.go:141] libmachine: (functional-784295) Calling .Close
I0401 18:19:45.721986   26922 main.go:141] libmachine: Successfully made call to close driver server
I0401 18:19:45.722002   26922 main.go:141] libmachine: Making call to close connection to plugin binary
I0401 18:19:45.722015   26922 main.go:141] libmachine: Making call to close driver server
I0401 18:19:45.722021   26922 main.go:141] libmachine: (functional-784295) Calling .Close
I0401 18:19:45.722225   26922 main.go:141] libmachine: Successfully made call to close driver server
I0401 18:19:45.722244   26922 main.go:141] libmachine: Making call to close connection to plugin binary
I0401 18:19:45.722246   26922 main.go:141] libmachine: (functional-784295) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
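As the stderr trace shows, `image ls` shells into the VM and runs `sudo crictl images --output json`, then flattens the repo tags for the short format. A hedged Go sketch of decoding that JSON; the field names (images, repoTags) are assumptions about crictl's JSON output, not taken verbatim from this log.

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// imageList is the assumed shape of `crictl images --output json`; only the needed fields are modeled.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	
	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-784295",
			"ssh", "sudo crictl images --output json").Output()
		if err != nil {
			panic(err)
		}
		var imgs imageList
		if err := json.Unmarshal(out, &imgs); err != nil {
			panic(err)
		}
		for _, img := range imgs.Images {
			for _, tag := range img.RepoTags {
				fmt.Println(tag) // the "short" format is essentially this tag list
			}
		}
	}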

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-784295 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| docker.io/library/nginx                 | latest             | 92b11f67642b6 | 191MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-scheduler          | v1.29.3            | 8c390d98f50c0 | 60.7MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/google-containers/addon-resizer  | functional-784295  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-controller-manager | v1.29.3            | 6052a25da3f97 | 123MB  |
| registry.k8s.io/kube-proxy              | v1.29.3            | a1d263b5dc5b0 | 83.6MB |
| localhost/minikube-local-cache-test     | functional-784295  | 6fa0595e4b171 | 3.33kB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.29.3            | 39f995c9f1996 | 129MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-784295 image ls --format table --alsologtostderr:
I0401 18:19:46.742301   27047 out.go:291] Setting OutFile to fd 1 ...
I0401 18:19:46.742723   27047 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0401 18:19:46.742740   27047 out.go:304] Setting ErrFile to fd 2...
I0401 18:19:46.742748   27047 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0401 18:19:46.743188   27047 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
I0401 18:19:46.744379   27047 config.go:182] Loaded profile config "functional-784295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0401 18:19:46.744543   27047 config.go:182] Loaded profile config "functional-784295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0401 18:19:46.744914   27047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0401 18:19:46.744952   27047 main.go:141] libmachine: Launching plugin server for driver kvm2
I0401 18:19:46.759368   27047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36901
I0401 18:19:46.759777   27047 main.go:141] libmachine: () Calling .GetVersion
I0401 18:19:46.760404   27047 main.go:141] libmachine: Using API Version  1
I0401 18:19:46.760434   27047 main.go:141] libmachine: () Calling .SetConfigRaw
I0401 18:19:46.760796   27047 main.go:141] libmachine: () Calling .GetMachineName
I0401 18:19:46.761039   27047 main.go:141] libmachine: (functional-784295) Calling .GetState
I0401 18:19:46.762937   27047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0401 18:19:46.762986   27047 main.go:141] libmachine: Launching plugin server for driver kvm2
I0401 18:19:46.777514   27047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45829
I0401 18:19:46.777940   27047 main.go:141] libmachine: () Calling .GetVersion
I0401 18:19:46.778490   27047 main.go:141] libmachine: Using API Version  1
I0401 18:19:46.778527   27047 main.go:141] libmachine: () Calling .SetConfigRaw
I0401 18:19:46.778856   27047 main.go:141] libmachine: () Calling .GetMachineName
I0401 18:19:46.779040   27047 main.go:141] libmachine: (functional-784295) Calling .DriverName
I0401 18:19:46.779252   27047 ssh_runner.go:195] Run: systemctl --version
I0401 18:19:46.779275   27047 main.go:141] libmachine: (functional-784295) Calling .GetSSHHostname
I0401 18:19:46.782063   27047 main.go:141] libmachine: (functional-784295) DBG | domain functional-784295 has defined MAC address 52:54:00:94:b3:40 in network mk-functional-784295
I0401 18:19:46.782433   27047 main.go:141] libmachine: (functional-784295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:b3:40", ip: ""} in network mk-functional-784295: {Iface:virbr1 ExpiryTime:2024-04-01 19:16:05 +0000 UTC Type:0 Mac:52:54:00:94:b3:40 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:functional-784295 Clientid:01:52:54:00:94:b3:40}
I0401 18:19:46.782466   27047 main.go:141] libmachine: (functional-784295) DBG | domain functional-784295 has defined IP address 192.168.39.229 and MAC address 52:54:00:94:b3:40 in network mk-functional-784295
I0401 18:19:46.782585   27047 main.go:141] libmachine: (functional-784295) Calling .GetSSHPort
I0401 18:19:46.782754   27047 main.go:141] libmachine: (functional-784295) Calling .GetSSHKeyPath
I0401 18:19:46.782894   27047 main.go:141] libmachine: (functional-784295) Calling .GetSSHUsername
I0401 18:19:46.783034   27047 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/functional-784295/id_rsa Username:docker}
I0401 18:19:46.938280   27047 ssh_runner.go:195] Run: sudo crictl images --output json
I0401 18:19:47.149234   27047 main.go:141] libmachine: Making call to close driver server
I0401 18:19:47.149266   27047 main.go:141] libmachine: (functional-784295) Calling .Close
I0401 18:19:47.149545   27047 main.go:141] libmachine: Successfully made call to close driver server
I0401 18:19:47.149562   27047 main.go:141] libmachine: Making call to close connection to plugin binary
I0401 18:19:47.149572   27047 main.go:141] libmachine: Making call to close driver server
I0401 18:19:47.149580   27047 main.go:141] libmachine: (functional-784295) Calling .Close
I0401 18:19:47.151348   27047 main.go:141] libmachine: Successfully made call to close driver server
I0401 18:19:47.151365   27047 main.go:141] libmachine: Making call to close connection to plugin binary
I0401 18:19:47.151348   27047 main.go:141] libmachine: (functional-784295) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.47s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-784295 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","repoDigests":["registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d","registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af
3d9ff3f0863"],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"83634073"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e","repoDigests":["docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7","docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e"],"repoTags":["docker.io/library/nginx:latest"],"size":"190865876"},{"id"
:"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","repoDigests":["registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322","registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"128508878"},{"id":"6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606","registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"123142962"},{
"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-784295"],"size":"34114467"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","repoDigests":["registry.k8s.io/kube-sche
duler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a","registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"60724018"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908e
b8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6fa0595e4b17138a6a6cafd54c9b77c147969becafcc18a4084394e1873cde18","repoDigests":["localhost/minikube-local-cache-test
@sha256:403b899b55a67e68712ff5e5d2a09bfc3100e7ce261e2bad77cff4ca03ecb278"],"repoTags":["localhost/minikube-local-cache-test:functional-784295"],"size":"3330"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-784295 image ls --format json --alsologtostderr:
I0401 18:19:46.346807   27012 out.go:291] Setting OutFile to fd 1 ...
I0401 18:19:46.347046   27012 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0401 18:19:46.347057   27012 out.go:304] Setting ErrFile to fd 2...
I0401 18:19:46.347061   27012 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0401 18:19:46.347248   27012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
I0401 18:19:46.347813   27012 config.go:182] Loaded profile config "functional-784295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0401 18:19:46.347925   27012 config.go:182] Loaded profile config "functional-784295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0401 18:19:46.348281   27012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0401 18:19:46.348336   27012 main.go:141] libmachine: Launching plugin server for driver kvm2
I0401 18:19:46.366654   27012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35627
I0401 18:19:46.367144   27012 main.go:141] libmachine: () Calling .GetVersion
I0401 18:19:46.367778   27012 main.go:141] libmachine: Using API Version  1
I0401 18:19:46.367800   27012 main.go:141] libmachine: () Calling .SetConfigRaw
I0401 18:19:46.368183   27012 main.go:141] libmachine: () Calling .GetMachineName
I0401 18:19:46.368410   27012 main.go:141] libmachine: (functional-784295) Calling .GetState
I0401 18:19:46.370414   27012 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0401 18:19:46.370444   27012 main.go:141] libmachine: Launching plugin server for driver kvm2
I0401 18:19:46.386614   27012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40267
I0401 18:19:46.387032   27012 main.go:141] libmachine: () Calling .GetVersion
I0401 18:19:46.387483   27012 main.go:141] libmachine: Using API Version  1
I0401 18:19:46.387502   27012 main.go:141] libmachine: () Calling .SetConfigRaw
I0401 18:19:46.387840   27012 main.go:141] libmachine: () Calling .GetMachineName
I0401 18:19:46.388033   27012 main.go:141] libmachine: (functional-784295) Calling .DriverName
I0401 18:19:46.388252   27012 ssh_runner.go:195] Run: systemctl --version
I0401 18:19:46.388274   27012 main.go:141] libmachine: (functional-784295) Calling .GetSSHHostname
I0401 18:19:46.391117   27012 main.go:141] libmachine: (functional-784295) DBG | domain functional-784295 has defined MAC address 52:54:00:94:b3:40 in network mk-functional-784295
I0401 18:19:46.391452   27012 main.go:141] libmachine: (functional-784295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:b3:40", ip: ""} in network mk-functional-784295: {Iface:virbr1 ExpiryTime:2024-04-01 19:16:05 +0000 UTC Type:0 Mac:52:54:00:94:b3:40 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:functional-784295 Clientid:01:52:54:00:94:b3:40}
I0401 18:19:46.391474   27012 main.go:141] libmachine: (functional-784295) DBG | domain functional-784295 has defined IP address 192.168.39.229 and MAC address 52:54:00:94:b3:40 in network mk-functional-784295
I0401 18:19:46.391591   27012 main.go:141] libmachine: (functional-784295) Calling .GetSSHPort
I0401 18:19:46.391736   27012 main.go:141] libmachine: (functional-784295) Calling .GetSSHKeyPath
I0401 18:19:46.391876   27012 main.go:141] libmachine: (functional-784295) Calling .GetSSHUsername
I0401 18:19:46.392051   27012 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/functional-784295/id_rsa Username:docker}
I0401 18:19:46.499888   27012 ssh_runner.go:195] Run: sudo crictl images --output json
I0401 18:19:46.678592   27012 main.go:141] libmachine: Making call to close driver server
I0401 18:19:46.678613   27012 main.go:141] libmachine: (functional-784295) Calling .Close
I0401 18:19:46.678905   27012 main.go:141] libmachine: (functional-784295) DBG | Closing plugin on server side
I0401 18:19:46.678932   27012 main.go:141] libmachine: Successfully made call to close driver server
I0401 18:19:46.678944   27012 main.go:141] libmachine: Making call to close connection to plugin binary
I0401 18:19:46.678957   27012 main.go:141] libmachine: Making call to close driver server
I0401 18:19:46.678964   27012 main.go:141] libmachine: (functional-784295) Calling .Close
I0401 18:19:46.679231   27012 main.go:141] libmachine: Successfully made call to close driver server
I0401 18:19:46.679246   27012 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.40s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-784295 image ls --format yaml --alsologtostderr:
- id: a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d
- registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "83634073"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322
- registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "128508878"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-784295
size: "34114467"
- id: 6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606
- registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "123142962"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a
- registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "60724018"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e
repoDigests:
- docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
repoTags:
- docker.io/library/nginx:latest
size: "190865876"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6fa0595e4b17138a6a6cafd54c9b77c147969becafcc18a4084394e1873cde18
repoDigests:
- localhost/minikube-local-cache-test@sha256:403b899b55a67e68712ff5e5d2a09bfc3100e7ce261e2bad77cff4ca03ecb278
repoTags:
- localhost/minikube-local-cache-test:functional-784295
size: "3330"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-784295 image ls --format yaml --alsologtostderr:
I0401 18:19:45.779408   26947 out.go:291] Setting OutFile to fd 1 ...
I0401 18:19:45.779565   26947 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0401 18:19:45.779575   26947 out.go:304] Setting ErrFile to fd 2...
I0401 18:19:45.779579   26947 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0401 18:19:45.779799   26947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
I0401 18:19:45.781096   26947 config.go:182] Loaded profile config "functional-784295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0401 18:19:45.781606   26947 config.go:182] Loaded profile config "functional-784295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0401 18:19:45.782294   26947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0401 18:19:45.782344   26947 main.go:141] libmachine: Launching plugin server for driver kvm2
I0401 18:19:45.797563   26947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36487
I0401 18:19:45.798032   26947 main.go:141] libmachine: () Calling .GetVersion
I0401 18:19:45.798658   26947 main.go:141] libmachine: Using API Version  1
I0401 18:19:45.798685   26947 main.go:141] libmachine: () Calling .SetConfigRaw
I0401 18:19:45.799057   26947 main.go:141] libmachine: () Calling .GetMachineName
I0401 18:19:45.799291   26947 main.go:141] libmachine: (functional-784295) Calling .GetState
I0401 18:19:45.801268   26947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0401 18:19:45.801303   26947 main.go:141] libmachine: Launching plugin server for driver kvm2
I0401 18:19:45.817150   26947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36425
I0401 18:19:45.817550   26947 main.go:141] libmachine: () Calling .GetVersion
I0401 18:19:45.818090   26947 main.go:141] libmachine: Using API Version  1
I0401 18:19:45.818119   26947 main.go:141] libmachine: () Calling .SetConfigRaw
I0401 18:19:45.818496   26947 main.go:141] libmachine: () Calling .GetMachineName
I0401 18:19:45.818695   26947 main.go:141] libmachine: (functional-784295) Calling .DriverName
I0401 18:19:45.818978   26947 ssh_runner.go:195] Run: systemctl --version
I0401 18:19:45.819002   26947 main.go:141] libmachine: (functional-784295) Calling .GetSSHHostname
I0401 18:19:45.822126   26947 main.go:141] libmachine: (functional-784295) DBG | domain functional-784295 has defined MAC address 52:54:00:94:b3:40 in network mk-functional-784295
I0401 18:19:45.822539   26947 main.go:141] libmachine: (functional-784295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:b3:40", ip: ""} in network mk-functional-784295: {Iface:virbr1 ExpiryTime:2024-04-01 19:16:05 +0000 UTC Type:0 Mac:52:54:00:94:b3:40 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:functional-784295 Clientid:01:52:54:00:94:b3:40}
I0401 18:19:45.822563   26947 main.go:141] libmachine: (functional-784295) DBG | domain functional-784295 has defined IP address 192.168.39.229 and MAC address 52:54:00:94:b3:40 in network mk-functional-784295
I0401 18:19:45.822716   26947 main.go:141] libmachine: (functional-784295) Calling .GetSSHPort
I0401 18:19:45.822904   26947 main.go:141] libmachine: (functional-784295) Calling .GetSSHKeyPath
I0401 18:19:45.823097   26947 main.go:141] libmachine: (functional-784295) Calling .GetSSHUsername
I0401 18:19:45.823284   26947 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/functional-784295/id_rsa Username:docker}
I0401 18:19:46.011832   26947 ssh_runner.go:195] Run: sudo crictl images --output json
I0401 18:19:46.278586   26947 main.go:141] libmachine: Making call to close driver server
I0401 18:19:46.278602   26947 main.go:141] libmachine: (functional-784295) Calling .Close
I0401 18:19:46.278924   26947 main.go:141] libmachine: Successfully made call to close driver server
I0401 18:19:46.278961   26947 main.go:141] libmachine: Making call to close connection to plugin binary
I0401 18:19:46.278973   26947 main.go:141] libmachine: Making call to close driver server
I0401 18:19:46.278984   26947 main.go:141] libmachine: (functional-784295) Calling .Close
I0401 18:19:46.279255   26947 main.go:141] libmachine: Successfully made call to close driver server
I0401 18:19:46.279274   26947 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.56s)

TestFunctional/parallel/ImageCommands/Setup (1.01s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-784295
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.01s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 image load --daemon gcr.io/google-containers/addon-resizer:functional-784295 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-784295 image load --daemon gcr.io/google-containers/addon-resizer:functional-784295 --alsologtostderr: (5.153547901s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.38s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-784295 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-784295 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-b4hc7" [52947371-6538-4f90-a035-66d35e36682c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-b4hc7" [52947371-6538-4f90-a035-66d35e36682c] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.144374768s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.41s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 image load --daemon gcr.io/google-containers/addon-resizer:functional-784295 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-784295 image load --daemon gcr.io/google-containers/addon-resizer:functional-784295 --alsologtostderr: (2.428493184s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.67s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-784295
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 image load --daemon gcr.io/google-containers/addon-resizer:functional-784295 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-784295 image load --daemon gcr.io/google-containers/addon-resizer:functional-784295 --alsologtostderr: (6.925675987s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.18s)

TestFunctional/parallel/ServiceCmd/List (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.38s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 service list -o json
functional_test.go:1490: Took "314.87185ms" to run "out/minikube-linux-amd64 -p functional-784295 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "271.416847ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "64.959627ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.229:30428
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "301.041319ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "58.410247ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

TestFunctional/parallel/ServiceCmd/Format (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

TestFunctional/parallel/MountCmd/any-port (8.06s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-784295 /tmp/TestFunctionalparallelMountCmdany-port2759769923/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1711995570404589554" to /tmp/TestFunctionalparallelMountCmdany-port2759769923/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1711995570404589554" to /tmp/TestFunctionalparallelMountCmdany-port2759769923/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1711995570404589554" to /tmp/TestFunctionalparallelMountCmdany-port2759769923/001/test-1711995570404589554
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-784295 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (304.55598ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr  1 18:19 created-by-test
-rw-r--r-- 1 docker docker 24 Apr  1 18:19 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr  1 18:19 test-1711995570404589554
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh cat /mount-9p/test-1711995570404589554
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-784295 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7fbfd4b5-8131-4a77-b634-52fa222bd479] Pending
helpers_test.go:344: "busybox-mount" [7fbfd4b5-8131-4a77-b634-52fa222bd479] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [7fbfd4b5-8131-4a77-b634-52fa222bd479] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [7fbfd4b5-8131-4a77-b634-52fa222bd479] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003914967s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-784295 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-784295 /tmp/TestFunctionalparallelMountCmdany-port2759769923/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.06s)

TestFunctional/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.229:30428
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 image save gcr.io/google-containers/addon-resizer:functional-784295 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
E0401 18:19:33.817533   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-784295 image save gcr.io/google-containers/addon-resizer:functional-784295 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.706090942s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.71s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 image rm gcr.io/google-containers/addon-resizer:functional-784295 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-784295 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.089370218s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.42s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (4.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-784295
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 image save --daemon gcr.io/google-containers/addon-resizer:functional-784295 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-784295 image save --daemon gcr.io/google-containers/addon-resizer:functional-784295 --alsologtostderr: (4.928201969s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-784295
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (4.96s)

TestFunctional/parallel/MountCmd/specific-port (2.17s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-784295 /tmp/TestFunctionalparallelMountCmdspecific-port3844241359/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-784295 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (266.503769ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-784295 /tmp/TestFunctionalparallelMountCmdspecific-port3844241359/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-784295 /tmp/TestFunctionalparallelMountCmdspecific-port3844241359/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.17s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.46s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-784295 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3836119471/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-784295 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3836119471/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-784295 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3836119471/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-784295 ssh "findmnt -T" /mount1: exit status 1 (335.665023ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-784295 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-784295 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3836119471/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-784295 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3836119471/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-784295 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3836119471/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.46s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.72s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.72s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-784295 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-784295
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-784295
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-784295
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (204.73s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-293078 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0401 18:20:14.778366   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 18:21:36.698849   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-293078 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m24.040025395s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (204.73s)

TestMultiControlPlane/serial/DeployApp (5.21s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-293078 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-293078 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-293078 -- rollout status deployment/busybox: (2.74061497s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-293078 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-293078 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-293078 -- exec busybox-7fdf7869d9-7tn8z -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-293078 -- exec busybox-7fdf7869d9-ntbk4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-293078 -- exec busybox-7fdf7869d9-z89qx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-293078 -- exec busybox-7fdf7869d9-7tn8z -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-293078 -- exec busybox-7fdf7869d9-ntbk4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-293078 -- exec busybox-7fdf7869d9-z89qx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-293078 -- exec busybox-7fdf7869d9-7tn8z -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-293078 -- exec busybox-7fdf7869d9-ntbk4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-293078 -- exec busybox-7fdf7869d9-z89qx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.21s)

TestMultiControlPlane/serial/PingHostFromPods (1.39s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-293078 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-293078 -- exec busybox-7fdf7869d9-7tn8z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-293078 -- exec busybox-7fdf7869d9-7tn8z -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-293078 -- exec busybox-7fdf7869d9-ntbk4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-293078 -- exec busybox-7fdf7869d9-ntbk4 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-293078 -- exec busybox-7fdf7869d9-z89qx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-293078 -- exec busybox-7fdf7869d9-z89qx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.39s)
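Note on the pipeline used above: the test resolves host.minikube.internal from inside each busybox pod and then pings whatever address comes back; the awk 'NR==5' | cut -d' ' -f3 stage simply picks the line and field where busybox's nslookup prints the resolved address, so the extraction is layout-dependent. A minimal sketch of the same check, run from any of the pods (illustrative only):

    HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
    ping -c 1 "$HOST_IP"

On this KVM profile the address that comes back is the libvirt gateway, 192.168.39.1.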

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (47.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-293078 -v=7 --alsologtostderr
E0401 18:23:52.855509   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 18:24:16.857176   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
E0401 18:24:16.862492   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
E0401 18:24:16.872848   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
E0401 18:24:16.893185   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
E0401 18:24:16.933494   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
E0401 18:24:17.014172   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
E0401 18:24:17.174770   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
E0401 18:24:17.495915   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
E0401 18:24:18.136560   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
E0401 18:24:19.417379   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
E0401 18:24:20.540017   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 18:24:21.978040   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-293078 -v=7 --alsologtostderr: (46.309320777s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (47.19s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-293078 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0401 18:24:27.098334   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.57s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 cp testdata/cp-test.txt ha-293078:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 cp ha-293078:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3967030531/001/cp-test_ha-293078.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 cp ha-293078:/home/docker/cp-test.txt ha-293078-m02:/home/docker/cp-test_ha-293078_ha-293078-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078-m02 "sudo cat /home/docker/cp-test_ha-293078_ha-293078-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 cp ha-293078:/home/docker/cp-test.txt ha-293078-m03:/home/docker/cp-test_ha-293078_ha-293078-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078-m03 "sudo cat /home/docker/cp-test_ha-293078_ha-293078-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 cp ha-293078:/home/docker/cp-test.txt ha-293078-m04:/home/docker/cp-test_ha-293078_ha-293078-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078-m04 "sudo cat /home/docker/cp-test_ha-293078_ha-293078-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 cp testdata/cp-test.txt ha-293078-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 cp ha-293078-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3967030531/001/cp-test_ha-293078-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 cp ha-293078-m02:/home/docker/cp-test.txt ha-293078:/home/docker/cp-test_ha-293078-m02_ha-293078.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078 "sudo cat /home/docker/cp-test_ha-293078-m02_ha-293078.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 cp ha-293078-m02:/home/docker/cp-test.txt ha-293078-m03:/home/docker/cp-test_ha-293078-m02_ha-293078-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078-m03 "sudo cat /home/docker/cp-test_ha-293078-m02_ha-293078-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 cp ha-293078-m02:/home/docker/cp-test.txt ha-293078-m04:/home/docker/cp-test_ha-293078-m02_ha-293078-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078-m04 "sudo cat /home/docker/cp-test_ha-293078-m02_ha-293078-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 cp testdata/cp-test.txt ha-293078-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 cp ha-293078-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3967030531/001/cp-test_ha-293078-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 cp ha-293078-m03:/home/docker/cp-test.txt ha-293078:/home/docker/cp-test_ha-293078-m03_ha-293078.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078 "sudo cat /home/docker/cp-test_ha-293078-m03_ha-293078.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 cp ha-293078-m03:/home/docker/cp-test.txt ha-293078-m02:/home/docker/cp-test_ha-293078-m03_ha-293078-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078-m02 "sudo cat /home/docker/cp-test_ha-293078-m03_ha-293078-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 cp ha-293078-m03:/home/docker/cp-test.txt ha-293078-m04:/home/docker/cp-test_ha-293078-m03_ha-293078-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078-m03 "sudo cat /home/docker/cp-test.txt"
E0401 18:24:37.338727   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078-m04 "sudo cat /home/docker/cp-test_ha-293078-m03_ha-293078-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 cp testdata/cp-test.txt ha-293078-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 cp ha-293078-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3967030531/001/cp-test_ha-293078-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 cp ha-293078-m04:/home/docker/cp-test.txt ha-293078:/home/docker/cp-test_ha-293078-m04_ha-293078.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078 "sudo cat /home/docker/cp-test_ha-293078-m04_ha-293078.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 cp ha-293078-m04:/home/docker/cp-test.txt ha-293078-m02:/home/docker/cp-test_ha-293078-m04_ha-293078-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078-m02 "sudo cat /home/docker/cp-test_ha-293078-m04_ha-293078-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 cp ha-293078-m04:/home/docker/cp-test.txt ha-293078-m03:/home/docker/cp-test_ha-293078-m04_ha-293078-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 ssh -n ha-293078-m03 "sudo cat /home/docker/cp-test_ha-293078-m04_ha-293078-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.62s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.485844115s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-293078 node delete m03 -v=7 --alsologtostderr: (16.852598419s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.63s)
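The go-template in the final step walks each node's status.conditions and prints the status of its Ready condition, one line per node, so after deleting m03 the assertion amounts to every remaining node printing True. A roughly equivalent sketch using a jsonpath expression instead of the go-template (the output shown is illustrative for the three nodes left in the cluster):

    kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
    True
    True
    True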

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.40s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (374.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-293078 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0401 18:38:52.855018   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 18:39:16.857057   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
E0401 18:40:39.902894   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-293078 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (6m13.745493147s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (374.67s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.41s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (74.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-293078 --control-plane -v=7 --alsologtostderr
E0401 18:43:52.856267   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 18:44:16.856371   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-293078 --control-plane -v=7 --alsologtostderr: (1m13.639675044s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-293078 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.53s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

                                                
                                    
TestJSONOutput/start/Command (97.41s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-252402 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-252402 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m37.406543615s)
--- PASS: TestJSONOutput/start/Command (97.41s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.81s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-252402 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.81s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-252402 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.70s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.5s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-252402 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-252402 --output=json --user=testUser: (7.499851277s)
--- PASS: TestJSONOutput/stop/Command (7.50s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-783835 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-783835 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (72.263553ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c560845d-b9b0-4dcd-8f29-3aee1a2fa039","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-783835] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a842b3db-3ab7-4388-8ae6-cc45e1391584","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18233"}}
	{"specversion":"1.0","id":"0c932d3a-6a2a-4653-bcfe-c98d3cb92ef1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c2b8bcfc-a3a3-4d65-a876-de7321a64c60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig"}}
	{"specversion":"1.0","id":"ca926c77-b413-4b01-946a-d2284d1c1b03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube"}}
	{"specversion":"1.0","id":"eec0cea6-2c9f-42b5-b946-d2fb68fa661f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d47d871c-a88b-4b18-9529-e1728249340b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4b210dcd-5ddd-440b-8dcf-488004d40297","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-783835" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-783835
--- PASS: TestErrorJSONOutput (0.20s)
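Each line of the stdout above is a CloudEvents envelope (specversion 1.0); the type field distinguishes setup steps, info messages, and the final error event. As an illustration only (jq is not part of the test harness), the failure could be pulled out of such a stream like this:

    out/minikube-linux-amd64 start -p json-output-error-783835 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type=="io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
    DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64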

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (96.11s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-799143 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-799143 --driver=kvm2  --container-runtime=crio: (47.114866591s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-801798 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-801798 --driver=kvm2  --container-runtime=crio: (46.076646235s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-799143
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-801798
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-801798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-801798
helpers_test.go:175: Cleaning up "first-799143" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-799143
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-799143: (1.019956838s)
--- PASS: TestMinikubeProfile (96.11s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (26.81s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-555834 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-555834 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.806463367s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.81s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-555834 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-555834 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
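The verification here is two-fold: ls /minikube-host shows the host share is actually populated, and mount | grep 9p confirms the share is the 9p filesystem created from the --mount-* flags used at start (msize 6543, port 46464, uid/gid 0). A hedged one-liner in the same spirit, checking that a 9p mount is the one backing /minikube-host (exact mount-option formatting varies by kernel, so treat it as a sketch):

    out/minikube-linux-amd64 -p mount-start-1-555834 ssh -- "mount | grep 9p | grep -q minikube-host" && echo 9p mount present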

                                                
                                    
TestMountStart/serial/StartWithMountSecond (32.15s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-568854 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0401 18:48:52.855584   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-568854 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (31.146860094s)
--- PASS: TestMountStart/serial/StartWithMountSecond (32.15s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.5s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-568854 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-568854 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.50s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.73s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-555834 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-568854 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-568854 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.36s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-568854
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-568854: (1.360119036s)
--- PASS: TestMountStart/serial/Stop (1.36s)

                                                
                                    
TestMountStart/serial/RestartStopped (25.66s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-568854
E0401 18:49:16.856583   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-568854: (24.662468211s)
--- PASS: TestMountStart/serial/RestartStopped (25.66s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-568854 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-568854 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (100.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-853477 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-853477 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m40.325088929s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (100.74s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-853477 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-853477 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-853477 -- rollout status deployment/busybox: (2.416770326s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-853477 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-853477 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-853477 -- exec busybox-7fdf7869d9-g2mfr -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-853477 -- exec busybox-7fdf7869d9-pdvlk -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-853477 -- exec busybox-7fdf7869d9-g2mfr -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-853477 -- exec busybox-7fdf7869d9-pdvlk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-853477 -- exec busybox-7fdf7869d9-g2mfr -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-853477 -- exec busybox-7fdf7869d9-pdvlk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.00s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-853477 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-853477 -- exec busybox-7fdf7869d9-g2mfr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-853477 -- exec busybox-7fdf7869d9-g2mfr -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-853477 -- exec busybox-7fdf7869d9-pdvlk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-853477 -- exec busybox-7fdf7869d9-pdvlk -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)

                                                
                                    
TestMultiNode/serial/AddNode (39.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-853477 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-853477 -v 3 --alsologtostderr: (38.630179029s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (39.20s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-853477 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 cp testdata/cp-test.txt multinode-853477:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 ssh -n multinode-853477 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 cp multinode-853477:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3069131938/001/cp-test_multinode-853477.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 ssh -n multinode-853477 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 cp multinode-853477:/home/docker/cp-test.txt multinode-853477-m02:/home/docker/cp-test_multinode-853477_multinode-853477-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 ssh -n multinode-853477 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 ssh -n multinode-853477-m02 "sudo cat /home/docker/cp-test_multinode-853477_multinode-853477-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 cp multinode-853477:/home/docker/cp-test.txt multinode-853477-m03:/home/docker/cp-test_multinode-853477_multinode-853477-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 ssh -n multinode-853477 "sudo cat /home/docker/cp-test.txt"
E0401 18:51:55.901067   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 ssh -n multinode-853477-m03 "sudo cat /home/docker/cp-test_multinode-853477_multinode-853477-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 cp testdata/cp-test.txt multinode-853477-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 ssh -n multinode-853477-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 cp multinode-853477-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3069131938/001/cp-test_multinode-853477-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 ssh -n multinode-853477-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 cp multinode-853477-m02:/home/docker/cp-test.txt multinode-853477:/home/docker/cp-test_multinode-853477-m02_multinode-853477.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 ssh -n multinode-853477-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 ssh -n multinode-853477 "sudo cat /home/docker/cp-test_multinode-853477-m02_multinode-853477.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 cp multinode-853477-m02:/home/docker/cp-test.txt multinode-853477-m03:/home/docker/cp-test_multinode-853477-m02_multinode-853477-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 ssh -n multinode-853477-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 ssh -n multinode-853477-m03 "sudo cat /home/docker/cp-test_multinode-853477-m02_multinode-853477-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 cp testdata/cp-test.txt multinode-853477-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 ssh -n multinode-853477-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 cp multinode-853477-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3069131938/001/cp-test_multinode-853477-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 ssh -n multinode-853477-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 cp multinode-853477-m03:/home/docker/cp-test.txt multinode-853477:/home/docker/cp-test_multinode-853477-m03_multinode-853477.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 ssh -n multinode-853477-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 ssh -n multinode-853477 "sudo cat /home/docker/cp-test_multinode-853477-m03_multinode-853477.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 cp multinode-853477-m03:/home/docker/cp-test.txt multinode-853477-m02:/home/docker/cp-test_multinode-853477-m03_multinode-853477-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 ssh -n multinode-853477-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 ssh -n multinode-853477-m02 "sudo cat /home/docker/cp-test_multinode-853477-m03_multinode-853477-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.48s)

                                                
                                    
TestMultiNode/serial/StopNode (2.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-853477 node stop m03: (1.630584466s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-853477 status: exit status 7 (436.446148ms)

                                                
                                                
-- stdout --
	multinode-853477
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-853477-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-853477-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-853477 status --alsologtostderr: exit status 7 (433.381478ms)

                                                
                                                
-- stdout --
	multinode-853477
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-853477-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-853477-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 18:52:02.826240   42458 out.go:291] Setting OutFile to fd 1 ...
	I0401 18:52:02.826360   42458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:52:02.826370   42458 out.go:304] Setting ErrFile to fd 2...
	I0401 18:52:02.826373   42458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 18:52:02.826577   42458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 18:52:02.826828   42458 out.go:298] Setting JSON to false
	I0401 18:52:02.826863   42458 mustload.go:65] Loading cluster: multinode-853477
	I0401 18:52:02.826966   42458 notify.go:220] Checking for updates...
	I0401 18:52:02.827398   42458 config.go:182] Loaded profile config "multinode-853477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 18:52:02.827418   42458 status.go:255] checking status of multinode-853477 ...
	I0401 18:52:02.827877   42458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:52:02.827919   42458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:52:02.845360   42458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44583
	I0401 18:52:02.845793   42458 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:52:02.847255   42458 main.go:141] libmachine: Using API Version  1
	I0401 18:52:02.847291   42458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:52:02.847652   42458 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:52:02.847855   42458 main.go:141] libmachine: (multinode-853477) Calling .GetState
	I0401 18:52:02.849422   42458 status.go:330] multinode-853477 host status = "Running" (err=<nil>)
	I0401 18:52:02.849436   42458 host.go:66] Checking if "multinode-853477" exists ...
	I0401 18:52:02.849715   42458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:52:02.849746   42458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:52:02.863818   42458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45985
	I0401 18:52:02.864163   42458 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:52:02.864600   42458 main.go:141] libmachine: Using API Version  1
	I0401 18:52:02.864623   42458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:52:02.864915   42458 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:52:02.865086   42458 main.go:141] libmachine: (multinode-853477) Calling .GetIP
	I0401 18:52:02.867481   42458 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:52:02.867799   42458 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:52:02.867821   42458 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:52:02.867946   42458 host.go:66] Checking if "multinode-853477" exists ...
	I0401 18:52:02.868217   42458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:52:02.868250   42458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:52:02.882426   42458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34581
	I0401 18:52:02.882730   42458 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:52:02.883132   42458 main.go:141] libmachine: Using API Version  1
	I0401 18:52:02.883153   42458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:52:02.883449   42458 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:52:02.883610   42458 main.go:141] libmachine: (multinode-853477) Calling .DriverName
	I0401 18:52:02.883786   42458 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:52:02.883815   42458 main.go:141] libmachine: (multinode-853477) Calling .GetSSHHostname
	I0401 18:52:02.886343   42458 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:52:02.886743   42458 main.go:141] libmachine: (multinode-853477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:6f:8b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:49:43 +0000 UTC Type:0 Mac:52:54:00:e9:6f:8b Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:multinode-853477 Clientid:01:52:54:00:e9:6f:8b}
	I0401 18:52:02.886771   42458 main.go:141] libmachine: (multinode-853477) DBG | domain multinode-853477 has defined IP address 192.168.39.161 and MAC address 52:54:00:e9:6f:8b in network mk-multinode-853477
	I0401 18:52:02.886882   42458 main.go:141] libmachine: (multinode-853477) Calling .GetSSHPort
	I0401 18:52:02.887033   42458 main.go:141] libmachine: (multinode-853477) Calling .GetSSHKeyPath
	I0401 18:52:02.887167   42458 main.go:141] libmachine: (multinode-853477) Calling .GetSSHUsername
	I0401 18:52:02.887296   42458 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/multinode-853477/id_rsa Username:docker}
	I0401 18:52:02.970031   42458 ssh_runner.go:195] Run: systemctl --version
	I0401 18:52:02.978443   42458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:52:02.994367   42458 kubeconfig.go:125] found "multinode-853477" server: "https://192.168.39.161:8443"
	I0401 18:52:02.994395   42458 api_server.go:166] Checking apiserver status ...
	I0401 18:52:02.994422   42458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 18:52:03.008092   42458 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup
	W0401 18:52:03.018219   42458 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1152/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0401 18:52:03.018259   42458 ssh_runner.go:195] Run: ls
	I0401 18:52:03.023802   42458 api_server.go:253] Checking apiserver healthz at https://192.168.39.161:8443/healthz ...
	I0401 18:52:03.028791   42458 api_server.go:279] https://192.168.39.161:8443/healthz returned 200:
	ok
	I0401 18:52:03.028808   42458 status.go:422] multinode-853477 apiserver status = Running (err=<nil>)
	I0401 18:52:03.028817   42458 status.go:257] multinode-853477 status: &{Name:multinode-853477 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:52:03.028831   42458 status.go:255] checking status of multinode-853477-m02 ...
	I0401 18:52:03.029103   42458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:52:03.029150   42458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:52:03.044107   42458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41523
	I0401 18:52:03.044532   42458 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:52:03.044963   42458 main.go:141] libmachine: Using API Version  1
	I0401 18:52:03.044980   42458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:52:03.045305   42458 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:52:03.045469   42458 main.go:141] libmachine: (multinode-853477-m02) Calling .GetState
	I0401 18:52:03.046866   42458 status.go:330] multinode-853477-m02 host status = "Running" (err=<nil>)
	I0401 18:52:03.046880   42458 host.go:66] Checking if "multinode-853477-m02" exists ...
	I0401 18:52:03.047219   42458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:52:03.047260   42458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:52:03.061413   42458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41051
	I0401 18:52:03.061792   42458 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:52:03.062180   42458 main.go:141] libmachine: Using API Version  1
	I0401 18:52:03.062213   42458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:52:03.062463   42458 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:52:03.062660   42458 main.go:141] libmachine: (multinode-853477-m02) Calling .GetIP
	I0401 18:52:03.065290   42458 main.go:141] libmachine: (multinode-853477-m02) DBG | domain multinode-853477-m02 has defined MAC address 52:54:00:15:ad:6b in network mk-multinode-853477
	I0401 18:52:03.065747   42458 main.go:141] libmachine: (multinode-853477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ad:6b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:50:43 +0000 UTC Type:0 Mac:52:54:00:15:ad:6b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-853477-m02 Clientid:01:52:54:00:15:ad:6b}
	I0401 18:52:03.065773   42458 main.go:141] libmachine: (multinode-853477-m02) DBG | domain multinode-853477-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:ad:6b in network mk-multinode-853477
	I0401 18:52:03.065932   42458 host.go:66] Checking if "multinode-853477-m02" exists ...
	I0401 18:52:03.066753   42458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:52:03.066802   42458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:52:03.081105   42458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36261
	I0401 18:52:03.081482   42458 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:52:03.081904   42458 main.go:141] libmachine: Using API Version  1
	I0401 18:52:03.081928   42458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:52:03.082178   42458 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:52:03.082320   42458 main.go:141] libmachine: (multinode-853477-m02) Calling .DriverName
	I0401 18:52:03.082450   42458 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 18:52:03.082466   42458 main.go:141] libmachine: (multinode-853477-m02) Calling .GetSSHHostname
	I0401 18:52:03.084723   42458 main.go:141] libmachine: (multinode-853477-m02) DBG | domain multinode-853477-m02 has defined MAC address 52:54:00:15:ad:6b in network mk-multinode-853477
	I0401 18:52:03.085194   42458 main.go:141] libmachine: (multinode-853477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ad:6b", ip: ""} in network mk-multinode-853477: {Iface:virbr1 ExpiryTime:2024-04-01 19:50:43 +0000 UTC Type:0 Mac:52:54:00:15:ad:6b Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-853477-m02 Clientid:01:52:54:00:15:ad:6b}
	I0401 18:52:03.085218   42458 main.go:141] libmachine: (multinode-853477-m02) DBG | domain multinode-853477-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:15:ad:6b in network mk-multinode-853477
	I0401 18:52:03.085386   42458 main.go:141] libmachine: (multinode-853477-m02) Calling .GetSSHPort
	I0401 18:52:03.085532   42458 main.go:141] libmachine: (multinode-853477-m02) Calling .GetSSHKeyPath
	I0401 18:52:03.085663   42458 main.go:141] libmachine: (multinode-853477-m02) Calling .GetSSHUsername
	I0401 18:52:03.085770   42458 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18233-10493/.minikube/machines/multinode-853477-m02/id_rsa Username:docker}
	I0401 18:52:03.171104   42458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 18:52:03.186850   42458 status.go:257] multinode-853477-m02 status: &{Name:multinode-853477-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0401 18:52:03.186880   42458 status.go:255] checking status of multinode-853477-m03 ...
	I0401 18:52:03.187165   42458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0401 18:52:03.187198   42458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0401 18:52:03.202912   42458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46367
	I0401 18:52:03.203364   42458 main.go:141] libmachine: () Calling .GetVersion
	I0401 18:52:03.203810   42458 main.go:141] libmachine: Using API Version  1
	I0401 18:52:03.203828   42458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0401 18:52:03.204161   42458 main.go:141] libmachine: () Calling .GetMachineName
	I0401 18:52:03.204364   42458 main.go:141] libmachine: (multinode-853477-m03) Calling .GetState
	I0401 18:52:03.205800   42458 status.go:330] multinode-853477-m03 host status = "Stopped" (err=<nil>)
	I0401 18:52:03.205815   42458 status.go:343] host is not running, skipping remaining checks
	I0401 18:52:03.205822   42458 status.go:257] multinode-853477-m03 status: &{Name:multinode-853477-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.50s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (29.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-853477 node start m03 -v=7 --alsologtostderr: (28.450904608s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.09s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-853477 node delete m03: (1.846381832s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.38s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (165.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-853477 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-853477 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m45.12241142s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-853477 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (165.69s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (43.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-853477
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-853477-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-853477-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (75.343561ms)

                                                
                                                
-- stdout --
	* [multinode-853477-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-853477-m02' is duplicated with machine name 'multinode-853477-m02' in profile 'multinode-853477'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-853477-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-853477-m03 --driver=kvm2  --container-runtime=crio: (41.994085399s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-853477
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-853477: exit status 80 (224.663674ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-853477 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-853477-m03 already exists in multinode-853477-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-853477-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.14s)

                                                
                                    
x
+
TestScheduledStopUnix (117.79s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-412128 --memory=2048 --driver=kvm2  --container-runtime=crio
E0401 19:08:35.901863   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
E0401 19:08:52.854914   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-412128 --memory=2048 --driver=kvm2  --container-runtime=crio: (46.051235619s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-412128 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-412128 -n scheduled-stop-412128
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-412128 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-412128 --cancel-scheduled
E0401 19:09:16.856787   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/functional-784295/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-412128 -n scheduled-stop-412128
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-412128
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-412128 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-412128
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-412128: exit status 7 (74.519066ms)

                                                
                                                
-- stdout --
	scheduled-stop-412128
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-412128 -n scheduled-stop-412128
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-412128 -n scheduled-stop-412128: exit status 7 (71.563922ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-412128" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-412128
--- PASS: TestScheduledStopUnix (117.79s)
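A minimal sketch of the scheduled-stop workflow this test exercises, using only flags that appear in the run above; the profile name and durations are simply the ones this run happened to use:

	$ minikube stop -p scheduled-stop-412128 --schedule 5m                          # arm a stop 5 minutes out
	$ minikube status --format={{.TimeToStop}} -p scheduled-stop-412128             # confirm the timer is set
	$ minikube stop -p scheduled-stop-412128 --cancel-scheduled                     # cancel the pending stop
	$ minikube stop -p scheduled-stop-412128 --schedule 15s                         # re-arm with a short timer and let it fire
	$ minikube status -p scheduled-stop-412128                                      # returns exit status 7 once the host is Stopped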

                                                
                                    
x
+
TestRunningBinaryUpgrade (198.88s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2944393469 start -p running-upgrade-349166 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2944393469 start -p running-upgrade-349166 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m44.290914242s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-349166 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-349166 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m32.820500085s)
helpers_test.go:175: Cleaning up "running-upgrade-349166" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-349166
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-349166: (1.19917559s)
--- PASS: TestRunningBinaryUpgrade (198.88s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-249249 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-249249 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (94.567924ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-249249] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
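As the MK_USAGE error above indicates, --kubernetes-version and --no-kubernetes are mutually exclusive. A sketch of the two valid invocations, assembled only from commands this suite actually runs against the same profile:

	$ minikube config unset kubernetes-version                                                        # clear any global default first, if one is set
	$ minikube start -p NoKubernetes-249249 --no-kubernetes --driver=kvm2 --container-runtime=crio    # start without Kubernetes
	$ minikube start -p NoKubernetes-249249 --driver=kvm2 --container-runtime=crio                    # or start with the default Kubernetes version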

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (125.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-249249 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-249249 --driver=kvm2  --container-runtime=crio: (2m4.849980944s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-249249 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (125.11s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (13.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-249249 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-249249 --no-kubernetes --driver=kvm2  --container-runtime=crio: (11.707891532s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-249249 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-249249 status -o json: exit status 2 (258.088909ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-249249","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-249249
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-249249: (1.312646597s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (13.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (30.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-249249 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-249249 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.927356343s)
--- PASS: TestNoKubernetes/serial/Start (30.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-408543 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-408543 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (109.825987ms)

                                                
                                                
-- stdout --
	* [false-408543] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 19:12:47.812364   50746 out.go:291] Setting OutFile to fd 1 ...
	I0401 19:12:47.812896   50746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:12:47.812911   50746 out.go:304] Setting ErrFile to fd 2...
	I0401 19:12:47.812918   50746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0401 19:12:47.813358   50746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18233-10493/.minikube/bin
	I0401 19:12:47.814430   50746 out.go:298] Setting JSON to false
	I0401 19:12:47.815383   50746 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6920,"bootTime":1711991848,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1054-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:12:47.815449   50746 start.go:139] virtualization: kvm guest
	I0401 19:12:47.817397   50746 out.go:177] * [false-408543] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 19:12:47.819426   50746 out.go:177]   - MINIKUBE_LOCATION=18233
	I0401 19:12:47.819437   50746 notify.go:220] Checking for updates...
	I0401 19:12:47.821221   50746 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:12:47.822723   50746 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18233-10493/kubeconfig
	I0401 19:12:47.823948   50746 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18233-10493/.minikube
	I0401 19:12:47.825269   50746 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 19:12:47.826731   50746 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 19:12:47.828601   50746 config.go:182] Loaded profile config "NoKubernetes-249249": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0401 19:12:47.828687   50746 config.go:182] Loaded profile config "cert-expiration-385547": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:12:47.828789   50746 config.go:182] Loaded profile config "cert-options-444257": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0401 19:12:47.828898   50746 driver.go:392] Setting default libvirt URI to qemu:///system
	I0401 19:12:47.863423   50746 out.go:177] * Using the kvm2 driver based on user configuration
	I0401 19:12:47.864695   50746 start.go:297] selected driver: kvm2
	I0401 19:12:47.864710   50746 start.go:901] validating driver "kvm2" against <nil>
	I0401 19:12:47.864739   50746 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 19:12:47.866775   50746 out.go:177] 
	W0401 19:12:47.868024   50746 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0401 19:12:47.869247   50746 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-408543 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-408543

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-408543

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-408543

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-408543

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-408543

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-408543

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-408543

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-408543

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-408543

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-408543

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-408543

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-408543" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-408543" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 01 Apr 2024 19:10:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.39.26:8443
  name: cert-expiration-385547
contexts:
- context:
    cluster: cert-expiration-385547
    extensions:
    - extension:
        last-update: Mon, 01 Apr 2024 19:10:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: cert-expiration-385547
  name: cert-expiration-385547
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-385547
  user:
    client-certificate: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/cert-expiration-385547/client.crt
    client-key: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/cert-expiration-385547/client.key


                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-408543

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-408543"

                                                
                                                
----------------------- debugLogs end: false-408543 [took: 2.895913603s] --------------------------------
helpers_test.go:175: Cleaning up "false-408543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-408543
--- PASS: TestNetworkPlugins/group/false (3.14s)
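The MK_USAGE failure above is the expected outcome: with the crio runtime a CNI must be selected, so --cni=false is rejected before any VM is created. For contrast, a start that satisfies the requirement, copied from the kindnet group later in this report (profile name and memory are just the values used there):

	$ minikube start -p kindnet-408543 --memory=3072 --cni=kindnet --driver=kvm2 --container-runtime=crio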

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-249249 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-249249 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.844947ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.83s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-249249
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-249249: (1.624859422s)
--- PASS: TestNoKubernetes/serial/Stop (1.62s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (44.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-249249 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-249249 --driver=kvm2  --container-runtime=crio: (44.920555552s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (44.92s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-249249 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-249249 "sudo systemctl is-active --quiet service kubelet": exit status 1 (203.915237ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.48s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.48s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (171.75s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2250846826 start -p stopped-upgrade-246129 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0401 19:13:52.854305   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/addons-881427/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2250846826 start -p stopped-upgrade-246129 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m49.445668113s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2250846826 -p stopped-upgrade-246129 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2250846826 -p stopped-upgrade-246129 stop: (2.147689983s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-246129 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-246129 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m0.151298879s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (171.75s)
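The upgrade path validated above is: create and stop a cluster with an older released binary, then restart the same profile with the binary under test. A sketch using the exact commands from this run (the /tmp path is the v1.26.0 binary this job downloaded; it will differ on other machines):

	$ /tmp/minikube-v1.26.0.2250846826 start -p stopped-upgrade-246129 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	$ /tmp/minikube-v1.26.0.2250846826 -p stopped-upgrade-246129 stop
	$ out/minikube-linux-amd64 start -p stopped-upgrade-246129 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
	$ out/minikube-linux-amd64 logs -p stopped-upgrade-246129          # the MinikubeLogs sub-test below checks logs still work after the upgrade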

                                                
                                    
x
+
TestPause/serial/Start (98.24s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-208693 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-208693 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m38.238303293s)
--- PASS: TestPause/serial/Start (98.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (104.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-408543 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-408543 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m44.642041701s)
--- PASS: TestNetworkPlugins/group/auto/Start (104.64s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-246129
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (93.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-408543 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-408543 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m33.363218727s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (93.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-408543 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-408543 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fnphs" [6273db76-a854-40d3-b7a0-5f06d87563c9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fnphs" [6273db76-a854-40d3-b7a0-5f06d87563c9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.00470209s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-408543 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-408543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-408543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
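
The DNS, Localhost, and HairPin checks above (repeated for each CNI group below) all run the same three probes inside the netcat deployment. A minimal manual equivalent, using the exact commands from the log:

    # in-cluster DNS resolution works through the configured CNI
    kubectl --context auto-408543 exec deployment/netcat -- nslookup kubernetes.default
    # the pod can reach a port on its own localhost
    kubectl --context auto-408543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin traffic: the pod reaches itself through its own service name
    kubectl --context auto-408543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"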

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (92.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-408543 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-408543 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m32.672067393s)
--- PASS: TestNetworkPlugins/group/calico/Start (92.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-mgtg8" [984b2d03-7836-43b0-b60b-237f46455cb7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.007876111s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
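
The ControllerPod check polls for pods matching the CNI's label via the test helpers. An approximate kubectl equivalent (an assumption for illustration, not the command the test itself runs) would be:

    kubectl --context kindnet-408543 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m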

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-408543 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-408543 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6z5g5" [170459ac-b68e-4f27-9937-4046d84b3633] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6z5g5" [170459ac-b68e-4f27-9937-4046d84b3633] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004256815s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (99.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-408543 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-408543 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m39.660276636s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (99.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-408543 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-408543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-408543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (129.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-408543 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-408543 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m9.053980884s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (129.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-jt7wp" [246c9f1f-1912-423e-90fa-cd92ad8d08d5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006955184s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-408543 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (13.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-408543 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9nhk7" [a236249e-00f5-45f9-8592-1b0041fbfd49] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9nhk7" [a236249e-00f5-45f9-8592-1b0041fbfd49] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.004411886s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-408543 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-408543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-408543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-408543 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-408543 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pb9qq" [3b78d323-247e-46c1-9e63-0ebe6af60792] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-pb9qq" [3b78d323-247e-46c1-9e63-0ebe6af60792] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.005399802s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-408543 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-408543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-408543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (80.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-408543 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-408543 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m20.522101287s)
--- PASS: TestNetworkPlugins/group/flannel/Start (80.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (127.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-408543 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-408543 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (2m7.867629688s)
--- PASS: TestNetworkPlugins/group/bridge/Start (127.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-408543 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-408543 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gjgd7" [f577ae8c-4daf-427a-a50a-23cd7b04d16b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gjgd7" [f577ae8c-4daf-427a-a50a-23cd7b04d16b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.00864715s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-408543 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-408543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-408543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (113.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-472858 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-472858 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.0: (1m53.349626155s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (113.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-hpffk" [289f014c-7a22-48ac-b998-f8c1466fe745] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004365728s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-408543 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-408543 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xpjmk" [355e466f-4258-48e2-8037-75ec80f2ff87] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xpjmk" [355e466f-4258-48e2-8037-75ec80f2ff87] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.011803449s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-408543 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-408543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-408543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (107.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-882095 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-882095 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (1m47.573035047s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (107.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-408543 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-408543 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8vnsl" [4318c661-09b6-48b6-abb5-4efb794eef69] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8vnsl" [4318c661-09b6-48b6-abb5-4efb794eef69] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.006915912s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-408543 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-408543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-408543 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)
E0401 19:51:44.321378   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt: no such file or directory

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (103.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-734648 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
E0401 19:23:14.750564   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: no such file or directory
E0401 19:23:14.755839   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: no such file or directory
E0401 19:23:14.766148   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: no such file or directory
E0401 19:23:14.787002   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: no such file or directory
E0401 19:23:14.827325   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: no such file or directory
E0401 19:23:14.907725   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: no such file or directory
E0401 19:23:15.068155   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: no such file or directory
E0401 19:23:15.389153   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: no such file or directory
E0401 19:23:16.030130   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: no such file or directory
E0401 19:23:17.310941   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: no such file or directory
E0401 19:23:19.804624   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/auto-408543/client.crt: no such file or directory
E0401 19:23:19.871864   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-734648 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (1m43.462802938s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (103.46s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-472858 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c0a99f99-591a-4893-a177-4d81ae532032] Pending
helpers_test.go:344: "busybox" [c0a99f99-591a-4893-a177-4d81ae532032] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c0a99f99-591a-4893-a177-4d81ae532032] Running
E0401 19:23:24.992900   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/kindnet-408543/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004450591s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-472858 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.33s)
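
The DeployApp step for each profile (here and in the embed-certs and default-k8s-diff-port blocks below) is the same smoke test: apply the busybox manifest from testdata, wait for the pod to reach Running, then exec a trivial command in it. Re-run by hand with the commands from the log:

    kubectl --context no-preload-472858 create -f testdata/busybox.yaml
    # once the pod is Running, confirm exec works inside it
    kubectl --context no-preload-472858 exec busybox -- /bin/sh -c "ulimit -n"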

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-472858 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-472858 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-882095 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f41dc4bb-a6d8-487b-91bf-140d1c117a5a] Pending
helpers_test.go:344: "busybox" [f41dc4bb-a6d8-487b-91bf-140d1c117a5a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f41dc4bb-a6d8-487b-91bf-140d1c117a5a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.005433506s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-882095 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.31s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-882095 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-882095 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.210361428s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-882095 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-734648 create -f testdata/busybox.yaml
E0401 19:24:55.737315   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/calico-408543/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9cf51e32-64e1-478b-ae56-366f7e2b9489] Pending
helpers_test.go:344: "busybox" [9cf51e32-64e1-478b-ae56-366f7e2b9489] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9cf51e32-64e1-478b-ae56-366f7e2b9489] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004735614s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-734648 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-734648 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-734648 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (769.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-472858 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-472858 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.0: (12m49.495797949s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-472858 -n no-preload-472858
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (769.78s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (574.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-882095 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
E0401 19:26:46.881120   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt: no such file or directory
E0401 19:26:49.442016   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt: no such file or directory
E0401 19:26:54.562718   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt: no such file or directory
E0401 19:27:04.803225   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/flannel-408543/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-882095 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (9m34.612366041s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-882095 -n embed-certs-882095
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (574.88s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (546.59s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-734648 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
E0401 19:27:45.216355   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/client.crt: no such file or directory
E0401 19:27:45.221592   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/client.crt: no such file or directory
E0401 19:27:45.231808   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/client.crt: no such file or directory
E0401 19:27:45.252069   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/client.crt: no such file or directory
E0401 19:27:45.292317   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/client.crt: no such file or directory
E0401 19:27:45.372610   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/client.crt: no such file or directory
E0401 19:27:45.533186   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/client.crt: no such file or directory
E0401 19:27:45.854005   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/client.crt: no such file or directory
E0401 19:27:46.495007   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/client.crt: no such file or directory
E0401 19:27:47.775653   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-734648 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (9m6.308404763s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-734648 -n default-k8s-diff-port-734648
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (546.59s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (4.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-163608 --alsologtostderr -v=3
E0401 19:27:50.019056   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/custom-flannel-408543/client.crt: no such file or directory
E0401 19:27:50.336758   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-163608 --alsologtostderr -v=3: (4.317179536s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.32s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163608 -n old-k8s-version-163608
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163608 -n old-k8s-version-163608: exit status 7 (73.784553ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-163608 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
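
EnableAddonAfterStop confirms that addons can still be toggled on a stopped profile: the status command is expected to exit with status 7 (host Stopped), and addons enable must succeed anyway. The same sequence by hand, using the commands from the log:

    # exit status 7 with "Stopped" output is expected here and treated as OK
    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-163608 -n old-k8s-version-163608
    # enabling the dashboard addon must still work while the profile is stopped
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-163608 --images=MetricsScraper=registry.k8s.io/echoserver:1.4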

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (58.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-705837 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-705837 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.0: (58.407826004s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (58.41s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-705837 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-705837 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.163069503s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-705837 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-705837 --alsologtostderr -v=3: (7.392933261s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.39s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-705837 -n newest-cni-705837
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-705837 -n newest-cni-705837: exit status 7 (74.783938ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-705837 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (37.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-705837 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.0
E0401 19:52:45.217261   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/bridge-408543/client.crt: no such file or directory
E0401 19:52:59.323047   17751 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/auto-408543/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-705837 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.0: (37.12869818s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-705837 -n newest-cni-705837
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.41s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-705837 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.7s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-705837 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-705837 -n newest-cni-705837
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-705837 -n newest-cni-705837: exit status 2 (251.575064ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-705837 -n newest-cni-705837
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-705837 -n newest-cni-705837: exit status 2 (255.72085ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-705837 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-705837 -n newest-cni-705837
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-705837 -n newest-cni-705837
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.70s)
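For reference, the pause verification above can be reproduced by hand with the same commands the test issues; exit status 2 from "minikube status" is expected while components report Paused/Stopped and is treated as non-fatal ("may be ok") by the test:
	out/minikube-linux-amd64 pause -p newest-cni-705837 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-705837 -n newest-cni-705837   # prints Paused, exits 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-705837 -n newest-cni-705837     # prints Stopped, exits 2
	out/minikube-linux-amd64 unpause -p newest-cni-705837 --alsologtostderr -v=1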

                                                
                                    

Test skip (39/319)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.29.3/cached-images 0
15 TestDownloadOnly/v1.29.3/binaries 0
16 TestDownloadOnly/v1.29.3/kubectl 0
23 TestDownloadOnly/v1.30.0-rc.0/cached-images 0
24 TestDownloadOnly/v1.30.0-rc.0/binaries 0
25 TestDownloadOnly/v1.30.0-rc.0/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.02
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
265 TestNetworkPlugins/group/kubenet 3.13
273 TestNetworkPlugins/group/cilium 3.35
286 TestStartStop/group/disable-driver-mounts 0.16
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-rc.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-408543 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-408543

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-408543

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-408543

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-408543

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-408543

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-408543

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-408543

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-408543

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-408543

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-408543

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-408543

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-408543" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-408543" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt
extensions:
- extension:
last-update: Mon, 01 Apr 2024 19:10:53 UTC
provider: minikube.sigs.k8s.io
version: v1.33.0-beta.0
name: cluster_info
server: https://192.168.39.26:8443
name: cert-expiration-385547
contexts:
- context:
cluster: cert-expiration-385547
extensions:
- extension:
last-update: Mon, 01 Apr 2024 19:10:53 UTC
provider: minikube.sigs.k8s.io
version: v1.33.0-beta.0
name: context_info
namespace: default
user: cert-expiration-385547
name: cert-expiration-385547
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-385547
user:
client-certificate: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/cert-expiration-385547/client.crt
client-key: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/cert-expiration-385547/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-408543

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-408543"

                                                
                                                
----------------------- debugLogs end: kubenet-408543 [took: 2.987658719s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-408543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-408543
--- SKIP: TestNetworkPlugins/group/kubenet (3.13s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-408543 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-408543

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-408543

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-408543

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-408543

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-408543

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-408543

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-408543

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-408543

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-408543

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-408543

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-408543

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-408543" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-408543

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-408543

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-408543

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-408543

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-408543" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-408543" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/18233-10493/.minikube/ca.crt
extensions:
- extension:
last-update: Mon, 01 Apr 2024 19:10:53 UTC
provider: minikube.sigs.k8s.io
version: v1.33.0-beta.0
name: cluster_info
server: https://192.168.39.26:8443
name: cert-expiration-385547
contexts:
- context:
cluster: cert-expiration-385547
extensions:
- extension:
last-update: Mon, 01 Apr 2024 19:10:53 UTC
provider: minikube.sigs.k8s.io
version: v1.33.0-beta.0
name: context_info
namespace: default
user: cert-expiration-385547
name: cert-expiration-385547
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-385547
user:
client-certificate: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/cert-expiration-385547/client.crt
client-key: /home/jenkins/minikube-integration/18233-10493/.minikube/profiles/cert-expiration-385547/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-408543

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

>>> host: cri-docker daemon config:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

>>> host: cri-dockerd version:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

>>> host: containerd daemon status:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

>>> host: containerd daemon config:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

>>> host: containerd config dump:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

>>> host: crio daemon status:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

>>> host: crio daemon config:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

>>> host: /etc/crio:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

>>> host: crio config:
* Profile "cilium-408543" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-408543"

----------------------- debugLogs end: cilium-408543 [took: 3.213582843s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-408543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-408543
--- SKIP: TestNetworkPlugins/group/cilium (3.35s)

x
+
TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-580301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-580301
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
